Re: [PATCH 2/3] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range

From: Alex Shi
Date: Wed May 02 2012 - 07:38:57 EST


On 05/02/2012 05:38 PM, Borislav Petkov wrote:

> On Wed, May 02, 2012 at 05:24:09PM +0800, Alex Shi wrote:
>> For some scenarios, the above equation can be modified as:
>> (512 - X) * 100ns (assumed TLB refill cost) = X * 140ns (assumed invlpg cost)
>>
>> When the thread number is less than the number of CPUs, the balance
>> point can go up to 1/2 of the TLB entries.
>>
>> When the thread number equals the CPU number (with HT), the balance
>> point is 1/16 of the TLB entries on our SNB-EP machine and 1/32 on the
>> NHM-EP machine. So FLUSHALL_BAR needs to change to 32.
>
> Are you saying you want to have this setting per family?


Setting it according to CPU type would be more precise, but it looks
ugly. I am wondering whether it is worth doing. Maybe a conservative
selection is acceptable?

>

> Also, have you run your patches with other benchmarks beside your
> microbenchmark, say kernbench, SPEC<something>, i.e. some other
> multithreaded benchmark touching shared memory? Are you seeing any
> improvement there?


I tested OLTP reads and SPECjbb2005 with OpenJDK. They should not do
much flush_tlb_range calling, so there was no clear improvement.
Do you know of benchmarks that trigger flush_tlb_range often enough?

>
>> When the thread number is bigger than the CPU number, context switches
>> eat all the improvement; the memory access latency is the same as on an
>> unpatched kernel.
>
> Also, how do you know in the kernel that the thread number is the number
> of all threads touching this shared mmapped region - there could be
> unrelated threads doing something else.


I believe we don't need to know this; a much larger thread count just
weakens and masks the improvement. When the thread number goes down, the
performance gain appears. So we don't need to care about this.

Any more comments for this patchset?

>
> Thanks.
>

