Re: [linus:master] [x86/mm/tlb] 7e33001b8b: will-it-scale.per_thread_ops 20.7% improvement

From: Linus Torvalds
Date: Sat Nov 30 2024 - 12:55:09 EST


On Sat, 30 Nov 2024 at 09:31, Rik van Riel <riel@xxxxxxxxxxx> wrote:
>
> 1) Stop using the mm_cpumask altogether on x86

I think you would still want it as a "this is the upper bound" thing -
exactly like your lazy code effectively does now.

It's not giving some precise "these are the CPUs that have TLB
contents", but instead just a "these CPUs *might* have TLB contents".

But that's a *big* win for any single-threaded case: not having to
walk over potentially hundreds of CPUs when that thing has only ever
actually run on one or two cores.

Because a lot of short-lived processes only ever live on a single CPU.
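
Just to make the "upper bound" point concrete, the kind of check that
mask makes cheap is roughly this - a sketch only, flush_can_stay_local()
is a made-up name, and the caller is assumed to have preemption
disabled:

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/*
 * Hypothetical helper: if the mm has only ever run on the current CPU,
 * mm_cpumask() has weight 1 and the flush never needs to look at the
 * other few hundred CPUs at all.
 */
static bool flush_can_stay_local(struct mm_struct *mm)
{
        return cpumask_weight(mm_cpumask(mm)) == 1 &&
               cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
}

Even if the task migrated once or twice, a weight of two or three is
still nothing compared to nr_cpu_ids.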

The benchmarks you are optimizing for - as well as the ones that regress - are

 (a) made-up microbenchmark loads

 (b) run with ridiculously many threads

and I think you should take some of what they say with a big pinch of salt.

Those "20% difference" numbers aren't actually *real*, is what I'm saying.

> 2) Instead, at context switch time just update
> per_cpu variables like cpu_tlbstate.loaded_mm
> and friends

See above. I think you'll still want to limit things for the actual
real-world situation of "look, ma, I'm a single-threaded compiler".
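
I.e. something like this on the switch-in path - a rough sketch that
reuses the existing x86 cpu_tlbstate.loaded_mm per-CPU state;
example_switch_mm() is a made-up name, not the real switch_mm_irqs_off():

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>       /* cpu_tlbstate on x86 */

static void example_switch_mm(struct mm_struct *prev, struct mm_struct *next)
{
        /* Cheap per-CPU bookkeeping, no cross-CPU atomics. */
        this_cpu_write(cpu_tlbstate.loaded_mm, next);

        /*
         * Still record "this mm has run here", so a later flush can walk
         * mm_cpumask(next) instead of every online CPU.  For the
         * single-threaded case this stays at one or two set bits.
         * Test before set to avoid dirtying a shared cacheline on every
         * context switch.
         */
        if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next)))
                cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
}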

> 3) At (much rarer) TLB flush time:
> - Iterate over all CPUs

Change this to "iterate over mm_cpumask", and I think it will work a
whole lot better.

Because yes, clearly with just the *pure* lazy mm_cpumask you won some
at scheduling time, but you lost a *lot* by forcing pointless IPIs at
CPUs whose mask bits were simply stale.
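
So the flush side would look roughly like this - again only a sketch,
with example_flush_tlb_mm() and flush_one_tlb() as made-up names, and
assuming (like the current lazy code) that a CPU whose loaded_mm is
something else will deal with its own stale entries before this mm can
become active there again:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/mm_types.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>       /* cpu_tlbstate on x86 */

struct flush_info {
        struct mm_struct *mm;
};

/* Runs on each targeted CPU via IPI; a real version would flush here. */
static void flush_one_tlb(void *info)
{
}

static void example_flush_tlb_mm(struct mm_struct *mm)
{
        struct flush_info info = { .mm = mm };
        cpumask_var_t targets;
        unsigned int cpu;

        if (!zalloc_cpumask_var(&targets, GFP_ATOMIC)) {
                /* A real version would fall back to IPIing the superset. */
                return;
        }

        /*
         * mm_cpumask(mm) is the "might have TLB contents" superset:
         * only walk those CPUs, not all of them.
         */
        for_each_cpu(cpu, mm_cpumask(mm)) {
                /*
                 * Skip CPUs whose per-CPU state says another mm is
                 * loaded - those are exactly the pointless stale IPIs.
                 */
                if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == mm)
                        cpumask_set_cpu(cpu, targets);
        }

        /* IPI only the CPUs that can actually have live entries. */
        on_each_cpu_mask(targets, flush_one_tlb, &info, true);

        free_cpumask_var(targets);
}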

Linus