Re: Current mainline git (24e700e291d52bd2) hangs when building e.g. perf
From: Andy Lutomirski
Date: Sun Sep 10 2017 - 00:42:54 EST
On Sat, Sep 9, 2017 at 12:37 PM, Borislav Petkov <bp@xxxxxxxxx> wrote:
> On Sat, Sep 09, 2017 at 12:28:30PM -0700, Andy Lutomirski wrote:
>> I propose the following fix. If PCID is on, then, in
>> enter_lazy_tlb(), we switch to init_mm with the no-flush flag set.
>> (And we give init_mm its own dedicated ASID to keep it simple and fast
>> -- no need to use the LRU ASID mapping to assign one dynamically.) We
>> clear the bit in mm_cpumask. That is, we more or less just skip the
>> whole lazy TLB optimization and rely on PCID CPUs having reasonably
>> fast CR3 writes. No extra IPIs. I suppose I need to benchmark this.
>> It will certainly slow down workloads that rapidly toggle between a
>> user thread and a kernel thread because it forces serialization on
>> each mm switch, but maybe that's not so bad.
>
> Sounds ok so far.
>
>> If PCID is off, then we leave the old CR3 value when we go lazy, and
>> we also leave the flag in mm_cpumask set. When a flush is requested,
>> we send out the IPI and switch to init_mm (and flush because we have
>> no choice). IOW, the no-PCID behavior goes back to what it used to
>> be.
>
> Ok, question: why can't we load the new CR3 value too, immediately? Or
> are we saying, we might get to return to the same CR3 we had before we
> were lazy so we won't need to do an unnecessary CR3 write with the same
> value. A microoptimization, if you will.
It is indeed a microoptimization, but it's a microoptimization that
we've had in the kernel for a long, long time.
But it may be an ill-advised microoptimization, or at least one that has
historically been poorly implemented. It mostly affects
workloads that have a process on an otherwise idle CPU that frequently
sleeps for very short times. With the optimization, we avoid two TLB
flushes and two serializing instructions every time we sleep.
Historically, we got a bunch of useless IPIs, too, depending on the
workload.
The problem is that the implementation, which lives mostly in
kernel/sched/core.c, involves some extra reference counting, and NUMA
workloads with many cores all running the same mm pay a *huge* cost for
it, since all the CPUs end up hammering the same refcount. And this
refcount is (I think) basically
pointless on x86 and maybe on most architectures.
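For reference, the pattern I'm complaining about looks roughly like this
(paraphrased from memory, not the literal kernel/sched/core.c source):

	struct mm_struct *oldmm = prev->active_mm;

	if (!next->mm) {			/* switching to a kernel thread */
		next->active_mm = oldmm;
		mmgrab(oldmm);			/* atomic inc of mm->mm_count */
		enter_lazy_tlb(oldmm, next);
	} else {
		switch_mm_irqs_off(oldmm, next->mm, next);
	}

	if (!prev->mm) {			/* prev was a kernel thread */
		prev->active_mm = NULL;
		rq->prev_mm = oldmm;		/* mmdrop()ed in finish_task_switch() */
	}

Every trip into and out of a kernel thread does an atomic RMW on
mm_count, and on a big NUMA box with lots of CPUs sharing one mm that
cacheline bounces all over the machine.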
PeterZ and Ingo, would you be okay with adding a define so arches can
opt out of the task_struct::active_mm field entirely? That is, with
the option set, task_struct wouldn't have an active_mm field, the core
wouldn't call mmgrab and mmdrop, and the arch would be responsible for
that bookkeeping instead? x86, and presumably all arches without
cross-core TLB invalidation, would probably prefer to just shoot down the
old mm entirely in __mmput() rather than trying to figure out when to
finish freeing old mms. After all, exit_mmap() is going to send an
IPI regardless, so I see no reason to have the scheduler core pin an
old dead mm just because some random kernel thread's active_mm field
points to it.
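Concretely, I'm imagining something along these lines -- the config name
is made up, purely to illustrate the shape of the opt-out:

	/* include/linux/sched.h -- hypothetical sketch */
	struct task_struct {
		...
		struct mm_struct	*mm;
	#ifndef CONFIG_ARCH_NO_ACTIVE_MM	/* invented name */
		struct mm_struct	*active_mm;
	#endif
		...
	};

	/* kernel/sched/core.c, context_switch() -- hypothetical sketch */
	if (!next->mm) {
	#ifndef CONFIG_ARCH_NO_ACTIVE_MM
		next->active_mm = oldmm;
		mmgrab(oldmm);			/* the refcount we'd like to skip */
	#endif
		enter_lazy_tlb(oldmm, next);	/* arch does its own tracking */
	}

With the option set, x86 would track in its own per-cpu state which mm's
page tables are currently loaded, and __mmput()/exit_mmap() would make
sure no lazy CPU is still using them before the page tables are freed.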
IOW, if I'm going to reintroduce something like what the old lazy mode
did on x86, I'd rather do it right.
--Andy