Re: [PATCH v2 00/10] PCID and improved laziness

From: Andy Lutomirski
Date: Mon Jun 19 2017 - 00:47:23 EST

On Sun, Jun 18, 2017 at 2:29 PM, Levin, Alexander (Sasha Levin)
<alexander.levin@xxxxxxxxxxx> wrote:
> On Tue, Jun 13, 2017 at 09:56:18PM -0700, Andy Lutomirski wrote:
>>There are three performance benefits here:
>>1. TLB flushing is slow. (I.e. the flush itself takes a while.)
>> This avoids many of them when switching tasks by using PCID. In
>> a stupid little benchmark I did, it saves about 100ns on my laptop
>> per context switch. I'll try to improve that benchmark.
>>2. Mms that have been used recently on a given CPU might get to keep
>> their TLB entries alive across process switches with this patch
>> set. TLB fills are pretty fast on modern CPUs, but they're even
>> faster when they don't happen.
>>3. Lazy TLB is way better. We used to do two stupid things when we
>> ran kernel threads: we'd send IPIs to flush user contexts on their
>> CPUs and then we'd write to CR3 for no particular reason as an excuse
>> to stop further IPIs. With this patch, we do neither.
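
As a rough illustration of the PCID mechanics behind point 1: with
CR4.PCIDE set, the low 12 bits of CR3 carry the PCID and bit 63 on a
MOV-to-CR3 suppresses the flush of that PCID's entries (per the SDM
layout). A userspace sketch -- the helper name and constants here are
illustrative, not the kernel's actual code:

```c
#include <stdint.h>

/* Illustrative constants; the bit positions follow the SDM,
 * the macro names are made up for this sketch. */
#define CR3_NOFLUSH   (1ULL << 63)  /* don't flush this PCID's TLB entries */
#define CR3_PCID_MASK 0xFFFULL      /* CR3 bits 11:0 hold the PCID */

/* Build the value that would be written to CR3 when switching to a
 * page-table root at pgd_pa, tagged with the given PCID. */
static uint64_t build_cr3(uint64_t pgd_pa, uint16_t pcid, int noflush)
{
    uint64_t cr3 = (pgd_pa & ~CR3_PCID_MASK) | (pcid & CR3_PCID_MASK);

    if (noflush)
        cr3 |= CR3_NOFLUSH;  /* reuse the PCID's surviving TLB entries */
    return cr3;
}
```

Switching back to a recently-used mm with noflush set is what lets the
old TLB entries stay alive across the context switch.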
>>This will, in general, perform suboptimally if paravirt TLB flushing
>>is in use (currently just Xen, I think, but Hyper-V is in the works).
>>The code is structured so we could fix it in one of two ways: we
>>could take a spinlock when touching the percpu state so we can update
>>it remotely after a paravirt flush, or we could be more careful about
>>exactly how we access the state and use cmpxchg16b to do atomic
>>remote updates. (On SMP systems without cmpxchg16b, we'd just skip
>>the optimization entirely.)
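
For concreteness, the spinlock variant could look roughly like this
(a userspace C sketch with made-up field names, using a C11 atomic_flag
as a stand-in for a kernel spinlock -- not the actual percpu layout):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical per-CPU TLB state; field names are illustrative. */
struct tlb_state {
    atomic_flag lock;   /* stand-in for a kernel spinlock */
    uint64_t ctx_id;    /* which mm context is loaded on this CPU */
    uint64_t tlb_gen;   /* flush generation this CPU has seen */
};

static void state_lock(struct tlb_state *ts)
{
    while (atomic_flag_test_and_set_explicit(&ts->lock,
                                             memory_order_acquire))
        ;  /* spin */
}

static void state_unlock(struct tlb_state *ts)
{
    atomic_flag_clear_explicit(&ts->lock, memory_order_release);
}

/* Local path: the owning CPU records a flush it performed itself. */
static void note_local_flush(struct tlb_state *ts, uint64_t gen)
{
    state_lock(ts);
    ts->tlb_gen = gen;
    state_unlock(ts);
}

/* Remote path: after a paravirt flush, another CPU can update this
 * CPU's state safely because both sides take the same lock; stale
 * (older) generations are ignored. */
static void note_remote_flush(struct tlb_state *ts, uint64_t gen)
{
    state_lock(ts);
    if (gen > ts->tlb_gen)
        ts->tlb_gen = gen;
    state_unlock(ts);
}
```

The cmpxchg16b alternative would pack ctx_id and tlb_gen into one
16-byte unit and update it in a single atomic operation instead of
taking the lock.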
> Hey Andy,
> I've started seeing the following in -next:
> ------------[ cut here ]------------
> kernel BUG at arch/x86/mm/tlb.c:47!


> Call Trace:
> flush_tlb_func_local arch/x86/mm/tlb.c:239 [inline]
> flush_tlb_mm_range+0x26d/0x370 arch/x86/mm/tlb.c:317
> flush_tlb_page arch/x86/include/asm/tlbflush.h:253 [inline]

I think I see what's going on, and it should be fixed in the PCID
series. I'll split out the fix.