Re: [PATCH v4 10/10] x86/mm: Try to preserve old TLB entries using PCID
From: Andy Lutomirski
Date: Tue Jul 18 2017 - 13:06:45 EST
On Tue, Jul 18, 2017 at 1:53 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> * Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
>> On Wed, Jul 05, 2017 at 09:04:39AM -0700, Andy Lutomirski wrote:
>> > On Wed, Jul 5, 2017 at 5:18 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> > > On Thu, Jun 29, 2017 at 08:53:22AM -0700, Andy Lutomirski wrote:
>> > >> @@ -104,18 +140,20 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>> > >>
>> > >> /* Resume remote flushes and then read tlb_gen. */
>> > >> cpumask_set_cpu(cpu, mm_cpumask(next));
>> > >
>> > > Barriers should have a comment... what is being ordered here against
>> > > what?
>> >
>> > How's this comment?
>> >
>> > /*
>> > * Resume remote flushes and then read tlb_gen. We need to do
>> > * it in this order: any inc_mm_tlb_gen() caller that writes a
>> > * larger tlb_gen than we read here must see our cpu set in
>> > * mm_cpumask() so that it will know to flush us. The barrier
>> > * here synchronizes with inc_mm_tlb_gen().
>> > */
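In the function itself it lands right above the two operations being
ordered; roughly (a sketch based on the hunk above, not necessarily the
exact v5 code):

	/*
	 * Resume remote flushes and then read tlb_gen.  We need to do
	 * it in this order: any inc_mm_tlb_gen() caller that writes a
	 * larger tlb_gen than we read here must see our cpu set in
	 * mm_cpumask() so that it will know to flush us.  The barrier
	 * is implicit: on x86 the atomic set_bit() behind
	 * cpumask_set_cpu() is a full barrier, and it synchronizes
	 * with inc_mm_tlb_gen().
	 */
	cpumask_set_cpu(cpu, mm_cpumask(next));
	next_tlb_gen = atomic64_read(&next->context.tlb_gen);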
>>
>> Slightly confusing, you mean this, right?
>>
>>
>>    CPU 0 (switch_mm_irqs_off)              CPU 1 (flusher)
>>
>>    cpumask_set_cpu(cpu, mm_cpumask());     inc_mm_tlb_gen();
>>
>>    MB                                      MB
>>
>>    next_tlb_gen =                          flush_tlb_others(mm_cpumask());
>>        atomic64_read(&next->context.tlb_gen);
>>
>>
>> which seems to make sense.
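Yes, that's the pairing. For completeness, the flusher side gets its
full barrier from the atomic RMW in inc_mm_tlb_gen(); a minimal sketch,
assuming it stays the thin wrapper it is in this series:

	static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
	{
		/*
		 * atomic64_inc_return() implies a full barrier, so the
		 * bumped tlb_gen is ordered before the subsequent
		 * mm_cpumask() read in flush_tlb_others(): either the
		 * switching CPU reads the new tlb_gen, or we see its
		 * bit in mm_cpumask() and flush it.  Never neither.
		 */
		return atomic64_inc_return(&mm->context.tlb_gen);
	}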
>
> Btw., I'll wait for a v5 iteration before applying this last patch to tip:x86/mm.
I'll send it shortly. I think I'll also add a patch to factor out the
flush calls a bit more to prepare for Mel's upcoming fix.
>
> Thanks,
>
> Ingo