Re: [PATCH] Linux VM workaround for Knights Landing A/D leak

From: Nadav Amit
Date: Tue Jun 14 2016 - 22:44:29 EST


Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:

> On Tue, Jun 14, 2016 at 7:35 PM, Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
>> Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>>
>>> On Tue, Jun 14, 2016 at 2:37 PM, Dave Hansen
>>> <dave.hansen@xxxxxxxxxxxxxxx> wrote:
>>>> On 06/14/2016 01:16 PM, Nadav Amit wrote:
>>>>> Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx> wrote:
>>>>>
>>>>>> On 06/14/2016 09:47 AM, Nadav Amit wrote:
>>>>>>> Lukasz Anaczkowski <lukasz.anaczkowski@xxxxxxxxx> wrote:
>>>>>>>
>>>>>>>>> From: Andi Kleen <ak@xxxxxxxxxxxxxxx>
>>>>>>>>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>>>>>>>>> +{
>>>>>>> Here there should be a call to smp_mb__after_atomic() to synchronize with
>>>>>>> switch_mm. I submitted a similar patch, which is still pending (hint).
>>>>>>>
>>>>>>>>> +	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>>>>>>>>> +		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>>>>>>> +		flush_tlb_others(mm_cpumask(mm), mm, addr,
>>>>>>>>> +				 addr + PAGE_SIZE);
>>>>>>>>> +		mb();
>>>>>>>>> +		set_pte(ptep, __pte(0));
>>>>>>>>> +	}
>>>>>>>>> +}
>>>>>>
>>>>>> Shouldn't that barrier be incorporated in the TLB flush code itself,
>>>>>> rather than in every single caller (as this code does)?
>>>>>>
>>>>>> It is insane to require individual TLB flushers to be concerned with the
>>>>>> barriers.
>>>>>
>>>>> IMHO it is best to use existing flushing interfaces instead of creating
>>>>> new ones.
>>>>
>>>> Yeah, or make these things a _little_ harder to get wrong. That little
>>>> snippet above isn't so crazy that we should be depending on open-coded
>>>> barriers to get it right.
>>>>
>>>> Should we just add a barrier to mm_cpumask() itself? That should stop
>>>> the race. Or maybe we need a new primitive like:
>>>>
>>>> /*
>>>>  * Call this if a full barrier has been executed since the last
>>>>  * pagetable modification operation.
>>>>  */
>>>> static int __other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>> {
>>>> 	/* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
>>>> 	return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
>>>> 		nr_cpu_ids;
>>>> }
>>>>
>>>>
>>>> static int other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>> {
>>>> 	/*
>>>> 	 * Synchronizes with switch_mm. Makes sure that we do not
>>>> 	 * observe a bit having been cleared in mm_cpumask() before
>>>> 	 * the other processor has seen our pagetable update. See
>>>> 	 * switch_mm().
>>>> 	 */
>>>> 	smp_mb__after_atomic();
>>>>
>>>> 	return __other_cpus_need_tlb_flush(mm);
>>>> }
>>>>
>>>> We should be able to deploy other_cpus_need_tlb_flush() in most of the
>>>> cases where we are doing "cpumask_any_but(mm_cpumask(mm),
>>>> smp_processor_id()) < nr_cpu_ids".
>>>
>>> IMO this is a bit nuts. smp_mb__after_atomic() doesn't do anything on
>>> x86. And, even if it did, why should the flush code assume that the
>>> previous store was atomic?
>>>
>>> What's the issue being fixed / worked around here?
>>
>> It still acts as a compiler barrier, which prevents the decision whether
>> a remote TLB shootdown is required from being made before the PTE is set.
>>
>> I agree that PTEs may not be written atomically in certain cases
>> (although I am unaware of such cases, except on full-mm flush).
>
> How about plain set_pte? It's atomic (aligned word-sized write), but
> it's not atomic in the _after_atomic sense.

Can you point me to a place where set_pte is used before a TLB
invalidation/shootdown, excluding this patch and the fullmm case?
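
To be concrete, the interleaving that would worry me with a plain
set_pte() looks roughly like this (just an illustration, not taken
from any actual call site):

	CPU 0 (clearing a PTE)		CPU 1 (switching to mm)
	----------------------		-----------------------
	set_pte(ptep, __pte(0));	cpumask_set_cpu(cpu, mm_cpumask(mm));
					load_cr3(next->pgd);
					/* TLB may cache the old PTE */
	read mm_cpumask(mm);
	/*
	 * x86 allows this load to be satisfied before the earlier
	 * plain store becomes visible, so CPU 1's bit can be missed,
	 * no shootdown IPI is sent, and CPU 1 keeps using a stale
	 * translation.
	 */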

I am not claiming there is no such case, but I am unaware of one.
PTEs are cleared on SMP using xchg, and similarly the dirty bit
is cleared with an atomic operation.
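
For reference, the clearing path I have in mind is roughly the one in
arch/x86/include/asm/pgtable_64.h (a sketch, details elided):

	#ifdef CONFIG_SMP
	static inline pte_t native_ptep_get_and_clear(pte_t *xp)
	{
		/*
		 * xchg carries an implicit LOCK prefix: it is a full
		 * memory barrier, and returning the old PTE means any
		 * A/D bits the hardware set meanwhile are not lost.
		 */
		return native_make_pte(xchg(&xp->pte, 0));
	}
	#else
	/* UP: no cross-CPU ordering needed, a local read-then-clear is fine */
	#define native_ptep_get_and_clear(xp) native_local_ptep_get_and_clear(xp)
	#endif

Since the clear itself is already a full barrier, all that is left to
order is the compiler, which is exactly what smp_mb__after_atomic()
does on x86.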