RE: [PATCH] Linux VM workaround for Knights Landing A/D leak
From: Anaczkowski, Lukasz
Date: Wed Jun 15 2016 - 09:06:27 EST
From: Dave Hansen [mailto:dave.hansen@xxxxxxxxxxxxxxx]
Sent: Tuesday, June 14, 2016 7:20 PM
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
...
>> +extern void fix_pte_leak(struct mm_struct *mm, unsigned long addr,
>> + pte_t *ptep);
> Doesn't hugetlb.h somehow #include pgtable.h? So why double-define
> fix_pte_leak()?
It's the other way round - pgtable.h somehow ends up including hugetlb.h.
I've removed the duplicated fix_pte_leak() declaration.
>> diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
>> index 2ee7811..6fa4079 100644
>> --- a/arch/x86/include/asm/pgtable_64.h
>> +++ b/arch/x86/include/asm/pgtable_64.h
>> @@ -178,6 +178,12 @@ extern void cleanup_highmap(void);
>> extern void init_extra_mapping_uc(unsigned long phys, unsigned long size);
>> extern void init_extra_mapping_wb(unsigned long phys, unsigned long size);
>>
>> +#define ARCH_HAS_NEEDS_SWAP_PTL 1
>> +static inline bool arch_needs_swap_ptl(void)
>> +{
>> + return boot_cpu_has_bug(X86_BUG_PTE_LEAK);
>> +}
>> +
>> #endif /* !__ASSEMBLY__ */
> I think we need a comment on this sucker. I'm not sure we should lean
> solely on the commit message to record why we need this until the end of
> time.
OK.
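Something along these lines, perhaps (the comment wording below is just my
draft, based on the commit message and my reading of the patch, not final):

#define ARCH_HAS_NEEDS_SWAP_PTL 1
/*
 * KNL erratum: a thread that is about to page fault on a clear (P=0)
 * PTE may still set the A/D bits non-atomically, which can corrupt a
 * swap entry written into that slot.  Tell the generic code to keep
 * holding the page table lock so fix_pte_leak() runs excluded.
 */
static inline bool arch_needs_swap_ptl(void)
{
	return boot_cpu_has_bug(X86_BUG_PTE_LEAK);
}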
>> + if (c->x86_model == 87) {
> Please use the macros in here for the model id:
OK.
> http://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/arch/x86/include/asm/intel-family.h
> We also probably want to prefix the pr_info() with something like
> "x86/intel:".
OK.
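For the model check, something like this then (sketch only; assuming the
INTEL_FAM6_XEON_PHI_KNL define from the intel-family.h header you linked,
and the message text is just a placeholder):

#include <asm/intel-family.h>

	/* Knights Landing (family 6, model 0x57) has the A/D PTE leak. */
	if (c->x86_model == INTEL_FAM6_XEON_PHI_KNL) {
		pr_info("x86/intel: Enabling PTE leak workaround\n");
		set_cpu_bug(c, X86_BUG_PTE_LEAK);
	}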
>> +/*
>> + * Workaround for KNL issue:
> Please be specific about what this "KNL issue" *is*.
OK.
>> + * A thread that is going to page fault due to P=0, may still
>> + * non atomically set A or D bits, which could corrupt swap entries.
>> + * Always flush the other CPUs and clear the PTE again to avoid
>> + * this leakage. We are excluded using the pagetable lock.
>> + */
>> +
>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>> +{
>> + if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>> + trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>> + flush_tlb_others(mm_cpumask(mm), mm, addr,
>> + addr + PAGE_SIZE);
>> + mb();
>> + set_pte(ptep, __pte(0));
>> + }
>> +}
>
> I think the comment here is a bit sparse. Can we add some more details,
> like:
>
> Entering here, the current CPU just cleared the PTE. But,
> another thread may have raced and set the A or D bits, or be
> _about_ to set the bits. Shooting their TLB entry down will
> ensure they see the cleared PTE and will not set A or D.
>
> and by the set_pte():
>
> Clear the PTE one more time, in case the other thread set A/D
> before we sent the TLB flush.
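Will do. Here's a rough draft of the function with your comments folded in
(and a lead comment that tries to be more specific about the KNL issue, per
your earlier note):

/*
 * Knights Landing erratum: a thread that is about to page fault on a
 * clear (P=0) PTE may still non-atomically set the Accessed or Dirty
 * bit, which can corrupt a swap entry stored in that PTE slot.
 */
void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
		/*
		 * Entering here, the current CPU just cleared the PTE.
		 * But, another thread may have raced and set the A or D
		 * bits, or be _about_ to set the bits.  Shooting their
		 * TLB entry down will ensure they see the cleared PTE
		 * and will not set A or D.
		 */
		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
		flush_tlb_others(mm_cpumask(mm), mm, addr, addr + PAGE_SIZE);
		mb();
		/*
		 * Clear the PTE one more time, in case the other thread
		 * set A/D before we sent the TLB flush.
		 */
		set_pte(ptep, __pte(0));
	}
}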
Thanks,
Lukasz