Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse

From: Aneesh Kumar K.V
Date: Mon Sep 05 2022 - 04:56:21 EST


Yang Shi <shy828301@xxxxxxxxx> writes:

>
> On Fri, Sep 2, 2022 at 9:00 AM Peter Xu <peterx@xxxxxxxxxx> wrote:
>>
>> On Thu, Sep 01, 2022 at 04:50:45PM -0700, Yang Shi wrote:
>> > On Thu, Sep 1, 2022 at 4:26 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
>> > >
>> > > Hi, Yang,
>> > >
>> > > On Thu, Sep 01, 2022 at 03:27:07PM -0700, Yang Shi wrote:
>> > > > Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
>> > > > introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
>> > > > sufficient to handle concurrent GUP-fast in all cases; it only handles
>> > > > traditional IPI-based GUP-fast correctly.
>> > >
>> > > If a TLB flush (or, IPI broadcast) used to be enough to protect against
>> > > gup-fast, I'm kind of confused why it's not sufficient even with RCU gup.
>> > > Won't it keep working as long as interrupts are disabled (which the
>> > > current fast-gup still does)?
>> >
>> > Actually the wording was copied from David's commit log for his
>> > PageAnonExclusive fix. My understanding is that the IPI broadcast still
>> > works, but it may not be supported by all architectures and is no longer
>> > preferred. So we should avoid depending on the IPI broadcast IIUC.
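
Side note: the interrupt-disable side that the IPI scheme relies on is
lockless_pages_from_mm() in mm/gup.c. Roughly, and paraphrased rather
than the exact code:

	unsigned long flags;
	int nr_pinned = 0;

	/*
	 * On architectures that broadcast TLB flushes with IPIs,
	 * disabling IRQs here holds off the flush IPI, so the page
	 * tables being walked cannot be freed and reused concurrently.
	 * On architectures that flush without IPIs this guarantee does
	 * not exist, hence the RCU-based scheme for freeing page tables.
	 */
	local_irq_save(flags);
	gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
	local_irq_restore(flags);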
>> >
>> > >
>> > > IIUC the issue is you suspect not all archs correctly implemented
>> > > pmdp_collapse_flush(), or am I wrong?
>> >
>> > This is a possible fix, please see below for details.
>> >
>> > >
>> > > > On architectures that send
>> > > > an IPI broadcast on TLB flush, it works as expected. But on architectures
>> > > > that do not use an IPI to broadcast the TLB flush, the below race is
>> > > > possible:
>> > > >
>> > > >                    CPU A                                CPU B
>> > > >  THP collapse                                     fast GUP
>> > > >                                                   gup_pmd_range() <-- see valid pmd
>> > > >                                                   gup_pte_range() <-- work on pte
>> > > >  pmdp_collapse_flush() <-- clear pmd and flush
>> > > >  __collapse_huge_page_isolate()
>> > > >      check page pinned <-- before GUP bump refcount
>> > > >                                                   pin the page
>> > > >                                                   check PTE <-- no change
>> > > >  __collapse_huge_page_copy()
>> > > >      copy data to huge page
>> > > >      ptep_clear()
>> > > >  install huge pmd for the huge page
>> > > >                                                   return the stale page
>> > > >  discard the stale page
>> > > >
>> > > > The race can be fixed by checking whether the PMD has changed after
>> > > > taking the page pin in fast GUP, just as is already done for the PTE. If
>> > > > the PMD has changed, a THP collapse may be running in parallel, so GUP
>> > > > should back off.
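
To make the proposed back-off concrete, here is a minimal sketch of the
idea (names follow mm/gup.c:gup_pte_range(), assuming it is handed both
the original pmd value and a pointer to the pmd entry; this is not the
exact hunk):

	/*
	 * After pinning the page, re-read the pmd and bail out if it
	 * changed under us, e.g. because of a parallel THP collapse.
	 */
	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}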
>> > >
>> > > Could the race also be fixed by implementing pmdp_collapse_flush()
>> > > correctly on the archs that are missing it? Do you know which arch(s)
>> > > are broken?
>> >
>> > Yes, and I suggested that in the first place, but per the suggestion
>> > from John and David, it is not the preferred way. I think that is
>> > because:
>> >
>> > Firstly, using IPIs to serialize against fast GUP is no longer
>> > recommended, since fast GUP already checks the PTE and backs off, so we
>> > should avoid it.
>> > Secondly, if checking the PMD and backing off solves the problem, why
>> > would we still need the IPI broadcast? It doesn't sound performant.
>> >
>> > >
>> > > It's just not clear to me whether this patch is an optimization or a fix,
>> > > and, if it's a fix, whether the IPI broadcast in ppc
>> > > pmdp_collapse_flush() would still be needed.
>> >
>> > It is a fix, and it makes the IPI broadcast unnecessary.
>>
>> How about another patch to remove the ppc impl too? Then it can be a
>> two-patch series.
>
> BTW, I don't think we can remove the ppc implementation, since it is
> different from the generic pmdp_collapse_flush(), particularly for the
> hash part IIUC.
>
> The generic version calls flush_tlb_range() -> hash__flush_tlb_range()
> for hash, but the hash call is actually a no-op. The ppc version calls
> hash__pmdp_collapse_flush() -> flush_tlb_pmd_range(), which does
> something useful.
>
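
For context, the generic version in mm/pgtable-generic.c looks roughly
like this (paraphrased, debug assertions omitted):

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	/* Clear the pmd, then flush the TLB for the huge-page range. */
	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

	/*
	 * On book3s64 hash, flush_tlb_range() ends up in
	 * hash__flush_tlb_range(), which, as noted above, is a no-op;
	 * hence the ppc64 override via hash__pmdp_collapse_flush().
	 */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}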

We should actually rename flush_tlb_pmd_range(): it flushes the hash page
table entries, not the TLB.

I will do the below patch for ppc64 to clarify this better:

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 8b762f282190..fd30fa20c392 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -112,13 +112,12 @@ static inline void hash__flush_tlb_kernel_range(unsigned long start,
 
 struct mmu_gather;
 extern void hash__tlb_flush(struct mmu_gather *tlb);
-void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
 
 #ifdef CONFIG_PPC_64S_HASH_MMU
 /* Private function for use by PCI IO mapping code */
 extern void __flush_hash_table_range(unsigned long start, unsigned long end);
-extern void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long addr);
+extern void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd,
+				       unsigned long addr);
 #else
 static inline void __flush_hash_table_range(unsigned long start, unsigned long end) { }
 #endif
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index ae008b9df0e6..f30131933a01 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -256,7 +256,7 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
 	 * the __collapse_huge_page_copy can result in copying
 	 * the old content.
 	 */
-	flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
+	flush_hash_table_pmd_range(vma->vm_mm, &pmd, address);
 	return pmd;
 }
 
diff --git a/arch/powerpc/mm/book3s64/hash_tlb.c b/arch/powerpc/mm/book3s64/hash_tlb.c
index eb0bccaf221e..a64ea0a7ef96 100644
--- a/arch/powerpc/mm/book3s64/hash_tlb.c
+++ b/arch/powerpc/mm/book3s64/hash_tlb.c
@@ -221,7 +221,7 @@ void __flush_hash_table_range(unsigned long start, unsigned long end)
 	local_irq_restore(flags);
 }
 
-void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
+void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 {
	pte_t *pte;
	pte_t *start_pte;