Re: [PATCH RESEND v3 0/4] mm/hugetlb: fixes for PMD table sharing (incl. using mmu_gather)
From: Laurence Oberman
Date: Tue Dec 23 2025 - 18:23:36 EST
On Tue, 2025-12-23 at 22:40 +0100, David Hildenbrand (Red Hat) wrote:
> One functional fix, one performance regression fix, and two related
> comment fixes.
>
> I cleaned up my prototype I recently shared [1] for the performance
> fix, deferring most of the cleanups I had in the prototype to a later
> point. While doing that I identified the other things.
>
> The goal of this patch set is to be backported to stable trees
> "fairly" easily. At least patches #1 and #4.
>
> Patch #1 fixes hugetlb_pmd_shared() not detecting any sharing
> Patch #2 + #3 are simple comment fixes that patch #4 interacts with.
> Patch #4 is a fix for the reported performance regression due to
> excessive IPI broadcasts during fork()+exit().
>
> The last patch is all about TLB flushes, IPIs and mmu_gather.
> Read: complicated
>
> I added as many comments and as much description as I possibly could,
> and I am hoping for review from Jann.
>
> There are plenty of cleanups in the future to be had + one reasonable
> optimization on x86. But that's all out of scope for this series.
>
> Compile tested on plenty of architectures.
>
> Runtime tested, with a focus on fixing the performance regression
> using the original reproducer [2] on x86.
>
> [1]
> https://lore.kernel.org/all/8cab934d-4a56-44aa-b641-bfd7e23bd673@xxxxxxxxxx/
> [2]
> https://lore.kernel.org/all/8cab934d-4a56-44aa-b641-bfd7e23bd673@xxxxxxxxxx/
>
> --
>
> v2 -> v3:
> * Rebased to 6.19-rc2 and retested on x86
> * Changes on last patch:
>   * Introduce and use tlb_gather_mmu_vma() for properly setting up
>     mmu_gather for hugetlb -- thanks to Harry for pointing me once
>     again at the nasty hugetlb integration in mmu_gather
>   * Move tlb_remove_huge_tlb_entry() after move_huge_pte()
>   * For consistency, always call tlb_gather_mmu_vma() after
>     flush_cache_range()
>   * Don't pass mmu_gather to hugetlb_change_protection(); simply use
>     a local one for now (avoids messing with tlb_start_vma() /
>     tlb_end_vma())
>   * Dropped Lorenzo's RB due to the changes
>
> v1 -> v2:
> * Picked RB's/ACK's, hopefully I didn't miss any
> * Added the initialization of fully_unshared_tables in
>   __tlb_gather_mmu() (Thanks Nadav!)
> * Refined some comments based on Lorenzo's feedback.
>
> Cc: Will Deacon <will@xxxxxxxxxx>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Nick Piggin <npiggin@xxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Arnd Bergmann <arnd@xxxxxxxx>
> Cc: Muchun Song <muchun.song@xxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Jann Horn <jannh@xxxxxxxxxx>
> Cc: Pedro Falcato <pfalcato@xxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxxx>
> Cc: Harry Yoo <harry.yoo@xxxxxxxxxx>
> Cc: "Uschakow, Stanislav" <suschako@xxxxxxxxx>
> Cc: Laurence Oberman <loberman@xxxxxxxxxx>
> Cc: Prakash Sangappa <prakash.sangappa@xxxxxxxxxx>
> Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
>
> David Hildenbrand (Red Hat) (4):
> mm/hugetlb: fix hugetlb_pmd_shared()
> mm/hugetlb: fix two comments related to huge_pmd_unshare()
> mm/rmap: fix two comments related to huge_pmd_unshare()
> mm/hugetlb: fix excessive IPI broadcasts when unsharing PMD tables
> using mmu_gather
>
> include/asm-generic/tlb.h | 77 +++++++++++++++++++++-
> include/linux/hugetlb.h | 17 +++--
> include/linux/mm_types.h | 1 +
> mm/hugetlb.c | 131 +++++++++++++++++++++-----------------
> mm/mmu_gather.c | 33 ++++++++++
> mm/rmap.c | 45 ++++++-------
> 6 files changed, 213 insertions(+), 91 deletions(-)
>
>
> base-commit: b927546677c876e26eba308550207c2ddf812a43
Hello David

For the V3 series, I re-ran the tests and the original reproducer, and
it's clean. I see the same almost 6x improvement for the original
reproducer:

# uname -r
6.19.0-rc2-hugetlbv3+

Un-patched result of reproducer: iteration completed in 3436 ms
V3 patched result of reproducer: iteration completed in 639 ms

I also ran a test to map every hugepage I could access (460GB of them),
then fill and validate, and had no issues.
Tested-by: Laurence Oberman <loberman@xxxxxxxxxx>