[PATCH v1 0/2] mm/hugetlb: fix hugetlb vs. core-mm PT locking

From: David Hildenbrand
Date: Thu Jul 25 2024 - 14:40:32 EST


Working on another generic page table walker that tries to avoid
special-casing hugetlb, I found a page table locking issue with hugetlb
folios that are not mapped using a single PMD/PUD.

For some hugetlb folio sizes, GUP will take different page table locks
when walking the page tables than hugetlb when modifying the page tables.

I did not actually try reproducing the issue, but looking at
follow_pmd_mask(), where we might be rereading a PMD value multiple
times, it's rather clear that concurrent modifications would be
unpleasant.

In follow_page_pte() we might be better off in that regard -- ptep_get()
does a READ_ONCE() -- but who knows what else could happen concurrently
in some weird corner cases (e.g., hugetlb folio getting unmapped and
freed).

Did some basic sanity testing with various hugetlb sizes on x86-64 and
arm64. Maybe I'll find some time to actually write a simple reproducer in
the coming weeks, so this wouldn't have to stay all-theoretical.

Only v6.10 is affected, so patch #1 can simply be backported as a prereq
patch along with the actual fix.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>

David Hildenbrand (2):
mm: let pte_lockptr() consume a pte_t pointer
mm/hugetlb: fix hugetlb vs. core-mm PT locking

include/linux/hugetlb.h | 25 ++++++++++++++++++++++---
include/linux/mm.h | 7 ++++---
mm/khugepaged.c | 21 +++++++++++++++------
mm/pgtable-generic.c | 4 ++--
4 files changed, 43 insertions(+), 14 deletions(-)


base-commit: cca1345bd26a67fc61a92ff0c6d81766c259e522
--
2.45.2