Re: [PATCH v3] mm/hugetlb: fix hugetlb vs. core-mm PT locking
From: Michael Ellerman
Date: Wed Jul 31 2024 - 22:23:03 EST
David Hildenbrand <david@xxxxxxxxxx> writes:
> On 31.07.24 16:54, Peter Xu wrote:
...
>>
>> The other nitpick is, I didn't yet find any arch that uses non-zero-order
>> pages for pte pgtables. I would give dropping the mask a shot and see what
>> explodes (which I don't expect, per my read..), but yeah, I understand we
>> already saw some due to other things, so I think it's fine if, in this
>> hugetlb path (that we're removing), we do a bit more math, if you think
>> that's easier for you.
>
> I threw
> BUILD_BUG_ON(PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE);
> into pte_lockptr() and did a bunch of cross-compiles.
>
> And for some reason it blows up for powernv (powernv_defconfig) and
> pseries (pseries_defconfig).
>
>
> In function 'pte_lockptr',
> inlined from 'pte_offset_map_nolock' at mm/pgtable-generic.c:316:11:
> ././include/linux/compiler_types.h:510:45: error: call to '__compiletime_assert_291' declared with attribute error: BUILD_BUG_ON failed: PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE
> 510 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
> | ^
> ././include/linux/compiler_types.h:491:25: note: in definition of macro '__compiletime_assert'
> 491 | prefix ## suffix(); \
> | ^~~~~~
> ././include/linux/compiler_types.h:510:9: note: in expansion of macro '_compiletime_assert'
> 510 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
> | ^~~~~~~~~~~~~~~~~~~
> ./include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
> 39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
> | ^~~~~~~~~~~~~~~~~~
> ./include/linux/build_bug.h:50:9: note: in expansion of macro 'BUILD_BUG_ON_MSG'
> 50 | BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
> | ^~~~~~~~~~~~~~~~
> ./include/linux/mm.h:2926:9: note: in expansion of macro 'BUILD_BUG_ON'
> 2926 | BUILD_BUG_ON(PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE);
> | ^~~~~~~~~~~~
...
>
> pte_alloc_one() ends up calling pte_fragment_alloc(mm, 0). But there we
> always end up calling pagetable_alloc(..., 0).
>
> And fragments are supposed to be <= a single page.
>
> Now I'm confused about what's wrong here ... am I missing something obvious?
>
> CCing some powerpc folks. Is this some pte_t oddity?
It will be because PTRS_PER_PTE is not a compile-time constant :(
$ git grep "define PTRS_PER_PTE" arch/powerpc/include/asm/book3s/64
arch/powerpc/include/asm/book3s/64/pgtable.h:#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
$ git grep "define PTE_INDEX_SIZE" arch/powerpc/include/asm/book3s/64
arch/powerpc/include/asm/book3s/64/pgtable.h:#define PTE_INDEX_SIZE __pte_index_size
$ git grep __pte_index_size arch/powerpc/mm/pgtable_64.c
arch/powerpc/mm/pgtable_64.c:unsigned long __pte_index_size;
That's because the pseries/powernv (book3s64) kernel supports either the
HPT or the Radix MMU, chosen at runtime, and the two have different page
table geometries.
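Roughly (a sketch from memory, not a verbatim copy of the sources), the
geometry lives in variables that early MMU init fills in depending on
which MMU we boot with:

  /* arch/powerpc/mm/pgtable_64.c */
  unsigned long __pte_index_size;

  /* arch/powerpc/mm/book3s64/hash_utils.c */
  void __init hash__early_init_mmu(void)
  {
          ...
          __pte_index_size = H_PTE_INDEX_SIZE;
          ...
  }

  /* arch/powerpc/mm/book3s64/radix_pgtable.c */
  void __init radix__early_init_mmu(void)
  {
          ...
          __pte_index_size = RADIX_PTE_INDEX_SIZE;
          ...
  }

So by the time pte_lockptr() is built, PTRS_PER_PTE is just
(1 << __pte_index_size), which the compiler can't evaluate in a
BUILD_BUG_ON().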
If you change it to use MAX_PTRS_PER_PTE it should work (that's defined
for all arches).
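If I remember correctly, the generic fallback in include/linux/pgtable.h
just aliases the compile-time PTRS_PER_PTE, and book3s64 overrides it
with the larger of the two geometries, so it stays a constant expression
either way. Roughly (paraphrased, exact macro names from memory):

  /* include/linux/pgtable.h (generic fallback) */
  #ifndef MAX_PTRS_PER_PTE
  #define MAX_PTRS_PER_PTE PTRS_PER_PTE
  #endif

  /* arch/powerpc/include/asm/book3s/64/pgtable.h */
  #define MAX_PTRS_PER_PTE \
          ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? H_PTRS_PER_PTE : R_PTRS_PER_PTE)

H_PTRS_PER_PTE and R_PTRS_PER_PTE are the fixed hash and radix values,
so the BUILD_BUG_ON() in the hunk below compiles and still checks the
worst case.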
cheers
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 381750f41767..1fd9c296c0b6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2924,6 +2924,8 @@ static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
/* PTE page tables don't currently exceed a single page. */
+ BUILD_BUG_ON(MAX_PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE);
+
return ptlock_ptr(virt_to_ptdesc(pte));
}