Re: [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
From: David Hildenbrand (Arm)
Date: Mon Mar 09 2026 - 12:54:20 EST
On 3/7/26 15:35, Jianhui Zhou wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units (as calculated by
> vma_hugecache_offset()). This mismatch means that different addresses
> within the same huge page can produce different hash values, leading to
> the use of different mutexes for the same huge page. This can cause
> races between faulting threads, which can corrupt the reservation map
> and trigger the BUG_ON in resv_map_release().
>
> Fix this by replacing linear_page_index() with vma_hugecache_offset()
> and applying huge_page_mask() to align the address properly. To make
> vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
> include/linux/hugetlb.h as a static inline function.
>
> Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
> Reported-by: syzbot+f525fd79634858f478e7@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@xxxxxxxxx>
> ---
> v2:
> - Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
> (Peter Xu, SeongJae Park)
>
> include/linux/hugetlb.h | 11 +++++++++++
> mm/hugetlb.c | 11 -----------
> mm/userfaultfd.c | 5 ++++-
> 3 files changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 65910437be1c..f003afe0cc91 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
> return h->order + PAGE_SHIFT;
> }
>
> +/*
> + * Convert the address within this vma to the page offset within
> + * the mapping, in huge page units.
> + */
> +static inline pgoff_t vma_hugecache_offset(struct hstate *h,
> + struct vm_area_struct *vma, unsigned long address)
> +{
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
It's hard to put my disgust about the terminology "hugecache" into
words. Not your fault, but we should do better :)
If you're starting to use that from MM code other than hugetlb.c,
please find a better name.
Further, I wonder whether we can avoid passing in "struct hstate *h" and
simply call hstate_vma() internally.
Something like the following, to mimic linear_page_index()?
/**
 * hugetlb_linear_page_index - linear_page_index() but in hugetlb page
 * size granularity
 * @vma: the vma the address belongs to
 * @address: the address within @vma
 *
 * Returns: the page offset of @address within the mapping, in huge
 * page units.
 */
static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
		unsigned long address)
{
	struct hstate *h = hstate_vma(vma);

	return ((address - vma->vm_start) >> huge_page_shift(h)) +
		(vma->vm_pgoff >> huge_page_order(h));
}
--
Cheers,
David