Re: [PATCH v2 2/3] mm/huge_memory: Prevent huge zeropage refcount corruption in PMD move
From: David Hildenbrand (Arm)
Date: Thu Feb 26 2026 - 10:47:48 EST
On 2/26/26 15:16, Chris Down wrote:
> After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the
> huge zero folio special"), moved huge zero PMDs must remain special so
> vm_normal_page_pmd() continues to treat them as special mappings.
>
> move_pages_huge_pmd() currently reconstructs the destination PMD in the
> huge zero page branch, which drops PMD state such as pmd_special() on
> architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result,
> vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page
> and corrupt its refcount.
>
> Instead of reconstructing the PMD from the folio, derive the destination
> entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD
> metadata the same way move_huge_pmd() does for moved entries by marking
> it soft-dirty and clearing uffd-wp.
>
> Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Chris Down <chris@xxxxxxxxxxxxxx>
> ---
> mm/huge_memory.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fed57951a7cd..8166b5e871ad 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2794,7 +2794,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> _dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> } else {
> src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> - _dst_pmd = folio_mk_pmd(page_folio(src_page), dst_vma->vm_page_prot);
> + _dst_pmd = move_soft_dirty_pmd(src_pmdval);
> + _dst_pmd = clear_uffd_wp_pmd(_dst_pmd);
Please squash this patch directly into #1.
It doesn't make sense to leave something partially fixed in #1. It's
been completely broken from the start. folio_mk_pmd() should never have
been used.
Apart from that, the end result LGTM, thanks!
--
Cheers,
David