Re: [PATCH mm-unstable v1 1/5] mm: consolidate anonymous folio PTE mapping into helpers
From: Nico Pache
Date: Thu Feb 12 2026 - 14:45:43 EST
On Thu, Feb 12, 2026 at 9:09 AM Zi Yan <ziy@xxxxxxxxxx> wrote:
>
> On 11 Feb 2026, at 21:18, Nico Pache wrote:
>
> > The anonymous page fault handler in do_anonymous_page() open-codes the
> > sequence to map a newly allocated anonymous folio at the PTE level:
> > - construct the PTE entry
> > - add rmap
> > - add to LRU
> > - set the PTEs
> > - update the MMU cache.
> >
> > Introduce two helpers to consolidate this duplicated logic, mirroring the
> > existing map_anon_folio_pmd_nopf() pattern for PMD-level mappings:
> >
> > map_anon_folio_pte_nopf(): constructs the PTE entry, takes folio
> > references, adds anon rmap and LRU. This function also handles the
> > uffd_wp marker that can be set in the page fault path.
> >
> > map_anon_folio_pte_pf(): extends the nopf variant to handle MM_ANONPAGES
> > counter updates, and mTHP fault allocation statistics for the page fault
> > path.
> >
> > The zero-page read path in do_anonymous_page() is also untangled from the
> > shared setpte label, since it does not allocate a folio and should not
> > share the same mapping sequence as the write path. Pass nr_pages = 1
> > explicitly rather than relying on the variable, making it clearer that
> > we are operating on the zero page only.
> >
> > This refactoring will also help reduce code duplication between mm/memory.c
> > and mm/khugepaged.c, and provides a clean API for PTE-level anonymous folio
> > mapping that can be reused by future callers.
> >
> > Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
> > ---
> > include/linux/mm.h | 4 ++++
> > mm/memory.c | 56 ++++++++++++++++++++++++++++++----------------
> > 2 files changed, 41 insertions(+), 19 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index f8a8fd47399c..c3aa1f51e020 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -4916,4 +4916,8 @@ static inline bool snapshot_page_is_faithful(const struct page_snapshot *ps)
> >
> > void snapshot_page(struct page_snapshot *ps, const struct page *page);
> >
> > +void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
> > + struct vm_area_struct *vma, unsigned long addr,
> > + bool uffd_wp);
> > +
> > #endif /* _LINUX_MM_H */
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 8c19af97f0a0..61c2277c9d9f 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5211,6 +5211,35 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> > return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
> > }
> >
> > +
> > +void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
> > + struct vm_area_struct *vma, unsigned long addr,
> > + bool uffd_wp)
> > +{
> > + pte_t entry = folio_mk_pte(folio, vma->vm_page_prot);
> > + unsigned int nr_pages = folio_nr_pages(folio);
> > +
> > + entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> > + if (uffd_wp)
> > + entry = pte_mkuffd_wp(entry);
> > +
> > + folio_ref_add(folio, nr_pages - 1);
> > + folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
> > + folio_add_lru_vma(folio, vma);
> > + set_ptes(vma->vm_mm, addr, pte, entry, nr_pages);
> > + update_mmu_cache_range(NULL, vma, addr, pte, nr_pages);
>
> Copy the comment
> /* No need to invalidate - it was non-present before */
> above it please.
Good call, thank you!
>
> > +}
> > +
> > +static void map_anon_folio_pte_pf(struct folio *folio, pte_t *pte,
> > + struct vm_area_struct *vma, unsigned long addr,
> > + unsigned int nr_pages, bool uffd_wp)
> > +{
> > + map_anon_folio_pte_nopf(folio, pte, vma, addr, uffd_wp);
> > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > + count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
> > +}
> > +
> > +
> > /*
> > * We enter with non-exclusive mmap_lock (to exclude vma changes,
> > * but allow concurrent faults), and pte mapped but not yet locked.
> > @@ -5257,7 +5286,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> > pte_unmap_unlock(vmf->pte, vmf->ptl);
> > return handle_userfault(vmf, VM_UFFD_MISSING);
> > }
> > - goto setpte;
> > + if (vmf_orig_pte_uffd_wp(vmf))
> > + entry = pte_mkuffd_wp(entry);
> > + set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
>
> entry is only used in this if statement, you can move its declaration inside.
Ack!
>
> > +
> > + /* No need to invalidate - it was non-present before */
> > + update_mmu_cache_range(vmf, vma, addr, vmf->pte, /*nr_pages=*/ 1);
> > + goto unlock;
> > }
> >
> > /* Allocate our own private page. */
> > @@ -5281,11 +5316,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> > */
> > __folio_mark_uptodate(folio);
> >
> > - entry = folio_mk_pte(folio, vma->vm_page_prot);
> > - entry = pte_sw_mkyoung(entry);
>
> It is removed, can you explain why?
Thanks for catching that (as others have too); I will add it back and
rerun my testing to make sure everything is still ok. As Joshua
pointed out, it may only affect MIPS, hence no issues showed up in my
testing.
>
> > - if (vma->vm_flags & VM_WRITE)
> > - entry = pte_mkwrite(pte_mkdirty(entry), vma);
>
> OK, this becomes maybe_mkwrite(pte_mkdirty(entry), vma).
Yes, upon further investigation this does slightly change the behavior:
pte_mkdirty() is now called unconditionally, no longer guarded by the
VM_WRITE check. I noticed other callers in the kernel doing this too.
Is it ok to leave pte_mkdirty() unconditional, or should I go back to
using pte_mkwrite() with the conditional guarding both mkwrite and
mkdirty?
>
> > -
>
> The above code is moved into map_anon_folio_pte_nopf(), thus executed
> later than before the change. folio, vma->vm_flags, and vma->vm_page_prot
> are not changed between, so there should be no functional change.
> But it is better to explain it in the commit message to make review easier.
Will do! Thank you for confirming :) I am pretty sure we can make this
move without any functional change.
>
> > vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
> > if (!vmf->pte)
> > goto release;
> > @@ -5307,19 +5337,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> > folio_put(folio);
> > return handle_userfault(vmf, VM_UFFD_MISSING);
> > }
> > -
> > - folio_ref_add(folio, nr_pages - 1);
>
> > - add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > - count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
>
> These counter updates are moved after folio_add_new_anon_rmap(),
> mirroring map_anon_folio_pmd_pf()’s order. Looks good to me.
>
> > - folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
> > - folio_add_lru_vma(folio, vma);
> > -setpte:
>
> > - if (vmf_orig_pte_uffd_wp(vmf))
> > - entry = pte_mkuffd_wp(entry);
>
> This is moved above folio_ref_add() in map_anon_folio_pte_nopf(), but
> no functional change is expected.
>
> > - set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
> > -
> > - /* No need to invalidate - it was non-present before */
> > - update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
> > + map_anon_folio_pte_pf(folio, vmf->pte, vma, addr, nr_pages, vmf_orig_pte_uffd_wp(vmf));
> > unlock:
> > if (vmf->pte)
> > pte_unmap_unlock(vmf->pte, vmf->ptl);
> > --
> > 2.53.0
>
> 3 things:
> 1. Copy the comment for update_mmu_cache_range() in map_anon_folio_pte_nopf().
> 2. Make pte_t entry local in zero-page handling.
> 3. Explain why entry = pte_sw_mkyoung(entry) is removed.
>
> Thanks.
Thanks for the review :) I'll fix the issues stated above!
-- Nico
>
>
> Best Regards,
> Yan, Zi
>