Re: [REGRESSION] mm/mprotect: 2x+ slowdown for >=400KiB regions since PTE batching (cac1db8c3aad)
From: David Hildenbrand (Arm)
Date: Thu Feb 19 2026 - 10:32:07 EST
On 2/19/26 16:00, Pedro Falcato wrote:
On Thu, Feb 19, 2026 at 02:02:42PM +0100, David Hildenbrand (Arm) wrote:
On 2/19/26 13:15, Pedro Falcato wrote:
I don't know, perhaps there isn't a will-it-scale test for this. That's
alright. Even the standard will-it-scale and stress-ng tests people use
to detect regressions usually have glaring problems and are insanely
microbenchey.
My theory is that most heavy (high frequency where it would really hit performance)
mprotect users (like JITs) perform mprotect on very small ranges (e.g., single page),
where all the other overhead (syscall, TLB flush) dominates.
That's why I was wondering which use cases exist that behave similarly to the reproducer.
Sure, but pte-mapped 2M folios are almost a worst case (why not a PMD at that
point...)
Well, 1M and all the way down will similarly benefit. 2M is just always the extreme case.
I suspect it's not that huge of a deal. Worst case you can always provide a
software PTE_CONT bit that would e.g. be set when mapping a large folio. Or
perhaps "if this pte has a PFN, and the next pte has PFN + 1, then we're
probably in a large folio, thus do the proper batching stuff". I think that
could satisfy everyone. There are heuristics we can use, and perhaps
pte_batch_hint() does not need to be that simple and useless in the !arm64
case then. I'll try to look into a cromulent solution for everyone.
Software bits are generally -ENOSPC, but maybe we are lucky on some architectures.
We'd run into similar issues as aarch64 when shattering contiguity etc., so
there is quite some complexity to it that might not be worth it.
(shower thought: do we always get wins when batching large folios, or do these
need to be of a significant order to get wins?)
For mprotect(), I don't know. For fork() and unmap() batching there was always a
win even with order-2 folios. (I never measured order-1, because it doesn't apply to
anonymous memory)
I assume for mprotect() it depends on whether we really need the folio, or
whether it's just not required, like for mremap().
But personally I would err on the side of small folios, like we did for mremap()
a few months back.
The following (completely untested) might make most people happy by looking up
the folio only if (a) it is required or (b) the architecture indicates that there is a large folio.
I assume for some large folio use cases it might perform worse than before. But for
the write-upgrade case with large anon folios the performance improvement should remain.
Not sure if some regression would remain for which we'd have to special-case the implementation
to take a separate path for nr_ptes == 1.
Maybe you had something similar already:
diff --git a/mm/mprotect.c b/mm/mprotect.c
index c0571445bef7..0b3856ad728e 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -211,6 +211,25 @@ static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte, ptent, nr_ptes, tlb);
}
+static bool mprotect_wants_folio_for_pte(unsigned long cp_flags, pte_t *ptep,
+ pte_t pte, unsigned long max_nr_ptes)
+{
+ /* NUMA hinting needs to decide whether working on the folio is ok. */
+ if (cp_flags & MM_CP_PROT_NUMA)
+ return true;
+
+ /* We want the folio for possible write-upgrade. */
+ if (!pte_write(pte) && (cp_flags & MM_CP_TRY_CHANGE_WRITABLE))
+ return true;
+
+ /* There is nothing to batch. */
+ if (max_nr_ptes == 1)
+ return false;
+
+ /* For guaranteed large folios it's usually a win. */
+ return pte_batch_hint(ptep, pte) > 1;
+}
+
static long change_pte_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
unsigned long end, pgprot_t newprot, unsigned long cp_flags)
@@ -241,16 +260,18 @@ static long change_pte_range(struct mmu_gather *tlb,
const fpb_t flags = FPB_RESPECT_SOFT_DIRTY | FPB_RESPECT_WRITE;
int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
struct folio *folio = NULL;
- struct page *page;
+ struct page *page = NULL;
pte_t ptent;
/* Already in the desired state. */
if (prot_numa && pte_protnone(oldpte))
continue;
- page = vm_normal_page(vma, addr, oldpte);
- if (page)
- folio = page_folio(page);
+ if (mprotect_wants_folio_for_pte(cp_flags, pte, oldpte, max_nr_ptes)) {
+ page = vm_normal_page(vma, addr, oldpte);
+ if (page)
+ folio = page_folio(page);
+ }
/*
* Avoid trapping faults against the zero or KSM
Yes, this is a better version than what I had, I'll take this hunk if you don't mind :)
Not at all, thanks for working on this.
Note that it still doesn't handle large folios on !contpte architectures, which
is partly the issue.
It should when we really need the folio (write-upgrade, NUMA faults). So I guess the benchmark with THP will still show the benefit (as it does the write upgrade).
I suspect some sort of PTE lookahead might work well in
practice, aside from the issue where e.g. two order-0 folios that are
contiguous in memory are separately mapped.
Though perhaps inlining vm_normal_folio() might also be interesting and side-step
most of the issue. I'll play around with that.
I'd assume that it could also help fork/munmap() etc. For common architectures with vmemmap, vm_normal_page() is extremely short code.
--
Cheers,
David