Re: [REGRESSION] mm/mprotect: 2x+ slowdown for >=400KiB regions since PTE batching (cac1db8c3aad)

From: Dev Jain

Date: Thu Feb 19 2026 - 23:18:03 EST



On 19/02/26 8:30 pm, Pedro Falcato wrote:
> On Thu, Feb 19, 2026 at 02:02:42PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/19/26 13:15, Pedro Falcato wrote:
>>> On Wed, Feb 18, 2026 at 01:24:28PM +0100, David Hildenbrand (Arm) wrote:
>>>> On 2/18/26 12:58, Pedro Falcato wrote:
>>>>> I don't understand what you're looking for. an mprotect-based workload? those
>>>>> obviously don't really exist, apart from something like a JIT engine cranking
>>>>> out a lot of mprotect() calls in an aggressive fashion. Or perhaps some of that
>>>>> usage of mprotect that our DB friends like to use sometimes (discussed in
>>>>> $OTHER_CONTEXTS), though those are generally hugepages.
>>>>>
>>>> Anything besides a homemade micro-benchmark that highlights why we should
>>>> care about this exact fast and repeated sequence of events.
>>>>
>>>> I'm surprised that such a "large regression" does not show up in any other
>>>> non-home-made benchmark that people/bots are running. That's really what I
>>>> am questioning.
>>> I don't know, perhaps there isn't a will-it-scale test for this. That's
>>> alright. Even the standard will-it-scale and stress-ng tests people use
>>> to detect regressions usually have glaring problems and are insanely
>>> microbenchey.
>> My theory is that most heavy (high frequency where it would really hit performance)
>> mprotect users (like JITs) perform mprotect on very small ranges (e.g., single page),
>> where all the other overhead (syscall, TLB flush) dominates.
>>
>> That's why I was wondering which use cases exist that behave similarly to the reproducer.
>>
>>>> Having that said, I'm all for optimizing it if there is a real problem
>>>> there.
>>>>
>>>>> I don't see how this can justify large performance regressions in a system
>>>>> call, for something every-architecture-not-named-arm64 does not have.
>>>> Take a look at the reported performance improvements on AMD with large
>>>> folios.
>>> Sure, but pte-mapped 2M folios is almost a worst-case (why not a PMD at that
>>> point...)
>> Well, 1M and all the way down will similarly benefit. 2M is just always the extreme case.
>>
>>>> The issue really is that small folios don't perform well, on any
>>>> architecture. But to detect large vs. small folios we need the ... folio.
>>>>
>>>> So once we optimize for small folios (== don't try to detect large folios)
>>>> we'll degrade large folios.
>>> I suspect it's not that huge of a deal. Worst case you can always provide a
>>> software PTE_CONT bit that would e.g. be set when mapping a large folio. Or
>>> perhaps "if this pte has a PFN, and the next pte has PFN + 1, then we're
>>> probably in a large folio, thus do the proper batching stuff". I think that
>>> could satisfy everyone. There are heuristics we can use, and perhaps
>>> pte_batch_hint() does not need to be that simple and useless in the !arm64
>>> case then. I'll try to look into a cromulent solution for everyone.
>> Software bits are generally -ENOSPC, but maybe we are lucky on some architectures.
>>
>> We'd run into similar issues like aarch64 when shattering contiguity etc, so
>> there is quite some complexity to it that might not be worth it.
>>
>>> (shower thought: do we always get wins when batching large folios, or do these
>>> need to be of a significant order to get wins?)
>> For mprotect(), I don't know. For fork() and unmap() batching there was always a
>> win even with order-2 folios. (never measured order-1, because they don't apply to
>> anonymous memory)
>>
>> I assume for mprotect() it depends whether we really needed the folio before, or
>> whether it's just not required like for mremap().
>>
>>> But personally I would err on the side of small folios, like we did for mremap()
>>> a few months back.
>> The following (completely untested) might make most people happy by looking up
>> the folio only if (a) required or (b) if the architecture indicates that there is a large folio.
>>
>> I assume for some large folio use cases it might perform worse than before. But for
>> the write-upgrade case with large anon folios the performance improvement should remain.
>>
>> Not sure if some regression would remain for which we'd have to special-case the implementation
>> to take a separate path for nr_ptes == 1.
>>
>> Maybe you had something similar already:
>>
>>
>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>> index c0571445bef7..0b3856ad728e 100644
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -211,6 +211,25 @@ static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
>>  		commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte, ptent, nr_ptes, tlb);
>>  }
>>
>> +static bool mprotect_wants_folio_for_pte(unsigned long cp_flags, pte_t *ptep,
>> +		pte_t pte, unsigned long max_nr_ptes)
>> +{
>> +	/* NUMA hinting needs to decide whether working on the folio is OK. */
>> +	if (cp_flags & MM_CP_PROT_NUMA)
>> +		return true;
>> +
>> +	/* We want the folio for a possible write-upgrade. */
>> +	if (!pte_write(pte) && (cp_flags & MM_CP_TRY_CHANGE_WRITABLE))
>> +		return true;
>> +
>> +	/* There is nothing to batch. */
>> +	if (max_nr_ptes == 1)
>> +		return false;
>> +
>> +	/* For guaranteed large folios it's usually a win. */
>> +	return pte_batch_hint(ptep, pte) > 1;
>> +}
>> +
>>  static long change_pte_range(struct mmu_gather *tlb,
>>  		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
>>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>> @@ -241,16 +260,18 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  			const fpb_t flags = FPB_RESPECT_SOFT_DIRTY | FPB_RESPECT_WRITE;
>>  			int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>>  			struct folio *folio = NULL;
>> -			struct page *page;
>> +			struct page *page = NULL;
>>  			pte_t ptent;
>>
>>  			/* Already in the desired state. */
>>  			if (prot_numa && pte_protnone(oldpte))
>>  				continue;
>>
>> -			page = vm_normal_page(vma, addr, oldpte);
>> -			if (page)
>> -				folio = page_folio(page);
>> +			if (mprotect_wants_folio_for_pte(cp_flags, pte, oldpte, max_nr_ptes)) {
>> +				page = vm_normal_page(vma, addr, oldpte);
>> +				if (page)
>> +					folio = page_folio(page);
>> +			}
>>
>>  			/*
>>  			 * Avoid trapping faults against the zero or KSM
>>
> Yes, this is a better version than what I had, I'll take this hunk if you don't mind :)
> Note that it still doesn't handle large folios on !contpte architectures, which
> is partly the issue. I suspect some sort of PTE lookahead might work well in
> practice, aside from the issues where e.g. two order-0 folios that are
> contiguous in memory are separately mapped.
>
> Though perhaps inlining vm_normal_folio() might also be interesting and side-step
> most of the issue. I'll play around with that.

Indeed, this is one option.

You can also experiment with
https://lore.kernel.org/all/20250506050056.59250-3-dev.jain@xxxxxxx/
which approximates the presence of a large folio when the PFNs are contiguous.
