Re: [PATCH v2 2/2] mm/mprotect: special-case small folios when applying write permissions
From: David Hildenbrand (Arm)
Date: Tue Mar 24 2026 - 16:21:04 EST
On 3/24/26 16:43, Pedro Falcato wrote:
> The common order-0 case is important enough to deserve its own branch, which
> avoids the hairy, large-loop logic that the CPU does not seem to handle
> particularly well.
>
> While at it, encourage the compiler to inline batch PTE logic and resolve
> constant branches by adding __always_inline strategically.
>
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@xxxxxxxxxx>
> Signed-off-by: Pedro Falcato <pfalcato@xxxxxxx>
> ---
> mm/mprotect.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 2eaf862e5734..2fda26107066 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -103,7 +103,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> return can_change_shared_pte_writable(vma, pte);
> }
>
> -static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> +static __always_inline int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> pte_t pte, int max_nr_ptes, fpb_t flags)
> {
> /* No underlying folio, so cannot batch */
> @@ -117,9 +117,9 @@ static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> }
>
> /* Set nr_ptes number of ptes, starting from idx */
> -static void prot_commit_flush_ptes(struct vm_area_struct *vma, unsigned long addr,
> - pte_t *ptep, pte_t oldpte, pte_t ptent, int nr_ptes,
> - int idx, bool set_write, struct mmu_gather *tlb)
> +static __always_inline void prot_commit_flush_ptes(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *ptep, pte_t oldpte, pte_t ptent,
> + int nr_ptes, int idx, bool set_write, struct mmu_gather *tlb)
> {
> /*
> * Advance the position in the batch by idx; note that if idx > 0,
> @@ -169,7 +169,7 @@ static int page_anon_exclusive_sub_batch(int start_idx, int max_len,
> * pte of the batch. Therefore, we must individually check all pages and
> * retrieve sub-batches.
> */
> -static void commit_anon_folio_batch(struct vm_area_struct *vma,
> +static __always_inline void commit_anon_folio_batch(struct vm_area_struct *vma,
> struct folio *folio, struct page *first_page, unsigned long addr, pte_t *ptep,
> pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> {
> @@ -177,6 +177,13 @@ static void commit_anon_folio_batch(struct vm_area_struct *vma,
> int sub_batch_idx = 0;
> int len;
>
> + /* Optimize for the common order-0 case. */
> + if (likely(nr_ptes == 1)) {
> + prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, 1,
> + 0, PageAnonExclusive(first_page), tlb);
To optimize that one, inlining prot_commit_flush_ptes() would be
sufficient. Does inlining the other two really help? I don't think we
can optimize out the loops etc. for them.
I would have thought that specializing on nr_ptes == 1 at an even higher
level, where we call
set_write_prot_commit_flush_ptes()/prot_commit_flush_ptes(), would allow
optimizing out the loops entirely for the nr_ptes == 1 case?
--
Cheers,
David