Re: [RFC PATCH 1/8] arm64/hugetlb: Extend batching of multiple CONT_PTE in a single PTE setup
From: Dev Jain
Date: Wed Apr 08 2026 - 06:34:59 EST
On 08/04/26 8:21 am, Barry Song (Xiaomi) wrote:
> For sizes aligned to CONT_PTE_SIZE and smaller than PMD_SIZE,
> we can batch CONT_PTE settings instead of handling them individually.
>
> Signed-off-by: Barry Song (Xiaomi) <baohua@xxxxxxxxxx>
> ---
> arch/arm64/mm/hugetlbpage.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index a42c05cf5640..bf31c11ebd3b 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -110,6 +110,12 @@ static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
> contig_ptes = CONT_PTES;
> break;
> default:
> + if (size < CONT_PMD_SIZE && size > 0 &&
> + IS_ALIGNED(size, CONT_PTE_SIZE)) {
Nit: Having the lower bound check before the upper bound reads more
naturally, so this should be size > 0 && size < CONT_PMD_SIZE (i.e.
written the other way around).
Also, the IS_ALIGNED() check should stay after the size bound checks.
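That is, something like the following (untested, reusing the macros
already in the patch):

	if (size > 0 && size < CONT_PMD_SIZE &&
	    IS_ALIGNED(size, CONT_PTE_SIZE)) {
		contig_ptes = size >> PAGE_SHIFT;
		*pgsize = PAGE_SIZE;
		break;
	}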
> + contig_ptes = size >> PAGE_SHIFT;
> + *pgsize = PAGE_SIZE;
> + break;
> + }
> WARN_ON(!__hugetlb_valid_size(size));
> }
>
> @@ -359,6 +365,10 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
> case CONT_PTE_SIZE:
> return pte_mkcont(entry);
> default:
> + if (pagesize < CONT_PMD_SIZE && pagesize > 0 &&
> + IS_ALIGNED(pagesize, CONT_PTE_SIZE))
> + return pte_mkcont(entry);
> +
> break;
> }
> pr_warn("%s: unrecognized huge page size 0x%lx\n",