Re: [PATCH 0/4] arm64/mm: contpte-sized exec folios for 16K and 64K pages

From: Usama Arif

Date: Fri Mar 13 2026 - 16:56:40 EST


On Fri, 13 Mar 2026 16:33:42 +0000 Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

> On 10/03/2026 14:51, Usama Arif wrote:
> > On arm64, the contpte hardware feature coalesces multiple contiguous PTEs
> > into a single iTLB entry, reducing iTLB pressure for large executable
> > mappings.
> >
> > exec_folio_order() was introduced [1] to request readahead at an
> > arch-preferred folio order for executable memory, enabling contpte
> > mapping on the fault path.
> >
> > However, several things prevent this from working optimally on 16K and
> > 64K page configurations:
> >
> > 1. exec_folio_order() returns ilog2(SZ_64K >> PAGE_SHIFT), which only
> > produces the optimal contpte order for 4K pages. For 16K pages it
> > returns order 2 (64K) instead of order 7 (2M), and for 64K pages it
> > returns order 0 (64K) instead of order 5 (2M).
>
> This was deliberate, although perhaps a bit conservative. I was concerned about
> the possibility of read amplification: pointlessly reading in a load of memory
> that never actually gets used. And that is independent of page size.
>
> 2M seems quite big as a default IMHO, I could imagine Android might complain
> about memory pressure in their 16K config, for example.
>

The force_thp_readahead path in do_sync_mmap_readahead() reads at
HPAGE_PMD_ORDER (2M on x86) and even doubles that to 4M for
non-VM_RAND_READ mappings (ra->size *= 2), with async readahead
enabled. exec_folio_order() is more conservative: a single 2M folio
with async_size=0 and no speculative prefetch. So I think the memory
pressure should not be worse than what x86 already has?
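
For reference, the two paths in do_sync_mmap_readahead() look roughly
like this (paraphrased from my reading of mm/filemap.c; exact code may
differ):

	/* MADV_HUGEPAGE path: speculative, async readahead on */
	if (vm_flags & VM_HUGEPAGE) {
		ra->size = HPAGE_PMD_NR;	/* 2M worth of pages on x86 */
		if (!(vm_flags & VM_RAND_READ))
			ra->size *= 2;		/* up to 4M */
		ra->async_size = HPAGE_PMD_NR;
		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
		return fpin;
	}

	/* VM_EXEC path: one folio at the fault address, no prefetch */
	if (vm_flags & VM_EXEC) {
		ra->size = 1UL << exec_folio_order();
		ra->async_size = 0;	/* no PG_readahead marker */
		page_cache_ra_order(&ractl, ra, exec_folio_order());
		return fpin;
	}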

As for memory pressure on Android's 16K config: the readahead is
clamped to the VMA boundaries, so a small shared library won't read in
a full 2M. page_cache_ra_order() also reduces the folio order near EOF
and on allocation failure, so with the current code the 2M order is a
preference, not a guarantee.
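
To illustrate that clamping, the allocation loop in
page_cache_ra_order() goes roughly like this (again paraphrased):

	while (index <= limit) {
		unsigned int order = new_order;

		/* Align with smaller pages if needed */
		if (index & ((1UL << order) - 1))
			order = __ffs(index);
		/* Don't allocate pages past EOF */
		while (index + (1UL << order) - 1 > limit)
			order--;
		err = ra_alloc_folio(ractl, index, mark, order, gfp);
		if (err)
			break;	/* fall back to order-0 readahead */
		index += 1UL << order;
	}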

> Additionally, ELF files are normally only aligned to 64K and you can only get
> the TLB benefits if the memory is aligned in physical and virtual memory.
>
> > Patch 1 fixes this by
> > using ilog2(CONT_PTES) which evaluates to the optimal order for all
> > page sizes.
> >
> > 2. Even with the optimal order, the mmap_miss heuristic in
> > do_sync_mmap_readahead() silently disables exec readahead after 100
> > page faults. The mmap_miss counter tracks whether readahead is useful
> > for mmap'd file access:
> >
> > - Incremented by 1 in do_sync_mmap_readahead() on every page cache
> > miss (page needed IO).
> >
> > - Decremented by N in filemap_map_pages() for N pages successfully
> > mapped via fault-around (pages found in cache without faulting,
> > evidence that readahead was useful). Only non-workingset pages
> > count; recently evicted and re-read pages don't count as hits.
> >
> > - Decremented by 1 in do_async_mmap_readahead() when a PG_readahead
> > marker page is found (indicates sequential consumption of readahead
> > pages).
> >
> > When mmap_miss exceeds MMAP_LOTSAMISS (100), all readahead is
> > disabled. On 64K pages, both decrement paths are inactive:
> >
> > - filemap_map_pages() is never called because fault_around_pages
> > (65536 >> PAGE_SHIFT = 1) disables should_fault_around(), which
> > requires fault_around_pages > 1. With only 1 page in the
> > fault-around window, there is nothing "around" to map.
> >
> > - do_async_mmap_readahead() never fires for exec mappings because
> > exec readahead sets async_size = 0, so no PG_readahead markers
> > are placed.
> >
> > With no decrements, mmap_miss monotonically increases past
> > MMAP_LOTSAMISS after 100 faults, disabling exec readahead
> > for the remainder of the mapping.
> > Patch 2 fixes this by moving the VM_EXEC readahead block
> > above the mmap_miss check, since exec readahead is targeted (one
> > folio at the fault location, async_size=0), not speculative prefetch.
>
> Interesting!
>
> >
> > 3. Even with correct folio order and readahead, contpte mapping requires
> > the virtual address to be aligned to CONT_PTE_SIZE (2M on 64K pages).
> > The readahead path aligns file offsets and the buddy allocator aligns
> > physical memory, but the virtual address depends on the VMA start.
> > For PIE binaries, ASLR randomizes the load address at PAGE_SIZE (64K)
> > granularity, giving only a 1/32 chance of 2M alignment. When
> > misaligned, contpte_set_ptes() never sets the contiguous PTE bit for
> > any folio in the VMA, resulting in zero iTLB coalescing benefit.
> >
> > Patch 3 fixes this for the main binary by bumping the ELF loader's
> > alignment to PAGE_SIZE << exec_folio_order() for ET_DYN binaries.
> >
> > Patch 4 fixes this for shared libraries by adding a contpte-size
> > alignment fallback in thp_get_unmapped_area_vmflags(). The existing
> > PMD_SIZE alignment (512M on 64K pages) is too large for typical shared
> > libraries, so this smaller fallback (2M) succeeds where PMD fails.
>
> I don't see how you can reliably influence this from the kernel? The ELF file
> alignment is, by default, 64K (16K on Android) and there is no guarantee that
> the text section is the first section in the file. You need to align the start
> of the text section to the 2M boundary and to do that, you'll need to align the
> start of the file to some 64K boundary at a specific offset to the 2M boundary,
> based on the size of any sections before the text section. That's a job for the
> dynamic loader I think? Perhaps I've misunderstood what you're doing...
>

I only started looking into how this works a few days before sending
these patches, so I could be wrong (please do correct me if that's the
case!)

For the main binary (patch 3): load_elf_binary() controls load_bias.
Each PT_LOAD segment is mapped at load_bias + p_vaddr via elf_map(),
and the alignment variable feeds directly into the load_bias
calculation. If p_vaddr=0 and p_offset=0, mapped_addr = load_bias + 0
= load_bias, so by ensuring load_bias is folio-size aligned, the text
segment's virtual address is also folio-size aligned.
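
Simplified, the idea in patch 3 is (hedged sketch; the real code keeps
the existing maximum_alignment() handling in load_elf_binary()):

	alignment = max(alignment, PAGE_SIZE << exec_folio_order());
	...
	if (current->flags & PF_RANDOMIZE)
		load_bias += arch_mmap_rnd();
	/* Adjust alignment as requested. */
	if (alignment)
		load_bias &= ~(alignment - 1);

With load_bias aligned to CONT_PTE_SIZE, load_bias + p_vaddr is too
for p_vaddr == 0.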

For shared libraries (patch 4): ld.so loads these via mmap(), and the
kernel's get_unmapped_area callback (thp_get_unmapped_area for ext4,
xfs, btrfs) picks the virtual address. The existing code tries
PMD_SIZE alignment first (512M on 64K pages), which is too large for
typical shared libraries and always fails. Patch 4 adds a fallback
that tries folio-size alignment (2M), which is small enough to succeed
for most libraries.
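
Concretely, the fallback in thp_get_unmapped_area_vmflags() is roughly
(simplified sketch of patch 4):

	ret = __thp_get_unmapped_area(filp, addr, len, off, flags,
				      PMD_SIZE, vm_flags);
	if (ret)
		return ret;

	/* PMD_SIZE (512M on 64K pages) didn't fit; try the much
	 * smaller contpte-sized alignment before giving up. */
	ret = __thp_get_unmapped_area(filp, addr, len, off, flags,
				      PAGE_SIZE << exec_folio_order(),
				      vm_flags);
	if (ret)
		return ret;

	/* ...else fall back to the default unaligned search */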

> >
> > I created a benchmark that mmaps a large executable file and calls
> > RET-stub functions at PAGE_SIZE offsets across it. "Cold" measures
> > fault + readahead cost. "Random" first faults in all pages with a
> > sequential sweep (not measured), then measures time for calling random
> > offsets, isolating iTLB miss cost for scattered execution.
> >
> > The benchmark results on Neoverse V2 (Grace), arm64 with 64K base pages,
> > 512MB executable file on ext4, averaged over 3 runs:
> >
> > Phase | Baseline | Patched | Improvement
> > -----------|--------------|--------------|------------------
> > Cold fault | 83.4 ms | 41.3 ms | 50% faster
> > Random | 76.0 ms | 58.3 ms | 23% faster
>
> I think the proper way to do this is to link the text section with 2M alignment
> and have the dynamic linker mark the region with MADV_HUGEPAGE?
>

On arm64 with 64K pages, the force_thp_readahead path triggered by
MADV_HUGEPAGE reads at HPAGE_PMD_ORDER (512M). Even with file and anon
khugepaged support added, the collapse won't happen from the start.
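
For concreteness, with a 64K base page size:

	PTEs per table = 64K / 8    = 8192 (2^13)
	PMD_SIZE       = 64K * 8192 = 512M (= HPAGE_PMD_SIZE)
	CONT_PTE_SIZE  = 64K * 32   = 2M   (CONT_PTES = 32)

so a single MADV_HUGEPAGE readahead would try to pull in 512M, versus
2M for a contpte-sized folio.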

Yes, I think the dynamic linker is also a good alternative approach, as
in Wang's patches [1]. But doing it in the kernel would be more
transparent?

[1] https://sourceware.org/pipermail/libc-alpha/2026-March/175776.html

> Thanks,
> Ryan
>
>
> >
> > [1] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@xxxxxxx/
> >
> > Usama Arif (4):
> > arm64: request contpte-sized folios for exec memory
> > mm: bypass mmap_miss heuristic for VM_EXEC readahead
> > elf: align ET_DYN base to exec folio order for contpte mapping
> > mm: align file-backed mmap to exec folio order in
> > thp_get_unmapped_area
> >
> > arch/arm64/include/asm/pgtable.h | 9 ++--
> > fs/binfmt_elf.c | 15 +++++++
> > mm/filemap.c | 72 +++++++++++++++++---------------
> > mm/huge_memory.c | 17 ++++++++
> > 4 files changed, 75 insertions(+), 38 deletions(-)
> >
>
>