[PATCH v3 0/4] mm: improve large folio readahead and alignment for exec memory
From: Usama Arif
Date: Thu Apr 02 2026 - 14:13:44 EST
v2 -> v3: https://lore.kernel.org/all/20260320140315.979307-1-usama.arif@xxxxxxxxx/
- Take into account READ_ONLY_THP_FOR_FS for elf alignment by aligning
to HPAGE_PMD_SIZE limited to 2M (Rui)
- Reviewed-by tags for patch 1 from Kiryl and Jan
- Remove preferred_exec_order() (Jan)
- Change ra->order to HPAGE_PMD_ORDER if vma_pages(vma) >= HPAGE_PMD_NR
otherwise use exec_folio_order() with gfp &= ~__GFP_RECLAIM for
do_sync_mmap_readahead().
- Change exec_folio_order() to return the 2M (cont-pte size) order for
64K base page size on arm64.
- remove bprm->file NULL check (Matthew)
- Change filp to file (Matthew)
- Improve checking of p_vaddr and p_offset (Rui and Matthew)
v1 -> v2: https://lore.kernel.org/all/20260310145406.3073394-1-usama.arif@xxxxxxxxx/
- disable mmap_miss logic for VM_EXEC (Jan Kara)
- Align in elf only when segment VA and file offset are already aligned (Rui)
- preferred_exec_order() for VM_EXEC sync mmap_readahead which takes into
account zone high watermarks (as an approximation of memory pressure)
(David, or at least my approach to what David suggested in [1] :))
- Extend max alignment to mapping_max_folio_size() instead of
exec_folio_order()
Motivation
===========
exec_folio_order() was introduced [2] to request readahead at an
arch-preferred folio order for executable memory, enabling hardware PTE
coalescing (e.g. arm64 contpte) and PMD mappings on the fault path.
However, several things prevent this from working optimally:
1. The mmap_miss heuristic in do_sync_mmap_readahead() silently disables
exec readahead after 100 page faults. The mmap_miss counter tracks
whether readahead is useful for mmap'd file access:
- Incremented by 1 in do_sync_mmap_readahead() on every page cache
miss (page needed IO).
- Decremented by N in filemap_map_pages() for N pages successfully
mapped via fault-around (pages found in cache without faulting,
evidence that readahead was useful). Only non-workingset pages
count as hits; recently evicted and re-read pages do not.
- Decremented by 1 in do_async_mmap_readahead() when a PG_readahead
marker page is found (indicates sequential consumption of readahead
pages).
When mmap_miss exceeds MMAP_LOTSAMISS (100), all readahead is
disabled. On arm64 with 64K pages, both decrement paths are inactive:
- filemap_map_pages() is never called because fault_around_pages
(65536 >> PAGE_SHIFT = 1) disables should_fault_around(), which
requires fault_around_pages > 1. With only 1 page in the
fault-around window, there is nothing "around" to map.
- do_async_mmap_readahead() never fires for exec mappings because
exec readahead sets async_size = 0, so no PG_readahead markers
are placed.
With no decrements, mmap_miss monotonically increases past
MMAP_LOTSAMISS after 100 faults, disabling exec readahead
for the remainder of the mapping. Patch 1 fixes this by excluding
VM_EXEC VMAs from the mmap_miss logic, similar to how VM_SEQ_READ
is already excluded.
2. exec_folio_order() is an arch-specific hook that returns a static
order (ilog2(SZ_64K >> PAGE_SHIFT)), which is suboptimal for non-4K
page sizes. Patch 2 updates the arm64 exec_folio_order() to return
the 2M order on 64K page configurations (for contpte coalescing,
where the previous SZ_64K value collapsed to order 0) and uses a tiered
allocation strategy in do_sync_mmap_readahead(): if the VMA is large
enough for a full PMD, request HPAGE_PMD_ORDER so the folio can be
PMD-mapped; otherwise fall back to exec_folio_order() for hardware
PTE coalescing. The allocation uses ~__GFP_RECLAIM so it is
opportunistic, falling back to smaller folios without stalling on
reclaim or compaction.
3. Even with correct folio order and readahead, hardware PTE coalescing
(e.g. contpte) and PMD mapping require the virtual address to be
aligned to the folio size. The readahead path aligns file offsets and
the buddy allocator aligns physical memory, but the virtual address
depends on the VMA start. For PIE binaries, ASLR randomizes the load
address at PAGE_SIZE granularity, so on arm64 with 64K pages only
1/32 of load addresses are 2M-aligned. When misaligned, contpte
cannot be used for any folio in the VMA.
Patch 3 fixes this for the main binary by extending maximum_alignment()
in the ELF loader with a folio_alignment() helper that tries two
tiers matching the readahead strategy: first HPAGE_PMD_SIZE for PMD
mapping, then exec_folio_order() as a fallback for hardware TLB
coalescing. The alignment is capped to the segment size to avoid
reducing ASLR entropy for small binaries.
Patch 4 fixes this for shared libraries by adding an
exec_folio_order() alignment fallback in
thp_get_unmapped_area_vmflags(). The existing PMD_SIZE alignment
(512M on arm64 64K pages) is too large for typical shared libraries,
so this smaller fallback succeeds where PMD fails.
I created a benchmark that mmaps a large executable file and calls
RET-stub functions at PAGE_SIZE offsets across it. "Cold" measures
fault + readahead cost. "Random" first faults in all pages with a
sequential sweep (not measured), then measures the time to call
functions at random offsets, isolating iTLB miss cost for scattered
execution.
The benchmark results on Neoverse V2 (Grace), arm64 with 64K base pages,
512MB executable file on ext4, averaged over 3 runs:
Phase | Baseline | Patched | Improvement
-----------|--------------|--------------|------------------
Cold fault | 83.4 ms | 41.3 ms | 50% faster
Random | 76.0 ms | 58.3 ms | 23% faster
[1] https://lore.kernel.org/all/d72d5ca3-4b92-470e-9f89-9f39a3975f1e@xxxxxxxxxx/
[2] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@xxxxxxx/
Usama Arif (4):
mm: bypass mmap_miss heuristic for VM_EXEC readahead
mm: use tiered folio allocation for VM_EXEC readahead
elf: align ET_DYN base for PTE coalescing and PMD mapping
mm: align file-backed mmap to exec folio order in
thp_get_unmapped_area
arch/arm64/include/asm/pgtable.h | 16 ++++++----
fs/binfmt_elf.c | 50 ++++++++++++++++++++++++++++++++
mm/filemap.c | 42 +++++++++++++++++++--------
mm/huge_memory.c | 13 +++++++++
mm/internal.h | 3 +-
mm/readahead.c | 7 ++---
6 files changed, 109 insertions(+), 22 deletions(-)
--
2.52.0