[RFC PATCH 2/2] filemap: use high-order folios in filemap sync RA
From: Anatoly Stepanov
Date: Wed Apr 15 2026 - 07:48:03 EST
[Idea]
If an mmap'ed file is accessed in a pattern where async RA never
kicks in, we might end up with only 0-order folios in the page cache.
If fault_around_bytes covers more than a single page, it's beneficial
to use high-order folios instead, which brings a significant
filemap_map_pages() speedup.
So, let's just use fault_around_bytes as a starting point here.
If an arch supports PTE coalescing, we get even more out of it for
free (see the arm64 example below).
We don't save the new order to "ra->order", so if async RA happens
later, it still starts from order 0 as usual.
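For concreteness, with 4K base pages and the default 64K
fault_around_bytes, the order derived by the patch works out as
follows (a userspace sketch of the same arithmetic, with
__builtin_ctzl() standing in for the kernel's __ffs()):

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;			/* 4K base pages */
	unsigned long fault_around_bytes = 65536;	/* default: 64K */
	unsigned long fault_around_pages = fault_around_bytes / page_size;

	/* __ffs() on a power of two equals log2: 16 pages -> order 4 */
	unsigned int order = __builtin_ctzl(fault_around_pages);

	printf("sync_mmap_order = %u -> %lu KiB folios\n",
	       order, (page_size << order) / 1024);
	return 0;
}

This prints "sync_mmap_order = 4 -> 64 KiB folios", i.e. the sync RA
path starts allocating order-4 folios instead of order-0 ones.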
[Things to be discussed]
At the same time, I can see a drawback for 16K and 64K base pages:
fault_around_bytes still defaults to 64K there, so the derived order
stays small (order 0 with 64K pages). It may make more sense to define
fault_around_bytes as order-N of PAGE_SIZE rather than as a fixed
number of bytes.
Another issue: when fault_around=0 we may still want high-order folios
for sync RA, e.g. to get cont-PTE mappings. For that we could use
something like "max(fault_around_order, cont_pte_order)", or introduce
a dedicated tunable like "sync_mmap_order" (see the sketch below).
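A rough sketch of that fallback, just to illustrate the shape of the
idea (cont_pte_order() is hypothetical here, standing in for an arch
hook that on arm64 could be derived from CONT_PTE_SHIFT - PAGE_SHIFT):

static unsigned int sync_mmap_order(void)
{
	unsigned int order = fault_around_pages > 1 ?
			     __ffs(fault_around_pages) : 0;

	/* Still hand out cont-PTE-sized folios when fault_around is off */
	return max(order, cont_pte_order());
}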
[Benchmark]
The simple benchmark below reads a 100M file in 4M (RA size) strides,
so that async RA doesn't kick in and the page cache ends up filled
with 0-order folios.
The patched kernel gives an ~3x increase in throughput once the page
cache is populated.
The main speedup comes from filemap_map_pages(), thanks to the use of
high-order folios.
As a bonus, we get better cont-PTE bit coverage on arm64.
Example:
// Open a 100M file and touch one byte in every 4M chunk, given max_ra=4M.
// Perform 10 runs and measure the throughput.
...
char *map = mmap(NULL, filesize, PROT_READ, MAP_PRIVATE, fd, 0);
if (map == MAP_FAILED) {
	perror("Error mapping file");
	close(fd);
	return 1;
}

struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);

unsigned int size_4M = 4 * 1024 * 1024;
unsigned int num_reads = filesize / size_4M;
volatile char val;

/* Touch one byte per 4M chunk: one fault per RA window, no async RA */
for (unsigned int i = 0; i < num_reads; i++) {
	off_t offset = (off_t)i * size_4M;
	val = map[offset];
}

clock_gettime(CLOCK_MONOTONIC, &end);
...
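The throughput figures below come from the two timestamps; the
reporting part is elided above, but a minimal reconstruction of it
(not the original code) would be:

double elapsed = (end.tv_sec - start.tv_sec) +
		 (end.tv_nsec - start.tv_nsec) / 1e9;

/* One "operation" per 4M chunk touched */
printf("Throughput: %.2f operations per second\n", num_reads / elapsed);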
Before patch (last 3 runs):
...
Throughput: 127942.68 operations per second
Throughput: 133646.96 operations per second
Throughput: 134321.94 operations per second
// filemap_map_pages() time, fault_around_bytes = 64K
Total time over 10 runs: ~2000 usec
// "smaps" numbers for the test file:
Rss: 1600 kB
Private_Clean: 1600 kB
Referenced: 1540 kB
ContPTE: 0 kB
Patched kernel (last 3 runs):
...
Throughput: 366515.17 operations per second
Throughput: 404465.30 operations per second
Throughput: 370535.05 operations per second
// filemap_map_pages() time, fault_around_bytes = 64K
Total time over 10 runs: ~730 usec
// "smaps" numbers for the test file:
Rss: 1600 kB
Private_Clean: 1600 kB
Referenced: 1540 kB
ContPTE(Rss): 1536 kB
Signed-off-by: Anatoly Stepanov <stepanov.anatoly@xxxxxxxxxx>
---
include/linux/pagemap.h | 1 +
mm/filemap.c | 1 +
mm/internal.h | 1 +
mm/memory.c | 2 +-
mm/readahead.c | 5 +++--
5 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ec442af3f..e133a3a6b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1359,6 +1359,7 @@ struct readahead_control {
struct file *file;
struct address_space *mapping;
struct file_ra_state *ra;
+ unsigned int sync_mmap_order;
/* private: use the readahead_* accessors instead */
pgoff_t _index;
unsigned int _nr_pages;
diff --git a/mm/filemap.c b/mm/filemap.c
index 406cef06b..1ed5a0688 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3398,6 +3398,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
ra->size = ra->ra_pages;
ra->async_size = ra->ra_pages / 4;
ra->order = 0;
+ ractl.sync_mmap_order = __ffs(fault_around_pages);
}
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d..96157c82b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1770,4 +1770,5 @@ static inline int io_remap_pfn_range_complete(struct vm_area_struct *vma,
return remap_pfn_range_complete(vma, addr, pfn, size, prot);
}
+extern unsigned long fault_around_pages;
#endif /* __MM_INTERNAL_H */
diff --git a/mm/memory.c b/mm/memory.c
index 2f815a34d..57ae027dd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5670,7 +5670,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
return ret;
}
-static unsigned long fault_around_pages __read_mostly =
+unsigned long fault_around_pages __read_mostly =
65536 >> PAGE_SHIFT;
#ifdef CONFIG_DEBUG_FS
diff --git a/mm/readahead.c b/mm/readahead.c
index 7b05082c8..322bc115b 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -476,7 +476,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
unsigned int nofs;
int err = 0;
gfp_t gfp = readahead_gfp_mask(mapping);
- unsigned int new_order = ra->order;
+ unsigned int new_order = max(ra->order, ractl->sync_mmap_order);
trace_page_cache_ra_order(mapping->host, start, ra);
if (!mapping_large_folio_support(mapping)) {
@@ -490,7 +490,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
new_order = min_t(unsigned int, new_order, ilog2(ra->size));
new_order = max(new_order, min_order);
- ra->order = new_order;
+ if (ra->order >= ractl->sync_mmap_order)
+ ra->order = new_order;
/* See comment in page_cache_ra_unbounded() */
nofs = memalloc_nofs_save();
--
2.34.1