On Fri, 4 Jul 2025 11:19:26 +0800 Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
After commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs"),
tmpfs supports large folio allocation (not just PMD-sized large folios).
However, when tmpfs is accessed via mmap(), we still establish mappings at
base-page granularity even when the underlying folio is large, which is
wasteful.
We can instead map multiple consecutive pages of a tmpfs folio at once,
according to the size of the large folio. On one hand, this reduces the
overhead of page faults; on the other hand, it lets hardware architecture
optimizations reduce TLB misses, such as contiguous PTEs on the ARM
architecture.
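In code terms, the idea is roughly the following. This is a simplified
sketch of what finish_fault() ends up doing, not the exact upstream code;
the fallback checks are abbreviated to a comment, though set_pte_range(),
folio_nr_pages() and folio_page_idx() are the real mm helpers:

/*
 * Simplified sketch (not the exact upstream hunk): when the faulting
 * page sits inside a large folio, map the whole folio with a single
 * set_pte_range() call instead of one PTE per fault.
 */
static void map_folio_range(struct vm_fault *vmf, struct folio *folio,
			    struct page *page)
{
	unsigned int nr_pages = folio_nr_pages(folio);
	pgoff_t idx = folio_page_idx(folio, page);
	/* Virtual address backed by the first page of the folio. */
	unsigned long addr = vmf->address - idx * PAGE_SIZE;

	/*
	 * Abbreviated: the real finish_fault() falls back to a single
	 * page when the folio would spill past the VMA or the PMD
	 * entry, or when userfaultfd is armed on the VMA.
	 */
	set_pte_range(vmf, folio, folio_page(folio, 0), nr_pages, addr);
}

Setting nr_pages contiguous PTEs in one call is also what gives the
architecture code the chance to use contiguous-PTE mappings.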
Moreover, a tmpfs mount controls large folio allocation explicitly via the
'huge=' option, so this behavior is opt-in. A process's RSS statistics may
therefore increase, but I do not expect this to have any noticeable effect
on users.
Performance test:
I created a 1G tmpfs file, populated with 64K large folios, and write-accessed it
sequentially via mmap(). I observed a significant performance improvement:
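For anyone who wants to reproduce this, a minimal test along those lines
could look like the sketch below. The mount point, file path and 4K
base-page assumption are mine, not the poster's exact recipe; it assumes a
tmpfs mounted with something like 'mount -t tmpfs -o huge=always,size=2G
tmpfs /mnt/tmpfs', so that the 64K pwrite()s populate the file with 64K
folios:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE	(1UL << 30)	/* 1G file */
#define CHUNK		(64UL << 10)	/* 64K writes -> 64K folios */
#define PAGE		4096UL		/* base page size assumed 4K */

int main(void)
{
	static char buf[CHUNK];
	struct timespec t0, t1;
	size_t off;

	/* Assumed path: any file on a 'huge='-enabled tmpfs will do. */
	int fd = open("/mnt/tmpfs/testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Populate the page cache with 64K large folios via write(). */
	memset(buf, 1, sizeof(buf));
	for (off = 0; off < FILE_SIZE; off += CHUNK)
		if (pwrite(fd, buf, CHUNK, off) != (ssize_t)CHUNK) {
			perror("pwrite");
			return 1;
		}

	char *p = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Time the sequential write access: one fault path per touch. */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (off = 0; off < FILE_SIZE; off += PAGE)
		p[off] = 2;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("touch took %.3fs\n", (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);

	munmap(p, FILE_SIZE);
	close(fd);
	return 0;
}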
That doesn't sound like a crazy thing to do.
Before the patch:
real 0m0.158s
user 0m0.008s
sys 0m0.150s
After the patch:
real 0m0.021s
user 0m0.004s
sys 0m0.017s
And look at that.
diff --git a/mm/memory.c b/mm/memory.c
index 0f9b32a20e5b..9944380e947d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5383,10 +5383,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	/*
 	 * Using per-page fault to maintain the uffd semantics, and same
-	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * approach also applies to non shmem/tmpfs faults to avoid
 	 * inflating the RSS of the process.
 	 */
-	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
+	if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
 	    unlikely(needs_fallback)) {
 		nr_pages = 1;
 	} else if (nr_pages > 1) {
and that's it?
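For reference, the whole behavior change is in that one widened check.
Paraphrasing the two helpers (from mm/shmem.c; the exact form may vary by
kernel version):

/* Paraphrased from mm/shmem.c; check your tree for the exact form. */
bool vma_is_anon_shmem(struct vm_area_struct *vma)
{
	/* Anonymous shared memory, e.g. MAP_ANONYMOUS | MAP_SHARED. */
	return vma->vm_ops == &shmem_anon_vm_ops;
}

bool vma_is_shmem(struct vm_area_struct *vma)
{
	/* The above, plus mmap()s of tmpfs files. */
	return vma_is_anon_shmem(vma) || vma->vm_ops == &shmem_vm_ops;
}

So the multi-page mapping path, previously limited to anonymous shared
memory, now also applies to mmap()ed tmpfs files, which is exactly what the
changelog says.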
I'm itching to get this into -stable, really. What LTS user wouldn't
want this?
acd7ccb284b8?