[RFC 2/3] mm: process_mrelease: skip LRU movement for exclusive file folios

From: Minchan Kim

Date: Mon Apr 13 2026 - 18:43:27 EST


For process_mrelease, skip LRU handling for exclusive file-backed
folios: they will be freed shortly anyway, so moving them around the
LRU is pointless.

This avoids costly LRU movement, which accounts for a significant
portion of the time spent in unmap_page_range:

- 91.31%  0.00%  mmap_exit_test  [kernel.kallsyms]  [k] exit_mm
   - exit_mm
        __mmput
        exit_mmap
        unmap_vmas
      - unmap_page_range
         - 55.75% folio_mark_accessed
            + 48.79% __folio_batch_add_and_move
              4.23% workingset_activation
         + 12.94% folio_remove_rmap_ptes
         + 9.86% page_table_check_clear
         + 3.34% tlb_flush_mmu
           1.06% __page_table_check_pte_clear

Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
mm/memory.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 2f815a34d924..25e17893c919 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1640,6 +1640,8 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
 	bool delay_rmap = false;
 
 	if (!folio_test_anon(folio)) {
+		bool skip_mark_accessed;
+
 		ptent = get_and_clear_full_ptes(mm, addr, pte, nr, tlb->fullmm);
 		if (pte_dirty(ptent)) {
 			folio_mark_dirty(folio);
@@ -1648,7 +1650,16 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
 				*force_flush = true;
 			}
 		}
-		if (pte_young(ptent) && likely(vma_has_recency(vma)))
+
+		/*
+		 * For process_mrelease, skip LRU handling for exclusive
+		 * file-backed folios: they will be freed shortly anyway,
+		 * so moving them around the LRU is pointless.
+		 */
+		skip_mark_accessed = mm_flags_test(MMF_UNSTABLE, mm) &&
+				     folio_mapcount(folio) < 2;
+		if (likely(!skip_mark_accessed) && pte_young(ptent) &&
+		    likely(vma_has_recency(vma)))
 			folio_mark_accessed(folio);
 		rss[mm_counter(folio)] -= nr;
 	} else {
--
2.54.0.rc0.605.g598a273b03-goog