[PATCH] mm: move swap-in anonymous page into active list
From: Minchan Kim
Date: Thu Jul 28 2016 - 23:25:05 EST
Every swap-in anonymous page starts from the inactive lru list's head.
It should be activated unconditionally when the VM decides to reclaim
it, because the page table entry for the page usually has the accessed
bit set already. Thus, its window size for getting a new reference is
2 * NR_inactive + NR_active, while that of other pages is
NR_inactive + NR_active.
It's not fair that it gets more chance to be referenced than other
newly allocated pages, which start from the active lru list's head.
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
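Not part of the commit message: below is a minimal userspace sketch of
the window-size arithmetic above, assuming a simplified two-list LRU in
which reclaim unconditionally re-activates a page whose accessed bit is
set. The list lengths and helper names (window_from_active_head() etc.)
are hypothetical illustrations, not kernel code.

/*
 * Sketch only, under the assumptions stated above: compare how long a
 * page can go unreferenced before reclaim, depending on where it enters
 * the LRU. All numbers and names are hypothetical.
 */
#include <stdio.h>

#define NR_INACTIVE 100   /* hypothetical inactive list length */
#define NR_ACTIVE   100   /* hypothetical active list length */

/*
 * A page entering at the active head travels the active list once and
 * the inactive list once before it can be reclaimed.
 */
static int window_from_active_head(void)
{
	return NR_ACTIVE + NR_INACTIVE;
}

/*
 * A swapped-in page enters at the inactive head with the accessed bit
 * already set, so the first reclaim pass activates it instead of
 * reclaiming it: inactive -> active -> inactive again.
 */
static int window_from_inactive_head_with_accessed_bit(void)
{
	return NR_INACTIVE + NR_ACTIVE + NR_INACTIVE;
}

int main(void)
{
	printf("newly allocated page window: %d\n",
	       window_from_active_head());
	printf("swapped-in page window:      %d\n",
	       window_from_inactive_head_with_accessed_bit());
	return 0;
}

With both lists at 100 pages this prints 200 vs 300, i.e. the swapped-in
page gets the larger window; the one-liner below removes that imbalance
by activating it at swap-in time instead.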
mm/memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memory.c b/mm/memory.c
index 4425b6059339..3a730b920242 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2642,6 +2642,7 @@ int do_swap_page(struct fault_env *fe, pte_t orig_pte)
 	if (page == swapcache) {
 		do_page_add_anon_rmap(page, vma, fe->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true, false);
+		activate_page(page);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, fe->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
--
1.9.1