[patch v2] mm, vmscan: avoid thrashing anon lru when free + file is low

From: David Rientjes
Date: Mon May 01 2017 - 17:34:30 EST


The purpose of the code that commit 623762517e23 ("revert 'mm: vmscan: do
not swap anon pages just because free+file is low'") reintroduces is to
prefer swapping anonymous memory rather than thrashing the file lru.
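
For context, that heuristic in get_scan_count() boils down to roughly the
following (a simplified sketch paraphrased from the current code, not a
verbatim quote):

	pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
	pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
		    node_page_state(pgdat, NR_INACTIVE_FILE);

	for (z = 0; z < MAX_NR_ZONES; z++) {
		struct zone *zone = &pgdat->node_zones[z];

		if (!managed_zone(zone))
			continue;
		total_high_wmark += high_wmark_pages(zone);
	}

	/* free + file can no longer cover the node's high watermarks */
	if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
		scan_balance = SCAN_ANON;
		goto out;
	}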

If, however, the anonymous inactive lru for the set of eligible zones is
considered low, or the length of the list at the given reclaim priority
is too short for effective anonymous-only reclaim, then avoid forcing
SCAN_ANON. Forcing it anyway would only thrash the small list while
leaving unreclaimed memory on the file lrus.
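
To make the second condition concrete: the new check treats the list as
too short when its size shifted right by the current priority is zero,
i.e. the base scan target for that lru would be 0. A standalone sketch
of the arithmetic (ordinary userspace C, with the kernel's DEF_PRIORITY
value of 12 hard-coded; illustrative only, not kernel code):

	#include <stdio.h>

	/*
	 * Mirror of the "lru size >> sc->priority" test: on the first
	 * reclaim pass (priority 12), an inactive anon list shorter than
	 * 1 << 12 = 4096 pages shifts down to zero, so anon-only scanning
	 * could not make meaningful progress.
	 */
	#define DEF_PRIORITY 12

	int main(void)
	{
		unsigned long sizes[] = { 1024, 4095, 4096, 262144 };
		int i;

		for (i = 0; i < 4; i++) {
			unsigned long scan = sizes[i] >> DEF_PRIORITY;

			printf("inactive anon: %6lu pages -> base scan target: %3lu%s\n",
			       sizes[i], scan,
			       scan ? "" : "  (too small, don't force SCAN_ANON)");
		}
		return 0;
	}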

If the inactive anon list is insufficient, fall back to balanced reclaim
so the file lru doesn't remain untouched.

Suggested-by: Minchan Kim <minchan@xxxxxxxxxx>
Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
---
to akpm: this issue has been possible since at least 3.15, so it's
probably not high priority for 4.12 but applies cleanly if it can sneak
in

mm/vmscan.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2204,8 +2204,17 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		}
 
 		if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
-			scan_balance = SCAN_ANON;
-			goto out;
+			/*
+			 * Force SCAN_ANON if there are enough inactive
+			 * anonymous pages on the LRU in eligible zones.
+			 * Otherwise, the small LRU gets thrashed.
+			 */
+			if (!inactive_list_is_low(lruvec, false, sc, false) &&
+			    lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
+					>> sc->priority) {
+				scan_balance = SCAN_ANON;
+				goto out;
+			}
 		}
 	}
 