[PATCH 3/3] mm: vmscan: shrink_slab: do not skip caches with < batch_size objects
From: Vladimir Davydov
Date: Fri Jan 17 2014 - 14:25:47 EST
In its current implementation, shrink_slab() won't scan caches that have
fewer than batch_size objects. If there are only a few shrinkers
available, such behavior won't cause any problems, because batch_size is
usually small. But if we have a lot of slab shrinkers, which is
perfectly possible since FS shrinkers are now per-superblock, we can end
up with hundreds of megabytes of practically unreclaimable kmem objects.
For instance, mounting a thousand ext2 FS images with a hundred files in
each and iterating over all the files using du(1) will result in about
200 MB of FS caches that cannot be dropped even with the aid of the
vm.drop_caches sysctl! Fix this.
Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Dave Chinner <dchinner@xxxxxxxxxx>
Cc: Glauber Costa <glommer@xxxxxxxxx>
---
mm/vmscan.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f6d716d..2e710d4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -275,7 +275,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 	 * a large delta change is calculated directly.
 	 */
 	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
+		total_scan = min(total_scan, max(freeable / 2, batch_size));
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
@@ -289,21 +289,34 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 				nr_pages_scanned, lru_pages,
 				freeable, delta, total_scan);
 
-	while (total_scan >= batch_size) {
+	/*
+	 * To avoid CPU cache thrashing, we should not scan less than
+	 * batch_size objects in one pass, but if the cache has less
+	 * than batch_size objects in total, and we really want to
+	 * shrink them all, go ahead and scan what we have, otherwise
+	 * last batch_size objects will never get reclaimed.
+	 */
+	if (total_scan < batch_size &&
+	    !(freeable < batch_size && total_scan >= freeable))
+		goto out;
+
+	do {
 		unsigned long ret;
+		unsigned long nr_to_scan = min(total_scan, batch_size);
 
-		shrinkctl->nr_to_scan = batch_size;
+		shrinkctl->nr_to_scan = nr_to_scan;
		ret = shrinker->scan_objects(shrinker, shrinkctl);
 		if (ret == SHRINK_STOP)
 			break;
 		freed += ret;
 
-		count_vm_events(SLABS_SCANNED, batch_size);
-		total_scan -= batch_size;
+		count_vm_events(SLABS_SCANNED, nr_to_scan);
+		total_scan -= nr_to_scan;
 
 		cond_resched();
-	}
+	} while (total_scan >= batch_size);
 
+out:
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates. If we exhausted the
--
1.7.10.4
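
Here is the same kind of standalone sketch, under the same assumptions as
above (batch_size = 128, a cache with 100 freeable objects; min_ul is an
illustrative helper, not a kernel API), showing how the logic introduced
by this patch lets the small cache be scanned in a single partial batch:

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Same assumed values as in the sketch above. */
	unsigned long batch_size = 128;
	unsigned long freeable   = 100;
	unsigned long total_scan = freeable;
	unsigned long scanned    = 0;

	/* New pre-loop check: bail out only if a partial batch is neither
	 * wanted nor sufficient to cover the whole cache. */
	if (total_scan < batch_size &&
	    !(freeable < batch_size && total_scan >= freeable))
		goto out;

	do {
		unsigned long nr_to_scan = min_ul(total_scan, batch_size);

		scanned    += nr_to_scan;
		total_scan -= nr_to_scan;
	} while (total_scan >= batch_size);
out:
	/* Prints "scanned 100 of 100": the small cache is reclaimed. */
	printf("scanned %lu of %lu\n", scanned, freeable);
	return 0;
}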