[PATCH v14 18/18] memcg: flush memcg items upon memcg destruction
From: Vladimir Davydov
Date: Mon Dec 16 2013 - 07:19:16 EST
From: Glauber Costa <glommer@xxxxxxxxxx>
When a memcg is destroyed, it is not immediately released: it lingers
until all of its objects are gone. This means that if a memcg is
restarted with the very same workload - a very common case - the
objects already cached won't be billed to the new memcg. This is
undesirable, since a container can exploit it by restarting itself
every time it reaches its limit, coming back up with a fresh new limit.
Now that we have targeted reclaim, I maintain that a memcg being
destroyed should be flushed away. This makes perfect sense if we assume
that a memcg going away most likely indicates an isolated workload
being terminated.
Signed-off-by: Glauber Costa <glommer@xxxxxxxxxx>
Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Balbir Singh <bsingharora@xxxxxxxxx>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
---
mm/memcontrol.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 963285f..28d5472 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6162,12 +6162,40 @@ static void memcg_destroy_kmem(struct mem_cgroup *memcg)
memcg_destroy_all_lrus(memcg);
}
+static void memcg_drop_slab(struct mem_cgroup *memcg)
+{
+ struct shrink_control shrink = {
+ .gfp_mask = GFP_KERNEL,
+ .target_mem_cgroup = memcg,
+ };
+ unsigned long nr_objects;
+
+ nodes_setall(shrink.nodes_to_scan);
+ do {
+ nr_objects = shrink_slab(&shrink, 1000, 1000);
+ } while (nr_objects > 0);
+}
+
static void kmem_cgroup_css_offline(struct mem_cgroup *memcg)
{
if (!memcg_kmem_is_active(memcg))
return;
/*
+ * When a memcg is destroyed, it is not immediately released: it
+ * lingers until all of its objects are gone. This means that if a
+ * memcg is restarted with the very same workload - a very common case
+ * - the objects already cached won't be billed to the new memcg. This
+ * is undesirable, since a container can exploit it by restarting
+ * itself every time it reaches its limit, getting a fresh new limit.
+ *
+ * Therefore a memcg that is destroyed should be flushed away. It makes
+ * perfect sense if we assume that a memcg that goes away indicates an
+ * isolated workload that is terminated.
+ */
+ memcg_drop_slab(memcg);
+
+ /*
* kmem charges can outlive the cgroup. In the case of slab
* pages, for instance, a page contain objects from various
* processes. As we prevent from taking a reference for every
--
1.7.10.4