Re: [PATCH v5 11/13] mm: Iterate only over charged shrinkers during memcg shrink_slab()

From: Vladimir Davydov
Date: Thu May 17 2018 - 08:54:40 EST


On Thu, May 17, 2018 at 02:49:26PM +0300, Kirill Tkhai wrote:
> On 17.05.2018 07:16, Vladimir Davydov wrote:
> > On Tue, May 15, 2018 at 05:49:59PM +0300, Kirill Tkhai wrote:
> >>>> @@ -589,13 +647,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> >>>> 			.memcg = memcg,
> >>>> 		};
> >>>>
> >>>> -		/*
> >>>> -		 * If kernel memory accounting is disabled, we ignore
> >>>> -		 * SHRINKER_MEMCG_AWARE flag and call all shrinkers
> >>>> -		 * passing NULL for memcg.
> >>>> -		 */
> >>>> -		if (memcg_kmem_enabled() &&
> >>>> -		    !!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> >>>> +		if (!!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> >>>> 			continue;
> >>>
> >>> I want this check gone. It's easy to achieve, actually - just remove the
> >>> following lines from shrink_node()
> >>>
> >>> 	if (global_reclaim(sc))
> >>> 		shrink_slab(sc->gfp_mask, pgdat->node_id, NULL,
> >>> 			    sc->priority);
> >>
> >> This check is not related to the patchset.
> >
> > Yes, it is. This patch modifies shrink_slab which is used only by
> > shrink_node. Simplifying shrink_node along the way looks right to me.
>
> shrink_slab() is used in more places than just this one.

drop_slab_node() doesn't really count, as it is an extract from shrink_node().

> It does not seem a trivial change to me.
>
> >> Let's not mix everything into a single series of patches, because
> >> after your last remarks it will grow to at least 15 patches.
> >
> > Most of which are trivial so I don't see any problem here.
> >
> >> This patchset can't be responsible for everything.
> >
> > I don't understand why you balk at simplifying the code a bit while you
> > are patching related functions anyway.
>
> Because this function is used in several places, there are some peculiarities
> around root_mem_cgroup initialization, and it is called from those places
> with root_mem_cgroup in different states. It does not seem a trivial fix to me.

Let me do it for you then:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9b697323a88c..e778569538de 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -486,10 +486,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
  * @nid is passed along to shrinkers with SHRINKER_NUMA_AWARE set,
  * unaware shrinkers will receive a node id of 0 instead.
  *
- * @memcg specifies the memory cgroup to target. If it is not NULL,
- * only shrinkers with SHRINKER_MEMCG_AWARE set will be called to scan
- * objects from the memory cgroup specified. Otherwise, only unaware
- * shrinkers are called.
+ * @memcg specifies the memory cgroup to target. Unaware shrinkers
+ * are called only if it is the root cgroup.
  *
  * @priority is sc->priority, we take the number of objects and >> by priority
  * in order to get the scan target.
@@ -554,6 +552,7 @@ void drop_slab_node(int nid)
 		struct mem_cgroup *memcg = NULL;
 
 		freed = 0;
+		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
 			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
 		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
@@ -2557,9 +2556,8 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			shrink_node_memcg(pgdat, memcg, sc, &lru_pages);
 			node_lru_pages += lru_pages;
 
-			if (memcg)
-				shrink_slab(sc->gfp_mask, pgdat->node_id,
-					    memcg, sc->priority);
+			shrink_slab(sc->gfp_mask, pgdat->node_id,
+				    memcg, sc->priority);
 
 			/* Record the group's reclaim efficiency */
 			vmpressure(sc->gfp_mask, memcg, false,
@@ -2583,10 +2581,6 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		}
 	} while ((memcg = mem_cgroup_iter(root, memcg, &reclaim)));
 
-	if (global_reclaim(sc))
-		shrink_slab(sc->gfp_mask, pgdat->node_id, NULL,
-			    sc->priority);
-
 	if (reclaim_state) {
 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
 		reclaim_state->reclaimed_slab = 0;


Seems simple enough to fold it into this patch, doesn't it?