Re: [PATCH] mm: memcg/slab: fix percpu slab vmstats flushing
From: Johannes Weiner
Date: Thu Dec 19 2019 - 15:15:34 EST
On Wed, Dec 18, 2019 at 03:05:01PM -0800, Roman Gushchin wrote:
> Currently slab percpu vmstats are flushed twice: during the memcg
> offlining and just before freeing the memcg structure.
Please explain here why this double flushing is done. You allude to
it below when describing how it goes wrong, but it'd be better to
state the intent when describing the current implementation, to be
clear about the trade-offs we are making with this patch.
> Each time the percpu counters are summed, added to the atomic
> counterparts and propagated up the cgroup tree.
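(For readers following along: the flush path in question is
memcg_flush_percpu_vmstats(). A simplified sketch of its shape, with
the slab-only/per-node handling and other details elided:

	static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
	{
		struct mem_cgroup *mi;
		int cpu, i;

		for (i = 0; i < MEMCG_NR_STAT; i++) {
			long stat = 0;

			/* Sum the cached percpu deltas... */
			for_each_online_cpu(cpu)
				stat += per_cpu(memcg->vmstats_percpu->stat[i],
						cpu);

			/* ...and fold them into the atomics, up the tree. */
			for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
				atomic_long_add(stat, &mi->vmstats[i]);
		}
	}

Note that the percpu slots are read but never zeroed, which is where
the double count described below comes from.)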
>
> The problem is that the percpu counters are not zeroed after the
> first flushing, so every cached percpu value is summed twice. This
> creates a small error (up to 32 pages per cpu, but usually less)
> which accumulates at the parent cgroup level. After creating and
> destroying thousands of child cgroups, the slab counters at the
> parent level can be way off from the real values.
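(And the error sticks: each flush propagates the double-counted delta
into the parent's atomic counters, so once the child is freed there
is nothing left to correct the parent. E.g. a percpu slot caching +5
pages contributes +5 at offlining and +5 again at free, leaving a
stray +5 at the parent.)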
>
> For now, let's just stop flushing slab counters on memcg offlining.
> It can't be done correctly without scheduling work on each CPU:
> reading and zeroing a counter during css offlining can race with an
> asynchronous update, which doesn't expect the value to be changed
> underneath.
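To spell out the race: stat updates use this_cpu_add(), which is only
safe against concurrency on the local CPU, so a remote read-and-zero
from the offlining path can lose an update. A hypothetical
interleaving (not what the code does today):

	/*
	 *   CPU 0 (css offlining)           CPU 1 (stat update)
	 *   ---------------------           -------------------
	 *   val = read CPU 1's percpu slot
	 *                                   this_cpu_add(slot, x)
	 *   write 0 to CPU 1's percpu slot  <- x is lost
	 */

Doing this correctly would require scheduling work on each CPU so
that every counter is flushed and zeroed locally, which is exactly
the cost the patch declines to pay.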
>
> With this change, slab counters at the parent level become
> eventually consistent: once all dying children are gone, the values
> are correct, and until then the error is capped at 32 * NR_CPUS
> pages per dying cgroup.
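(To put the bound in numbers: the 32 here is presumably
MEMCG_CHARGE_BATCH, so on a 64-CPU machine the worst case is
32 * 64 = 2048 pages, i.e. 8 MB with 4K pages, per dying cgroup,
and it no longer grows over time.)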
>
> It's not perfect, as slabs are reparented, so any updates after the
> reparenting happen at the parent level. It means that if a slab page
> was allocated, a counter at the child level was bumped, and then the
> page was reparented and freed, the positive and negative counter
> values will not cancel out until the child cgroup is released. This
> makes slab counters different from the others, and it might push us
> to implement flushing in a correct form again. But it's also a
> question of performance: scheduling work on each CPU isn't free, and
> it's an open question whether the benefit of having more accurate
> counters is worth it.
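The reparenting scenario above as a timeline (illustrative only):

	/*
	 *   1. slab page charged to child C       -> C's percpu: +N
	 *   2. C offlined, its slab pages reparented to P
	 *   3. page freed, uncharged at P's level -> P's percpu: -N
	 *
	 * The cached +N at C's level is only folded into P's counters
	 * when C is finally released, so until then P's summed value
	 * can read low, capped by the batch size times NR_CPUS.
	 */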
>
> We might also consider flushing all counters on offlining, not only
> slab counters.
>
> So let's fix the main problem now: make the slab counters eventually
> consistent, so that at least the error won't grow with uptime (or,
> more precisely, with the number of created and destroyed cgroups),
> and think about the accuracy of the counters separately.
>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>
> Fixes: bee07b33db78 ("mm: memcontrol: flush percpu slab vmstats on kmem offlining")
> Cc: stable@xxxxxxxxxxxxxxx
Other than that, the change looks reasonable to me.
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>