Re: [PATCH] mm: memcontrol: switch to rcu protection in drain_all_stock()

From: Roman Gushchin
Date: Fri Aug 02 2019 - 12:56:24 EST


On Fri, Aug 02, 2019 at 11:33:33AM +0800, Hillf Danton wrote:
>
> On Thu, 1 Aug 2019 16:35:13 -0700 Roman Gushchin wrote:
> >
> > Commit 72f0184c8a00 ("mm, memcg: remove hotplug locking from try_charge")
> > introduced css_tryget()/css_put() calls in drain_all_stock(),
> > which are supposed to protect the target memory cgroup from being
> > released during the mem_cgroup_is_descendant() call.
> >
> > However, it's not completely safe. In theory, the memcg can go away
> > between reading the stock->cached pointer and calling css_tryget().
>
> Good catch!
> >
> > So, let's read the stock->cached pointer and evaluate the memory
> > cgroup inside an rcu read section, and get rid of the
> > css_tryget()/css_put() calls.
>
> You need to either adjust the boundary of the rcu-protected section, or
> retain the call pairs, as the memcg cache is dereferenced again in
> drain_stock().

Not really. drain_stock() always accesses the local percpu stock, and the
stock->cached memcg pointer is protected by the references held by the
stocked pages. Pages are stocked and drained only locally, so they can't
go away under us. So if (stock->nr_pages > 0), the memcg holds at least
stock->nr_pages css references.
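
For reference, drain_stock() looked roughly like this at the time (a
simplified sketch from mm/memcontrol.c of that era, not the exact code;
note how the per-page css references pin stock->cached until the stock
is drained):

/*
 * Simplified sketch of drain_stock(); it only ever runs on the local
 * CPU, either directly or from the per-cpu work item.
 */
static void drain_stock(struct memcg_stock_pcp *stock)
{
	struct mem_cgroup *old = stock->cached;

	if (stock->nr_pages) {
		page_counter_uncharge(&old->memory, stock->nr_pages);
		if (do_memsw_account())
			page_counter_uncharge(&old->memsw, stock->nr_pages);
		/* drop the per-page css references pinning 'old' */
		css_put_many(&old->css, stock->nr_pages);
		stock->nr_pages = 0;
	}
	stock->cached = NULL;
}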

Also, because stocks on other CPUs are drained via scheduled work,
neither rcu_read_lock() nor css_tryget()/css_put() protects the draining
itself.
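
(For context, the work callback is drain_local_stock(); again a rough
sketch of the code of that era, exact details may differ:)

/*
 * Rough sketch of drain_local_stock(), the work callback scheduled by
 * drain_all_stock(); it runs on the target CPU, outside any rcu read
 * section held by the scheduling CPU.
 */
static void drain_local_stock(struct work_struct *dummy)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;

	local_irq_save(flags);
	stock = this_cpu_ptr(&memcg_stock);
	drain_stock(stock);
	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
	local_irq_restore(flags);
}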

That's exactly why I think this code is worth changing: it's confusing.
It looks like css_tryget()/css_put() protect the stock draining, but
that's not true.

Thanks!

> >
> > Signed-off-by: Roman Gushchin <guro@xxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> > ---
> > mm/memcontrol.c | 17 +++++++++--------
> > 1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 5c7b9facb0eb..d856b64426b7 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2235,21 +2235,22 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  	for_each_online_cpu(cpu) {
> >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> >  		struct mem_cgroup *memcg;
> > +		bool flush = false;
> > 
> > +		rcu_read_lock();
> >  		memcg = stock->cached;
> > -		if (!memcg || !stock->nr_pages || !css_tryget(&memcg->css))
> > -			continue;
> > -		if (!mem_cgroup_is_descendant(memcg, root_memcg)) {
> > -			css_put(&memcg->css);
> > -			continue;
> > -		}
> > -		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> > +		if (memcg && stock->nr_pages &&
> > +		    mem_cgroup_is_descendant(memcg, root_memcg))
> > +			flush = true;
> > +		rcu_read_unlock();
> > +
> > +		if (flush &&
> > +		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> >  			if (cpu == curcpu)
> >  				drain_local_stock(&stock->work);
> >  			else
> >  				schedule_work_on(cpu, &stock->work);
> >  		}
> > -		css_put(&memcg->css);
> >  	}
> >  	put_cpu();
> >  	mutex_unlock(&percpu_charge_mutex);
> > --
> > 2.21.0
> >
>
>