Re: [PATCH REPOST RFC cgroup/for-3.7] cgroup: mark subsystems with broken hierarchy support and whine if cgroups are nested for them

From: Michal Hocko
Date: Wed Sep 12 2012 - 11:47:47 EST


On Tue 11-09-12 10:07:46, Tejun Heo wrote:
> Hello, Michal.
>
> On Tue, Sep 11, 2012 at 12:04:33PM +0200, Michal Hocko wrote:
> > > 	cgroup_unlock();
> > > @@ -4953,6 +4958,7 @@ mem_cgroup_create(struct cgroup *cont)
> > > 						&per_cpu(memcg_stock, cpu);
> > > 			INIT_WORK(&stock->work, drain_local_stock);
> > > 		}
> > > +		mem_cgroup_subsys.broken_hierarchy = !memcg->use_hierarchy;
> >
> > Hmmm, this will warn even if we have
> > 	root (default use_hierarchy=0)
> > 	 \
> > 	  A (use_hierarchy=1)
> > 	   \
> > 	    B  <- here
> >
> > which is unfortunate because it will add noise to a reasonable
> > configuration.
>
> I suppose you're talking about having root group not performing any
> accounting and/or control?

It doesn't do any controlling because you cannot set any limit for it.
The root cgroup has always been special.

> I suppose such could be a valid use case
> (is it really necessary tho?) but I don't think .use_hierarchy is the
> right interface for that.

I am not sure I understand what you mean by that. My only concern here
is that we shouldn't complain if somebody isn't doing anything wrong.
Creating a group directly under root, with no further nesting and no
matter what use_hierarchy says, is a valid use case and we shouldn't
make too much noise about it.
The only difference in such a setup is that root's hierarchical stats
will include numbers from the group only if root had use_hierarchy==1.
There are no other side effects.
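
To illustrate what I have in mind, here is a rough sketch (not a tested
patch; the exact placement in mem_cgroup_create() and the
root_mem_cgroup check are my assumptions): set the flag from the
parent's state when a group is nested below a non-root parent, instead
of keying it off the root group's default:

	} else {
		parent = mem_cgroup_from_cont(cont->parent);
		memcg->use_hierarchy = parent->use_hierarchy;

		/*
		 * Groups created directly under root are a legitimate flat
		 * setup, so only mark the hierarchy as broken when we nest
		 * below a non-root parent that has use_hierarchy == 0.
		 */
		if (parent != root_mem_cgroup && !parent->use_hierarchy)
			mem_cgroup_subsys.broken_hierarchy = true;
	}

With something like that, creating B under A in the layout above stays
quiet, while nesting below a non-hierarchical, non-root group still
triggers the warning.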

> If it's absolutely necessary, I think it should be a root-only flag
> (even if that ends up using the same code path). Eventually, we
> really want to kill .use_hierarchy, or at least make it RO 1. As
> it's currently defined, it's just way too confusing.

Agreed on that, definitely.
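
Just to make the "RO 1" idea concrete (again only a sketch, assuming
the current mem_cgroup_hierarchy_write() signature; nothing like this
has been posted): the write handler for memory.use_hierarchy could
simply refuse to clear the flag:

	static int mem_cgroup_hierarchy_write(struct cgroup *cont,
					      struct cftype *cft, u64 val)
	{
		/* use_hierarchy is effectively read-only 1 from now on */
		if (val != 1)
			return -EINVAL;
		return 0;
	}

That would keep the file around for compatibility while leaving the
hierarchical behavior as the only one going forward.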

--
Michal Hocko
SUSE Labs