Re: [PATCH 2/2] cgroup: Remove call to synchronize_rcu in cgroup_attach_task

From: Mike Galbraith
Date: Wed Apr 13 2011 - 12:57:11 EST


On Wed, 2011-04-13 at 15:16 +0200, Paul Menage wrote:
> On Wed, Apr 13, 2011 at 5:11 AM, Mike Galbraith <efault@xxxxxx> wrote:
> > If the user _does_ that rmdir(), it's more or less back to square one.
> > RCU grace periods should not impact userland, but if you try to do
> > create/attach/detach/destroy, you run into the same bottleneck, as does
> > any asynchronous GC, though that's not such a poke in the eye. I tried
> > a straightforward move to schedule_work(), and it seems to work just
> > fine. rmdir() no longer takes ~30ms on my box, but closer to 20us.
>
> > + /*
> > + * Release the subsystem state objects.
> > + */
> > + for_each_subsys(cgrp->root, ss)
> > + ss->destroy(ss, cgrp);
> > +
> > + cgrp->root->number_of_cgroups--;
> > + mutex_unlock(&cgroup_mutex);
> > +
> > + /*
> > + * Drop the active superblock reference that we took when we
> > + * created the cgroup
> > + */
> > + deactivate_super(cgrp->root->sb);
> > +
> > + /*
> > + * if we're getting rid of the cgroup, refcount should ensure
> > + * that there are no pidlists left.
> > + */
> > + BUG_ON(!list_empty(&cgrp->pidlists));
> > +
> > + kfree(cgrp);
>
> We might want to punt this through RCU again, in case the subsystem
> destroy() callbacks left anything around that was previously depending
> on the RCU barrier.
>
> Also, I'd be concerned that subsystems might get confused by the fact
> that a new group called 'foo' could be created before the old 'foo'
> has been cleaned up? (And do any subsystems rely on being able to
> access the cgroup dentry up until the point when destroy() is called?)

Yeah, I already have head-scratching sessions planned for those, which is
why I said it 'seems' to work fine, and why it's Not-signed-off-by: :)
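For reference, the rough shape of the deferral I'm poking at is below.
Untested sketch only: the destroy_work field and the helper names are
made up for illustration, and per your concern the final kfree() goes
back through call_rcu() so anything still depending on the grace period
keeps it.

/*
 * Sketch: cgroup_queue_destroy() punts the heavy teardown to a
 * workqueue so rmdir() returns quickly; the final free still waits
 * out an RCU grace period.  destroy_work is a hypothetical
 * struct work_struct member of struct cgroup.
 */
static void cgroup_free_rcu(struct rcu_head *head)
{
	struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);

	/*
	 * If we're getting rid of the cgroup, refcount should ensure
	 * that there are no pidlists left.
	 */
	BUG_ON(!list_empty(&cgrp->pidlists));
	kfree(cgrp);
}

static void cgroup_destroy_work_fn(struct work_struct *work)
{
	struct cgroup *cgrp = container_of(work, struct cgroup, destroy_work);
	struct cgroup_subsys *ss;

	mutex_lock(&cgroup_mutex);

	/* Release the subsystem state objects. */
	for_each_subsys(cgrp->root, ss)
		ss->destroy(ss, cgrp);

	cgrp->root->number_of_cgroups--;
	mutex_unlock(&cgroup_mutex);

	/*
	 * Drop the active superblock reference that we took when we
	 * created the cgroup.
	 */
	deactivate_super(cgrp->root->sb);

	/* Final free goes back through RCU, as you suggested. */
	call_rcu(&cgrp->rcu_head, cgroup_free_rcu);
}

static void cgroup_queue_destroy(struct cgroup *cgrp)
{
	INIT_WORK(&cgrp->destroy_work, cgroup_destroy_work_fn);
	schedule_work(&cgrp->destroy_work);
}

The dentry question is the open one: if anything pokes at it after
rmdir() returns, this ordering isn't good enough as-is.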

-Mike
