Re: [PATCH] sched: fix another race when reading /proc/sched_debug
From: Peter Zijlstra
Date: Mon Dec 15 2008 - 05:44:18 EST
On Mon, 2008-12-15 at 17:51 +0800, Li Zefan wrote:
> >>> Can't we detect a dead task-group and skip those instead of adding this
> >>> global lock?
> >>>
> >> I tried it, but I don't think it's feasible without lock synchronization:
> >>
> >>                           |  print_cfs_rq()
> >>                           |    check task_group is dead
> >> cgroup_diput()            |
> >> ..                        |
> >> mark task_group as dead   |
> >> ..                        |
> >> kfree(cgrp)               |
> >>                           |  call cgroup_path()
> >
> > rcu free cgrp
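A rough sketch of the reader side this suggestion relies on: if cgrp is freed
via RCU rather than kfree()d directly, the dead-group check and the
cgroup_path() call stay safe as long as both sit inside one rcu read-side
critical section (the call site below is illustrative, not the actual patch):

        rcu_read_lock();
        cgroup = tg->css.cgroup;
        /* cgrp cannot be freed before rcu_read_unlock(). */
        if (cgroup && !cgroup_is_removed(cgroup))
                cgroup_path(cgroup, path, sizeof(path));
        rcu_read_unlock();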
> >
>
> I got your point, thanks.
>
> Another way is to use css_tryget(), which avoids touching cgroup.c and adding
> synchronize_rcu(). css_tryget() was proposed by Kamezawa, but I think it won't be
> available until 2.6.29.
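For comparison, a minimal sketch of the css_tryget() variant, assuming the
proposed css_tryget()/css_put() pair that only takes a reference while the
subsystem state is still live:

        if (tg && css_tryget(&tg->css)) {
                /* The reference pins the cgroup across the call. */
                cgroup_path(tg->css.cgroup, path, sizeof(path));
                css_put(&tg->css);
        }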
>
> Anyway, here is the fix. I'll post a complete version with a changelog when we
> agree on how to fix it.
Yeah, the below looks like a suitable fix.
> Signed-off-by: Li Zefan <lizf@xxxxxxxxxxxxxx>
> ---
> kernel/cgroup.c | 6 ++++++
> kernel/sched_debug.c | 17 +++++++++++++++--
> 2 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index fe00b3b..3c54d1b 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -624,6 +624,12 @@ static void cgroup_diput(struct dentry *dentry, struct inode *inode)
> * created the cgroup */
> deactivate_super(cgrp->root->sb);
>
> + /*
> + * Some subsystems (e.g. the cpu cgroup) might still be
> + * accessing the cgroup in an rcu section.
> + */
> + synchronize_rcu();
> +
> kfree(cgrp);
> }
> iput(inode);
Is there any reason we cannot use call_rcu()?
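For reference, a rough sketch of what a call_rcu() based variant could look
like; the rcu_head member in struct cgroup and the callback name are
assumptions, not existing code:

        /* Assumes struct cgroup grows an rcu_head member (name illustrative). */
        static void free_cgroup_rcu(struct rcu_head *head)
        {
                struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);

                kfree(cgrp);
        }

        /* ... then in cgroup_diput(), instead of synchronize_rcu(); kfree(cgrp); */
        call_rcu(&cgrp->rcu_head, free_cgroup_rcu);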
> diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
> index 26ed8e3..174c072 100644
> --- a/kernel/sched_debug.c
> +++ b/kernel/sched_debug.c
> @@ -127,8 +127,14 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
> if (tg)
> cgroup = tg->css.cgroup;
>
> - if (cgroup)
> + if (cgroup) {
> + /*
> + * This task_group is dead, or we raced with cgroup creation.
> + */
> + if (cgroup_is_removed(cgroup) || !cgroup->dentry)
> + return;
> cgroup_path(cgroup, path, sizeof(path));
> + }
Perhaps wrap that check in a cgroup_*() helper? That would avoid the
duplication, be clearer, and not open-code the ->dentry assumption.
cgroup_is_active() perhaps?
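A minimal sketch of such a helper (the name and the exact check are taken from
the suggestion above; reading !->dentry as "not yet fully created" is an
assumption):

        /*
         * Hypothetical helper: true once the cgroup has been fully set up
         * and is not yet being torn down.
         */
        static inline bool cgroup_is_active(struct cgroup *cgrp)
        {
                return cgrp->dentry && !cgroup_is_removed(cgrp);
        }

The two call sites in sched_debug.c would then reduce to:

        if (cgroup) {
                if (!cgroup_is_active(cgroup))
                        return;
                cgroup_path(cgroup, path, sizeof(path));
        }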
> SEQ_printf(m, "\ncfs_rq[%d]:%s\n", cpu, path);
> #else
> @@ -181,8 +187,15 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
> if (tg)
> cgroup = tg->css.cgroup;
>
> - if (cgroup)
> + if (cgroup) {
> + /*
> + * This task_group is dead, or we raced with cgroup creation.
> + */
> + if (cgroup_is_removed(cgroup) || !cgroup->dentry)
> + return;
> +
> cgroup_path(cgroup, path, sizeof(path));
> + }
>
> SEQ_printf(m, "\nrt_rq[%d]:%s\n", cpu, path);
> #else