Re: [PATCH 1/2] perf_events: add cgroup support (v8)

From: Stephane Eranian
Date: Tue Feb 08 2011 - 17:31:13 EST


Peter,

See comments below.


On Mon, Feb 7, 2011 at 5:10 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> Compile tested only, depends on the cgroup::exit patch
>
> --- linux-2.6.orig/include/linux/perf_event.h
> +++ linux-2.6/include/linux/perf_event.h
> @@ -905,6 +929,9 @@ struct perf_cpu_context {
> 	struct list_head		rotation_list;
> 	int				jiffies_interval;
> 	struct pmu			*active_pmu;
> +#ifdef CONFIG_CGROUP_PERF
> +	struct perf_cgroup		*cgrp;
> +#endif
> };
>
I don't quite understand the motivation for adding cgrp to cpuctx.

> --- linux-2.6.orig/kernel/perf_event.c
> +++ linux-2.6/kernel/perf_event.c
> +static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
> +{
> +	struct perf_cgroup *cgrp_out = cpuctx->cgrp;
> +	if (cgrp_out)
> +		__update_cgrp_time(cgrp_out);
> +}
> +
What's the benefit of this form compared to the original from_task() version?

> +void perf_cgroup_switch(struct task_struct *task, int mode)
> +{
> +	struct perf_cpu_context *cpuctx;
> +	struct pmu *pmu;
> +	unsigned long flags;
> +
> +	/*
> +	 * disable interrupts to avoid getting nr_cgroup
> +	 * changes via __perf_event_disable(). Also
> +	 * avoids preemption.
> +	 */
> +	local_irq_save(flags);
> +
> +	/*
> +	 * we reschedule only in the presence of cgroup
> +	 * constrained events.
> +	 */
> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(pmu, &pmus, entry) {
> +
> +		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> +
> +		perf_pmu_disable(cpuctx->ctx.pmu);
> +
> +		/*
> +		 * perf_cgroup_events says at least one
> +		 * context on this CPU has cgroup events.
> +		 *
> +		 * ctx->nr_cgroups reports the number of cgroup
> +		 * events for a context.
> +		 */
> +		if (cpuctx->ctx.nr_cgroups > 0) {
> +
> +			if (mode & PERF_CGROUP_SWOUT)
> +				cpu_ctx_sched_out(cpuctx, EVENT_ALL);
> +
> +			if (mode & PERF_CGROUP_SWIN) {
> +				cpu_ctx_sched_in(cpuctx, EVENT_ALL, task, 1);
> +				cpuctx->cgrp = perf_cgroup_from_task(task);
> +			}
> +		}
I think there is a risk of cpuctx->cgrp pointing to stale cgroup information.
Shouldn't we also set cpuctx->cgrp = NULL on SWOUT?

> +static int __perf_cgroup_move(void *info)
> +{
> +	struct task_struct *task = info;
> +	perf_cgroup_switch(task, PERF_CGROUP_SWOUT | PERF_CGROUP_SWIN);
> +	return 0;
> +}
> +
> +static void perf_cgroup_move(struct task_struct *task)
> +{
> +	task_function_call(task, __perf_cgroup_move, task);
> +}
> +
> +static void perf_cgroup_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
> +		struct cgroup *old_cgrp, struct task_struct *task,
> +		bool threadgroup)
> +{
> +	perf_cgroup_move(task);
> +	if (threadgroup) {
> +		struct task_struct *c;
> +		rcu_read_lock();
> +		list_for_each_entry_rcu(c, &task->thread_group, thread_group) {
> +			perf_cgroup_move(c);
> +		}
> +		rcu_read_unlock();
> +	}
> +}
> +
I suspect my original patch did not fully handle the attach case when you
move an existing task into a cgroup which was already monitored. I think
you had to wait until a ctxsw. This callback looks like it handles that
better.

Let me make sure I understand the threadgroup iteration, though. I suspect
it handles the situation where a multi-threaded app is moved into a cgroup
while cgroup monitoring is already active. In that case, if we do not want
to wait until there is at least one ctxsw on all CPUs, we have to check
whether the other threads are already running on other CPUs. If so, we need
to do a cgroup switch on those CPUs; otherwise, we have nothing to do. Am I
getting this right?

> +static void perf_cgroup_exit(struct cgroup_subsys *ss, struct cgroup *cgrp,
> +		struct cgroup *old_cgrp, struct task_struct *task)
> +{
> +	/*
> +	 * cgroup_exit() is called in the copy_process() failure path.
> +	 * Ignore this case since the task hasn't run yet, this avoids
> +	 * trying to poke a half freed task state from generic code.
> +	 */
> +	if (!(task->flags & PF_EXITING))
> +		return;
> +
> +	perf_cgroup_move(task);
> +}
> +
These callbacks look good to me. They certainly eliminate the need for the
hack in cgroup_exit().

Thanks for fixing this.

> +struct cgroup_subsys perf_subsys = {
> +	.name = "perf_event",
> +	.subsys_id = perf_subsys_id,
> +	.create = perf_cgroup_create,
> +	.destroy = perf_cgroup_destroy,
> +	.exit = perf_cgroup_exit,
> +	.attach = perf_cgroup_attach,
> +};
> +#endif /* CONFIG_CGROUP_PERF */
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/