Re: per-cpu operation madness vs validation

From: Peter Zijlstra
Date: Wed Jul 27 2011 - 12:29:38 EST


On Wed, 2011-07-27 at 18:20 +0200, Peter Zijlstra wrote:
> > > get_cpu_var()/put_cpu_var() were supposed to provide such delineation as
> > > well, but you've been actively destroying things like that with the
> > > recent per-cpu work.
> >
> > The per cpu work is *not* focused on sections that access per cpu data, so
> > how could it destroy that? Nothing is changed there so far. The this_cpu
> > ops are introducing per cpu atomic operations that are safe and cheap
> > regardless of the execution context. The primary initial motivation was
> > incrementing per cpu counters without having to disable interrupts
> > and/or preemption and it grew from there.
>
> I think you need to look at 20b876918c065818b3574a426d418f68b4f8ad19 and
> try again. You removed get_cpu_var()/put_cpu_var() and replaced it with
> naked preempt_disable()/preempt_enable(). That's losing information
> right there.

Also things like the below hunk are just plain ugly and obfuscate the
code to save one load at best. I'm sorely tempted to revert such crap.

@@ -1468,14 +1465,12 @@ static void x86_pmu_start_txn(struct pmu *pmu)
  */
 static void x86_pmu_cancel_txn(struct pmu *pmu)
 {
-	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
-
-	cpuc->group_flag &= ~PERF_EVENT_TXN;
+	__this_cpu_and(cpu_hw_events.group_flag, ~PERF_EVENT_TXN);
 	/*
 	 * Truncate the collected events.
 	 */
-	cpuc->n_added -= cpuc->n_txn;
-	cpuc->n_events -= cpuc->n_txn;
+	__this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn));
+	__this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn));
 	perf_pmu_enable(pmu);
 }

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/