Re: [PATCH][RFC] Adding information of counts processes acquired how many spinlocks to schedstat

From: Peter Zijlstra
Date: Fri Jul 10 2009 - 08:52:37 EST


On Fri, 2009-07-10 at 21:45 +0900, mitake@xxxxxxxxxxxxxxxxxxxxx wrote:
> From: Andi Kleen <andi@xxxxxxxxxxxxxx>
> Subject: Re: [PATCH][RFC] Adding information of counts processes acquired how many spinlocks to schedstat
> Date: Mon, 6 Jul 2009 13:54:51 +0200
>
> Thank you for your replies, Peter and Andi.
>
> > > Maybe re-use the LOCK_CONTENDED macros for this, but I'm not sure we
> > > want to go there and put code like this on the lock hot-paths for !debug
> > > kernels.
> >
> > My concern was similar.
> >
> > I suspect it would in theory be OK for the slow spinning path, but I am
> > somewhat concerned about the additional cache miss for checking
> > the global flag even in this case. This could hurt when
> > the kernel is running fully cache hot, in that the cache miss
> > might be far more expensive than a short spin.
>
> Yes, there will be overhead. This is certain.
> But there is a radical way to avoid it:
> adding a Kconfig option for measuring spinlocks, and guarding the code in spinlock.c with #ifdef.
> Then people who want to avoid this overhead can disable measurement of spinlocks completely, as in the sketch below.
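>
> A minimal sketch of the idea, based on the current _spin_lock() in
> kernel/spinlock.c (the CONFIG_SPINLOCK_STAT option and the per-task
> counter field are hypothetical):
>
>   void __lockfunc _spin_lock(spinlock_t *lock)
>   {
>           preempt_disable();
>           spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
>           LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
>   #ifdef CONFIG_SPINLOCK_STAT
>           /* hypothetical counter field added to struct task_struct;
>            * compiled out entirely when the option is disabled */
>           current->spinlock_acquired++;
>   #endif
>   }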
>
> And there's another way to avoid the overhead of measurement:
> making _spin_lock a function pointer.
> When you don't want to measure spinlocks,
> assign _spin_lock_raw(), which is equal to the current _spin_lock().
> When you want to measure spinlocks,
> assign _spin_lock_perf(), which locks and measures.
> This would eliminate the cache miss problem you mentioned.
> I think it may also be useful for avoiding the recursion problem. A rough sketch follows.
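>
> A rough sketch of that approach (every name beyond the existing lock
> primitives is hypothetical, and since the real _spin_lock is a
> function rather than a pointer, callers would need adjusting):
>
>   /* the unmeasured path, equal to the current _spin_lock() */
>   static void __lockfunc _spin_lock_raw(spinlock_t *lock)
>   {
>           preempt_disable();
>           spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
>           LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
>   }
>
>   /* the measured path: lock, then count the acquisition */
>   static void __lockfunc _spin_lock_perf(spinlock_t *lock)
>   {
>           _spin_lock_raw(lock);
>           current->spinlock_acquired++;  /* hypothetical counter */
>   }
>
>   /* every caller goes through this pointer; no flag check, and so
>    * no extra cache miss on the unmeasured path */
>   void (*_spin_lock)(spinlock_t *lock) = _spin_lock_raw;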

We already have that: it's called CONFIG_LOCKDEP && CONFIG_EVENT_TRACING
&& CONFIG_EVENT_PROFILE. With those enabled you get tracepoints on every
lock acquire and lock release, and perf can already use those as event
sources.
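
For example, on a kernel built with those options, something along these
lines should work (the exact tracepoint group name depends on kernel
version; use perf list to see what your kernel exposes):

  # perf list | grep lock_acquire
  # perf record -e lock:lock_acquire -a sleep 1
  # perf report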


