Re: [RFC PATCH] perf: Store relevant events in a hlist
From: Peter Zijlstra
Date: Wed Mar 10 2010 - 15:47:10 EST
On Wed, 2010-03-10 at 21:33 +0100, Frederic Weisbecker wrote:
> On Wed, Mar 10, 2010 at 08:34:52PM +0100, Peter Zijlstra wrote:
> > I'm not quite sure why you need the node thing, you already have a
> > hash-bucket to iterate, simply stick all events into the one bucket and
> > walk through it with a filter and process all events that match.
>
>
> This extra level of indirection was one of my biggest hesitations.
> In case of a hash collision, I just wanted to ensure we keep
> amortized O(n) behaviour in any case, at the cost of this level of
> indirection. Plus it removed the config:id check on every event,
> as the check is made only once.
>
> That said, I guess we can indeed remove that and have the events
> directly in the hash bucket. Assuming we manage collisions well,
> it should be fine.
Right, let's start simple and go from there.
> > As to all those for_each_online_cpu() thingies, it might make sense to
> > also have a global hash-table for events active on all cpus,... hmm was
> > that the reason for the node thing, one event cannot be in multiple
> > buckets?
>
>
> There are several reasons I've made it per cpu.
> If we had a global hash table for wide events, we would get
> cacheline bouncing each time an event is disabled/enabled (which
> happens quite often, since wide events are per task, and even worse
> when the initial task has numerous threads that each duplicate the
> event). Also, since wide events are per task, an event is only ever
> active on one cpu at a time, so it would be wasteful to check it on
> the other cpus.
Thing is, most events generated by perf are per cpu, even the per-task
ones; and if they are machine wide, the hash-table bounces aren't the
biggest problem.
But yeah..