Re: [PATCH] perf_events: improve x86 event scheduling (v5)
From: Stephane Eranian
Date: Thu Jan 21 2010 - 05:39:10 EST
On Thu, Jan 21, 2010 at 11:28 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Thu, 2010-01-21 at 11:21 +0100, Stephane Eranian wrote:
>> Are you suggesting a speculative approach where you first simply try to
>> accumulate and then schedule, and if that fails, restart the whole
>> loop, this time adding and scheduling each event individually?
>> For groups, you'd have to fail the group if one of its events fails.
> No, I'm only talking about groups. The complaint from frederic was that
> current hw_perf_group_sched_in() implementations have to basically
> replicate all of the group_sched_in() and event_sched_in() stuff, which
> seems wasteful.
I agree about the replication problem. But it comes from the fact that
when you call hw_perf_group_sched_in() and it succeeds, you want to
execute only part of what group_sched_in() normally does: mark the
events as active and update their timing, but skip the event_sched_in()
work, including enable(). Actual activation is deferred until
perf_enable(), which avoids having some events actually measuring while
you are still scheduling others.
> So I was thinking of an alternative interface that would give the same
> end result but not as much code replication.
> I'm now leaning towards adding a parameter to ->enable() to postpone
> schedulability and add a hw_perf_validate() like call.
> With that I'm also looking at what would be the sanest way to multiplex
> all the current weak hw_perf* functions in the light of multiple pmu
Stephane Eranian | EMEA Software Engineering
Google France | 38 avenue de l'Opéra | 75002 Paris
Tel : +33 (0) 1 42 68 53 00