Re: [PATCH] perf_events: improve Intel event scheduling
From: Stephane Eranian
Date: Tue Dec 29 2009 - 09:47:49 EST
Paul,
So if I understand what both of you are saying, event scheduling has to
take place in the pmu->enable() callback, which is per-event.

In the case of x86, you can choose to do best-effort scheduling, i.e.,
only assign the new event if there is a compatible free counter. That
would be incremental.
But the better solution would be to re-examine the whole situation and
potentially move existing enabled events around to free a counter if the
new event is more constrained. That would require stopping the PMU,
rewriting the config and data registers, and re-enabling the PMU. This
latter solution is the only way to avoid ordering side effects, i.e.,
the assignment of events to counters depending on the order in which
events are created (or enabled).
The register can be considered freed by pmu->disable() if scheduling takes place
in pmu->enable().
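
To make sure we are talking about the same thing, here is a rough sketch
of the best-effort variant in plain C (all names are hypothetical, this
is not the actual x86 code): take the first free counter the event's
constraint mask allows, fail if there is none, and have ->disable()
simply clear the bit again.

/*
 * Hypothetical best-effort assignment in ->enable().
 */
#define NUM_COUNTERS	4

struct sketch_event {
	unsigned long constraint;	/* bitmask of usable counters */
	int idx;			/* assigned counter, -1 if none */
};

static unsigned long used_mask;		/* counters currently in use */

static int best_effort_enable(struct sketch_event *event)
{
	int i;

	for (i = 0; i < NUM_COUNTERS; i++) {
		if (used_mask & (1UL << i))
			continue;	/* counter busy */
		if (!(event->constraint & (1UL << i)))
			continue;	/* constraint forbids this counter */
		used_mask |= 1UL << i;
		event->idx = i;
		return 0;
	}
	return -1;	/* no compatible free counter */
}

static void best_effort_disable(struct sketch_event *event)
{
	used_mask &= ~(1UL << event->idx);	/* the register is free again */
	event->idx = -1;
}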
From what Paul was saying about hw_perf_group_sched_in(), it seems this
function can be used to check whether a new group is compatible with the
existing enabled events. Compatible means that there is a possible
assignment of events to counters.
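
For the group case, that check then amounts to a feasibility test over
the full set. A greedy pass, most constrained events first, is one way
to approximate it (reusing the hypothetical types from the sketch above;
a real matching algorithm would be more thorough, this just illustrates
the "re-examine everything" idea):

static int weight(unsigned long m)
{
	int n = 0;

	for (; m; m >>= 1)
		n += m & 1;
	return n;
}

static int set_is_schedulable(struct sketch_event **events, int n_events)
{
	unsigned long used = 0;
	int w, i, j;

	/* place events in order of increasing constraint weight */
	for (w = 1; w <= NUM_COUNTERS; w++) {
		for (i = 0; i < n_events; i++) {
			if (weight(events[i]->constraint) != w)
				continue;
			for (j = 0; j < NUM_COUNTERS; j++) {
				if (!(events[i]->constraint & (1UL << j)))
					continue;
				if (used & (1UL << j))
					continue;
				used |= 1UL << j;
				break;
			}
			if (j == NUM_COUNTERS)
				return 0;	/* this event cannot be placed */
		}
	}
	return 1;	/* a full assignment exists */
}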
As for the n_added logic, it seems that perf_disable() resets n_added to
zero. n_added is incremented by pmu->enable(), i.e., adding one event,
or by hw_perf_group_sched_in(), i.e., adding a whole group. Scheduling
is based on n_events. The point of n_added is to determine whether
anything needs to be done, i.e., event scheduling, if an event or group
was added between perf_disable() and perf_enable(). In pmu->disable(),
the list of enabled events is compacted and n_events is decremented.
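
If that is right, the bookkeeping would look roughly like this (my
reading of it; the field names come from the discussion, everything else
is made up):

static struct sketch_event *event_list[NUM_COUNTERS];	/* enabled events */
static int n_events;	/* events currently on the PMU */
static int n_added;	/* events added since perf_disable() */

static void sketch_perf_disable(void)
{
	/* ... stop the PMU ... */
	n_added = 0;
}

static void sketch_pmu_enable(struct sketch_event *event)
{
	/* defer actual scheduling, just note the addition */
	event_list[n_events++] = event;
	n_added++;
}

static void sketch_pmu_disable(struct sketch_event *event)
{
	int i;

	for (i = 0; i < n_events; i++) {
		if (event_list[i] != event)
			continue;
		/* compact the list and drop the event */
		event_list[i] = event_list[--n_events];
		break;
	}
}

static void sketch_perf_enable(void)
{
	if (n_added) {
		/* something was added: rerun scheduling over n_events */
	}
	/* ... restart the PMU with the (possibly new) assignment ... */
}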
Did I get this right?
All the enable and disable callbacks can be invoked from NMI interrupt
context and must therefore be very careful with locks.
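
One way to stay NMI-safe without locks would be to keep all of this
state strictly per-CPU and only touch it with the PMU disabled
(hypothetical layout, not what the code does today):

/*
 * If each CPU only touches its own instance, there is nothing to
 * protect against other CPUs, and the NMI handler cannot deadlock
 * on a lock its own CPU already holds.
 */
struct sketch_cpu_hw_events {
	struct sketch_event *events[NUM_COUNTERS];	/* enabled events */
	unsigned long used_mask;			/* busy counters */
	int n_events;					/* total enabled */
	int n_added;					/* since perf_disable() */
};
/* in the kernel this would be one DEFINE_PER_CPU() instance per CPU */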
On Tue, Dec 22, 2009 at 2:02 AM, Paul Mackerras <paulus@xxxxxxxxx> wrote:
> On Mon, Dec 21, 2009 at 04:40:40PM +0100, Peter Zijlstra wrote:
>
>> I'm not really seeing the problem here...
>>
>>
>>  perf_disable() <-- shut down the full pmu
>>
>>  pmu->disable() <-- hey someone got removed (easy, free the reg)
>>  pmu->enable()  <-- hey someone got added (harder, check constraints)
>>
>>  hw_perf_group_sched_in() <-- hey a full group got added
>>                               (better than multiple ->enable)
>>
>>  perf_enable() <-- re-enable pmu
>>
>>
>> So ->disable() is used to track freeing, ->enable is used to add
>> individual counters, check constraints etc..
>>
>> hw_perf_group_sched_in() is used to optimize the full group enable.
>>
>> Afaict that is what power does (Paul?) and that should I think be
>> sufficient to track x86 as well.
>
> That sounds right to me.
>
>> Since sched_in() is balanced with sched_out(), the ->disable() calls
>> should provide the required information as to the occupation of the pmu.
>> I don't see the need for more hooks.
>>
>> Paul, could you comment, since you did all this for power?
>
> On powerpc we maintain a list of currently enabled events in the arch
> code. Does x86 do that as well?
>
> If you have the list (or array) of events easily accessible, it's
> relatively easy to check whether the whole set is feasible at any
> point, without worrying about which events were recently added. The
> perf_event structure has a spot where the arch code can store which
> PMU register is used for that event, so you can easily optimize the
> case where the event doesn't move.
>
> Like you, I'm not seeing where the difficulty lies. Perhaps Stephane
> could give us a detailed example if he still thinks there's a
> difficulty.
>
> Paul.
>
--
Stephane Eranian | EMEA Software Engineering
Google France | 38 avenue de l'Opéra | 75002 Paris
Tel : +33 (0) 1 42 68 53 00