Re: [PATCH] perf_events: improve Intel event scheduling

From: Stephane Eranian
Date: Fri Dec 11 2009 - 06:59:27 EST


>
>>> @@ -1120,9 +1136,15 @@ static void amd_pmu_disable_all(void)
>>>
>>> void hw_perf_disable(void)
>>> {
>>> +	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
>>> +
>>> 	if (!x86_pmu_initialized())
>>> 		return;
>>> -	return x86_pmu.disable_all();
>>> +
>>> 	if (cpuc->enabled)
>>> 		cpuc->n_events = 0;
>>> +
>>> 	x86_pmu.disable_all();
>>> }
>>
>> Right, so I cannot directly see the above being correct. You fully erase
>> the PMU state when we disable it, but things like single counter
>> add/remove don't reprogram the full PMU afaict.
>>

>>> +int hw_perf_group_sched_in(struct perf_event *leader,
>>> +	       struct perf_cpu_context *cpuctx,
>>> +	       struct perf_event_context *ctx, int cpu)
>>> +{
>>> +	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
>>> +	int n, ret;
>>> +
>>> +	n = collect_events(cpuhw, leader);
>>> +	if (n < 0)
>>> +		return n;
>>> +
>>> +	ret = schedule_events(cpuhw, n, true);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	/* 0 means success; each event is enabled by the caller */
>>> +	return 0;
>>> +}
>>
>> This is where powerpc does n_added += n, and it delays the
>> schedule_events() bit to hw_perf_enable(), conditional on n_added > 0.
>> When you remove events it simply keeps the current layout and disables
>> just that one.
>>
There is a major difference between PPC and x86 here. PPC has a centralized
register to control start/stop. This register uses a bitmask to enable
or disable counters. Thus, in hw_perf_enable(), if n_added is 0, you just
reuse the pre-computed bitmask; otherwise, you recompute the bitmask to
include the newly added counters. The assignment of events to counters
and their validation are done in hw_group_sched_in().
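
In rough pseudo-kernel C, the PPC-style enable path amounts to this
(a simplified sketch; recompute_control_bitmask() and
write_control_register() are illustrative stand-ins for the real
powerpc MMCR handling, not actual kernel functions):

void hw_perf_enable(void)
{
	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);

	if (cpuhw->n_added) {
		/* new events since the last disable: redo the
		 * assignment and rebuild the control bitmask */
		recompute_control_bitmask(cpuhw);	/* illustrative */
		cpuhw->n_added = 0;
	}
	/* common case: reuse the pre-computed bitmask */
	write_control_register(cpuhw->control_mask);	/* illustrative */
}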

On x86, assignment and validation are likewise done in hw_group_sched_in(),
but activation is done individually for each counter. There is no
centralized control register here, and thus no bitmask to update.
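
For comparison, activating a single x86 event is one MSR write per
counter, roughly like this (a sketch; the config[] field and the
EVENTSEL_ENABLE_BIT name are illustrative):

static void x86_enable_event_sketch(struct cpu_hw_events *cpuhw, int idx)
{
	u64 val = cpuhw->config[idx];	/* event encoding for this counter */

	val |= EVENTSEL_ENABLE_BIT;	/* illustrative per-counter enable bit */
	wrmsrl(x86_pmu.eventsel + idx, val);	/* one MSR per counter */
}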

Disabling a counter does not trigger a complete reschedule of events;
that happens only when hw_group_sched_in() is called.
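
Correspondingly, disabling one event just clears that counter's enable
bit and leaves the assignment of the remaining events untouched (same
illustrative names as above):

static void x86_disable_event_sketch(struct cpu_hw_events *cpuhw, int idx)
{
	u64 val = cpuhw->config[idx];

	val &= ~EVENTSEL_ENABLE_BIT;	/* stop this counter only */
	wrmsrl(x86_pmu.eventsel + idx, val);	/* no reschedule of the others */
}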

The n_events = 0 in hw_perf_disable() is used to signal that something is
changing. Admittedly, hw_perf_disable() is not the right place for it, but
the problem is that hw_group_sched_in() needs a way to know that it is
called for a completely new series of group scheduling, so it can discard
any previous assignment. This goes back to the issue I raised in my
previous email. You could add a parameter to hw_group_sched_in() that
would indicate this is the first group; that would reset n_events to 0,
and the function would start accumulating events for the new scheduling
period.
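
A sketch of what that could look like (the extra first_group parameter
is hypothetical; collect_events() and schedule_events() are as in the
patch above):

int hw_perf_group_sched_in(struct perf_event *leader,
			   struct perf_cpu_context *cpuctx,
			   struct perf_event_context *ctx,
			   int cpu, int first_group)
{
	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
	int n;

	if (first_group)
		cpuhw->n_events = 0;	/* discard the previous assignment */

	n = collect_events(cpuhw, leader);
	if (n < 0)
		return n;

	return schedule_events(cpuhw, n, true);
}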