RE: perf: fuzzer triggered warning in intel_pmu_drain_pebs_nhm()

From: Liang, Kan
Date: Fri Jul 03 2015 - 16:08:52 EST



>
> I've not yet tried to reproduce, but the below could explain things.
>
> On disabling an event we first clear our cpuc->pebs_enabled bits, only to
> then check them to see if there are any set, and if so, drain the buffer.
>
> If we just cleared the last bit, we'll fail to drain the buffer.
>
> If we then program another event on that counter and another PEBS event,
> we can hit the above WARN with the 'stale' entries left over from the
> previous event.
>
> ---
> arch/x86/kernel/cpu/perf_event_intel_ds.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
> index 71fc40238843..041a30ba5654 100644
> --- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
> @@ -548,7 +548,7 @@ int intel_pmu_drain_bts_buffer(void)
>
> static inline void intel_pmu_drain_pebs_buffer(void)
> {
> - struct pt_regs regs;
> + struct pt_regs regs; /* SAMPLE_REGS_INTR must not be set for FREERUNNING */
>
> x86_pmu.drain_pebs(&regs);
> }
> @@ -755,13 +755,6 @@ void intel_pmu_pebs_disable(struct perf_event *event)
> struct hw_perf_event *hwc = &event->hw;
> struct debug_store *ds = cpuc->ds;
>
> - cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
> -
> - if (event->hw.flags & PERF_X86_EVENT_PEBS_LDLAT)
> - cpuc->pebs_enabled &= ~(1ULL << (hwc->idx + 32));
> - else if (event->hw.flags & PERF_X86_EVENT_PEBS_ST)
> - cpuc->pebs_enabled &= ~(1ULL << 63);
> -
> if (ds->pebs_interrupt_threshold >
> ds->pebs_buffer_base + x86_pmu.pebs_record_size) {
> intel_pmu_drain_pebs_buffer();
> @@ -769,6 +762,13 @@ void intel_pmu_pebs_disable(struct perf_event *event)
> perf_sched_cb_dec(event->ctx->pmu);
> }
>
> + cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
> +
> + if (event->hw.flags & PERF_X86_EVENT_PEBS_LDLAT)
> + cpuc->pebs_enabled &= ~(1ULL << (hwc->idx + 32));
> + else if (event->hw.flags & PERF_X86_EVENT_PEBS_ST)
> + cpuc->pebs_enabled &= ~(1ULL << 63);
> +
> if (cpuc->enabled)
> wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>
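
To make the ordering hazard concrete, here is a minimal userspace sketch
(illustrative names only, not the actual perf_event_intel_ds.c code; the
real drain condition also involves the large-PEBS threshold):

/*
 * Clearing the enable bit before the "any PEBS bits set?" check means the
 * drain is skipped when the last PEBS event is disabled, leaving stale
 * records for whatever gets programmed on the counter next.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t pebs_enabled;	/* stand-in for cpuc->pebs_enabled */
static int drains;		/* counts "drain" calls */

static void drain_pebs_buffer(void)
{
	drains++;
}

/* Buggy order: clear the bit first, then test whether any bit is set. */
static void pebs_disable_buggy(int idx)
{
	pebs_enabled &= ~(1ULL << idx);
	if (pebs_enabled)
		drain_pebs_buffer();	/* never reached for the last event */
}

/* Fixed order: drain while the bit is still set, then clear it. */
static void pebs_disable_fixed(int idx)
{
	if (pebs_enabled)
		drain_pebs_buffer();
	pebs_enabled &= ~(1ULL << idx);
}

int main(void)
{
	pebs_enabled = 1ULL << 0;	/* one PEBS event on counter 0 */
	pebs_disable_buggy(0);
	printf("buggy order: drains=%d (stale records left)\n", drains);

	drains = 0;
	pebs_enabled = 1ULL << 0;
	pebs_disable_fixed(0);
	printf("fixed order: drains=%d\n", drains);
	return 0;
}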

When we clear the last bit, we not only have to drain the buffer but also to
call perf_sched_cb_dec() on event->ctx->pmu, which drops the scheduler
callback used to flush the PEBS buffer during context switches. So the bits
in cpuc->pebs_enabled need to be cleared before we touch event->ctx->pmu,
as below (a simplified sketch of the resulting ordering follows the patch).


diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 71fc402..76285c1 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -754,6 +754,11 @@ void intel_pmu_pebs_disable(struct perf_event *event)
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
struct debug_store *ds = cpuc->ds;
+ bool large_pebs = ds->pebs_interrupt_threshold >
+ ds->pebs_buffer_base + x86_pmu.pebs_record_size;
+
+ if (large_pebs)
+ intel_pmu_drain_pebs_buffer();

cpuc->pebs_enabled &= ~(1ULL << hwc->idx);

@@ -762,12 +767,8 @@ void intel_pmu_pebs_disable(struct perf_event *event)
else if (event->hw.flags & PERF_X86_EVENT_PEBS_ST)
cpuc->pebs_enabled &= ~(1ULL << 63);

- if (ds->pebs_interrupt_threshold >
- ds->pebs_buffer_base + x86_pmu.pebs_record_size) {
- intel_pmu_drain_pebs_buffer();
- if (!pebs_is_enabled(cpuc))
- perf_sched_cb_dec(event->ctx->pmu);
- }
+ if (large_pebs && !pebs_is_enabled(cpuc))
+ perf_sched_cb_dec(event->ctx->pmu);

if (cpuc->enabled)
wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
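
For completeness, a simplified userspace sketch of the ordering the patch
above ends up with (again illustrative stand-ins, not the kernel code):
drain while the large-PEBS setup is still in place, clear this event's
enable bits, and only then drop the context-switch callback if no PEBS
event remains enabled.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static uint64_t pebs_enabled;		/* stand-in for cpuc->pebs_enabled */
static bool sched_cb_registered = true;	/* stand-in for the perf_sched_cb_* state */

static bool pebs_is_enabled(void)
{
	return pebs_enabled != 0;
}

static void pebs_disable(int idx, bool large_pebs)
{
	if (large_pebs)
		printf("drain PEBS buffer before touching counter %d\n", idx);

	pebs_enabled &= ~(1ULL << idx);

	/* Checked only after the bit is cleared, so the last disable wins. */
	if (large_pebs && !pebs_is_enabled())
		sched_cb_registered = false;
}

int main(void)
{
	pebs_enabled = (1ULL << 0) | (1ULL << 1);	/* two PEBS events */

	pebs_disable(0, true);
	printf("after first disable:  cb registered = %d\n", sched_cb_registered);

	pebs_disable(1, true);
	printf("after second disable: cb registered = %d\n", sched_cb_registered);
	return 0;
}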



Thanks,
Kan