Re: [PATCH 1/2] perf/x86/intel: Support adaptive PEBS for fixed counters

From: Peter Zijlstra
Date: Wed Apr 10 2019 - 03:41:55 EST


On Tue, Apr 09, 2019 at 06:09:59PM -0700, kan.liang@xxxxxxxxxxxxxxx wrote:
> From: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
>
> Fixed counters can also generate adaptive PEBS records, if the
> corresponding bit in IA32_FIXED_CTR_CTRL is set.
> Otherwise, only a basic record is generated.
>
> Unconditionally set the bit when PEBS is enabled on a fixed counter,
> and let MSR_PEBS_DATA_CFG decide which PEBS record format is
> generated. Leaving the bit set is harmless.
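
FWIW, for anyone following along: each fixed counter <idx> owns a
4-bit control field in IA32_FIXED_CTR_CTRL, and the adaptive-enable
bit for that counter sits at bit 32 + 4*idx, which is why the hunk
below shifts ICL_FIXED_0_ADAPTIVE by (idx * 4). A minimal user-space
sketch (illustration only, not kernel code) that prints the bits the
patch computes:

  #include <stdio.h>
  #include <stdint.h>

  #define ICL_FIXED_0_ADAPTIVE (1ULL << 32)

  int main(void)
  {
          for (int idx = 0; idx < 4; idx++) {
                  /* 4-bit enable/PMI field for fixed counter idx */
                  uint64_t mask = 0xfULL << (idx * 4);
                  /* adaptive record enable bit for the same counter */
                  uint64_t adaptive = ICL_FIXED_0_ADAPTIVE << (idx * 4);

                  printf("fixed %d: ctrl bits 0x%016llx, adaptive bit %d\n",
                         idx, (unsigned long long)(mask | adaptive),
                         32 + idx * 4);
          }
          return 0;
  }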

I'll merge this back into:

Subject: perf/x86/intel: Support adaptive PEBS v4

such that this bug never existed, ok?

>
> Signed-off-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/events/intel/core.c      | 5 +++++
>  arch/x86/include/asm/perf_event.h | 1 +
>  2 files changed, 6 insertions(+)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 56df0f6..f34d92b 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2174,6 +2174,11 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
>  	bits <<= (idx * 4);
>  	mask = 0xfULL << (idx * 4);
>
> +	if (x86_pmu.intel_cap.pebs_baseline && event->attr.precise_ip) {
> +		bits |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
> +		mask |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
> +	}
> +
>  	rdmsrl(hwc->config_base, ctrl_val);
>  	ctrl_val &= ~mask;
>  	ctrl_val |= bits;
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index dcb8bac..ce0dc88 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -33,6 +33,7 @@
>  #define HSW_IN_TX			(1ULL << 32)
>  #define HSW_IN_TX_CHECKPOINTED		(1ULL << 33)
>  #define ICL_EVENTSEL_ADAPTIVE		(1ULL << 34)
> +#define ICL_FIXED_0_ADAPTIVE		(1ULL << 32)
>
>  #define AMD64_EVENTSEL_INT_CORE_ENABLE	(1ULL << 36)
>  #define AMD64_EVENTSEL_GUESTONLY	(1ULL << 40)
> --
> 2.7.4
>
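
For completeness, the event->attr.precise_ip check in the hunk above
corresponds to user space requesting PEBS via perf_event_open(2). A
minimal sketch (whether the event actually lands on a fixed counter
is up to the event scheduler):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  int main(void)
  {
          struct perf_event_attr attr;
          int fd;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);
          attr.type = PERF_TYPE_HARDWARE;
          /* instructions can be scheduled on fixed counter 0 */
          attr.config = PERF_COUNT_HW_INSTRUCTIONS;
          attr.sample_period = 100000;
          attr.sample_type = PERF_SAMPLE_IP;
          attr.precise_ip = 2;    /* precise_ip > 0 selects PEBS on Intel */
          attr.exclude_kernel = 1;

          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          if (fd < 0) {
                  perror("perf_event_open");
                  return 1;
          }
          /* ... mmap a ring buffer and read samples here ... */
          close(fd);
          return 0;
  }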