Re: [Patch v8 06/12] perf/x86/intel/ds: Factor out PEBS record processing code to functions
From: Peter Zijlstra
Date: Wed Oct 22 2025 - 07:49:27 EST
On Wed, Oct 15, 2025 at 02:44:16PM +0800, Dapeng Mi wrote:
> Aside from some PEBS record layout differences, arch-PEBS can share most
> of the PEBS record processing code with adaptive PEBS. Thus, factor out
> this common processing code into independent inline functions so that it
> can be reused by the subsequent arch-PEBS handler.
>
> Suggested-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
> Signed-off-by: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>
> ---
> arch/x86/events/intel/ds.c | 101 ++++++++++++++++++++++++-------------
> 1 file changed, 66 insertions(+), 35 deletions(-)
>
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index a80881a20321..41acbf0a11c9 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -2629,6 +2629,64 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
> }
> }
>
> +static inline void __intel_pmu_handle_pebs_record(struct pt_regs *iregs,
> + struct pt_regs *regs,
> + struct perf_sample_data *data,
> + void *at, u64 pebs_status,
> + struct perf_event *events[],
> + short *counts, void **last,
> + setup_fn setup_sample)
> +{
> + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> + struct perf_event *event;
> + int bit;
> +
> + for_each_set_bit(bit, (unsigned long *)&pebs_status, X86_PMC_IDX_MAX) {
> + event = cpuc->events[bit];
> +
> + if (WARN_ON_ONCE(!event) ||
> + WARN_ON_ONCE(!event->attr.precise_ip))
> + continue;
> +
> + if (counts[bit]++)
> + __intel_pmu_pebs_event(event, iregs, regs, data,
> + last[bit], setup_sample);
> +
> + last[bit] = at;
> + /*
> + * perf_event_overflow(), called by __intel_pmu_pebs_last_event()
> + * below, may trigger interrupt throttling and clear all event
> + * pointers of the group in cpuc->events[] to NULL. So snapshot the
> + * event pointer before it can be cleared. This avoids a possible
> + * NULL event pointer access and PEBS record loss.
> + */
> + if (counts[bit] && !events[bit])
> + events[bit] = cpuc->events[bit];
> + }
> +}
> +
> +static inline void
> +__intel_pmu_handle_last_pebs_record(struct pt_regs *iregs, struct pt_regs *regs,
> + struct perf_sample_data *data, u64 mask,
> + struct perf_event *events[],
> + short *counts, void **last,
> + setup_fn setup_sample)
> +{
> + struct perf_event *event;
> + int bit;
> +
> + for_each_set_bit(bit, (unsigned long *)&mask, X86_PMC_IDX_MAX) {
> + if (!counts[bit])
> + continue;
> +
> + event = events[bit];
> +
> + __intel_pmu_pebs_last_event(event, iregs, regs, data, last[bit],
> + counts[bit], setup_sample);
> + }
> +}
These need to be __always_inline, like the other functions that take a
setup_fn. Otherwise the compiler might decide not to inline them, and then
it can't constant-propagate setup_sample into the function body and we end
up with indirect calls.
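
Something like this (a sketch only, not build-tested; just the two
declarations from the patch with the attribute changed, the bodies stay
as-is):

static __always_inline void
__intel_pmu_handle_pebs_record(struct pt_regs *iregs, struct pt_regs *regs,
			       struct perf_sample_data *data,
			       void *at, u64 pebs_status,
			       struct perf_event *events[],
			       short *counts, void **last,
			       setup_fn setup_sample)
{
	/* loop over the pebs_status bits, unchanged from the patch above */
}

static __always_inline void
__intel_pmu_handle_last_pebs_record(struct pt_regs *iregs, struct pt_regs *regs,
				    struct perf_sample_data *data, u64 mask,
				    struct perf_event *events[],
				    short *counts, void **last,
				    setup_fn setup_sample)
{
	/* flush the last record per counter, unchanged from the patch above */
}

With setup_sample a compile-time constant at each call site, forced inlining
lets the compiler turn the setup_fn calls into direct calls instead of
indirect ones.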