Re: [Patch v6 01/22] perf/x86/intel: Restrict PEBS_ENABLE writes to PEBS-capable counters

From: Mi, Dapeng

Date: Wed Feb 11 2026 - 00:51:30 EST



On 2/10/2026 11:36 PM, Peter Zijlstra wrote:
> On Mon, Feb 09, 2026 at 03:20:26PM +0800, Dapeng Mi wrote:
>> Before the introduction of extended PEBS, PEBS supported only
>> general-purpose (GP) counters. In a virtual machine (VM) environment,
>> the PEBS_BASELINE bit in PERF_CAPABILITIES may not be set, but the PEBS
>> format could be indicated as 4 or higher. In such cases, PEBS events
>> might be scheduled to fixed counters, and writing the corresponding bits
>> into the PEBS_ENABLE MSR could cause a #GP fault.
>>
>> To prevent writing unsupported bits into the PEBS_ENABLE MSR, ensure
>> cpuc->pebs_enabled aligns with x86_pmu.pebs_capable and restrict the
>> writes to only PEBS-capable counter bits.
> This seems very wrong. Should we not avoid getting those bits set in the
> first place?

Hmm, yes. I originally thought it was fine to block access to these invalid
bits when writing the PEBS_ENABLE MSR, but I agree it should be blocked as
early as possible.

Currently the intel_pebs_constraints() helper doesn't check if the returned
matched PEBS constraint contains the fixed counter indexes when extended
PEBS is not supported.

We may need the change below (build-tested only, not yet run).

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 94ada08360f1..bc36808bdb7b 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1557,6 +1557,14 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
        if (pebs_constraints) {
                for_each_event_constraint(c, pebs_constraints) {
                        if (constraint_match(c, event->hw.config)) {
+                               /*
+                                * If fixed counters are suggested in the constraints,
+                                * but extended PEBS is not supported, emptyconstraint
+                                * should be returned.
+                                */
+                               if ((c->idxmsk64 & ~PEBS_COUNTER_MASK) &&
+                                   !(x86_pmu.flags & PMU_FL_PEBS_ALL))
+                                       break;
                                event->hw.flags |= c->flags;
                                return c;
                        }

Thanks.


>
> That is; the fact that we set those cpuc->pebs_enabled bits indicates
> that we 'successfully' scheduled PEBS counters. And then we silently
> disable PEBS when programming the hardware.
>
> Or am I reading this wrong?
>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>
>> ---
>>
>> V6: new patch.
>>
>> arch/x86/events/intel/core.c | 6 ++++--
>> arch/x86/events/intel/ds.c | 11 +++++++----
>> 2 files changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index f3ae1f8ee3cd..546ebc7e1624 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3554,8 +3554,10 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>>  		 * cpuc->enabled has been forced to 0 in PMI.
>>  		 * Update the MSR if pebs_enabled is changed.
>>  		 */
>> -		if (pebs_enabled != cpuc->pebs_enabled)
>> -			wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> +		if (pebs_enabled != cpuc->pebs_enabled) {
>> +			wrmsrq(MSR_IA32_PEBS_ENABLE,
>> +			       cpuc->pebs_enabled & x86_pmu.pebs_capable);
>> +		}
>>
>>  	/*
>>  	 * Above PEBS handler (PEBS counters snapshotting) has updated fixed
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index 5027afc97b65..57805c6ba0c3 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -1963,6 +1963,7 @@ void intel_pmu_pebs_disable(struct perf_event *event)
>>  {
>>  	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>>  	struct hw_perf_event *hwc = &event->hw;
>> +	u64 pebs_enabled;
>>
>>  	__intel_pmu_pebs_disable(event);
>>
>> @@ -1974,16 +1975,18 @@ void intel_pmu_pebs_disable(struct perf_event *event)
>>
>>  	intel_pmu_pebs_via_pt_disable(event);
>>
>> -	if (cpuc->enabled)
>> -		wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> +	pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;
>> +	if (pebs_enabled)
>> +		wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
>>  }
>>
>>  void intel_pmu_pebs_enable_all(void)
>>  {
>>  	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> +	u64 pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;
>>
>> -	if (cpuc->pebs_enabled)
>> -		wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> +	if (pebs_enabled)
>> +		wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
>>  }
>>
>>  void intel_pmu_pebs_disable_all(void)
>> --
>> 2.34.1
>>