On 7/23/2018 12:21 PM, Peter Zijlstra wrote:
On Mon, Jul 23, 2018 at 04:59:44PM +0200, Peter Zijlstra wrote:
On Thu, Mar 08, 2018 at 06:15:41PM -0800, kan.liang@xxxxxxxxxxxxxxx wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index ef47a418d819..86149b87cce8 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2280,7 +2280,10 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	 * counters from the GLOBAL_STATUS mask and we always process PEBS
 	 * events via drain_pebs().
 	 */
-	status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+		status &= ~(cpuc->pebs_enabled & EXTENDED_PEBS_COUNTER_MASK);
+	else
+		status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
 	/*
 	 * PEBS overflow sets bit 62 in the global status register
Doesn't this re-introduce the problem fixed in commit fd583ad1563be,
where pebs_enabled:32-34 are PEBS Load Latency, instead of fixed
counters?
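
To make that conflict concrete (the constants below are illustrative, not
the kernel's definitions): with a load-latency enable sitting in
pebs_enabled bit 32, dropping the PEBS_COUNTER_MASK term would also clear
the fixed counter 0 overflow bit in GLOBAL_STATUS. A toy user-space check:

	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative: assume GP counter PEBS enables live in bits 0-7. */
	#define PEBS_COUNTER_MASK	((1ULL << 8) - 1)

	int main(void)
	{
		/* GLOBAL_STATUS: GP counter 0 (bit 0) and fixed counter 0 (bit 32) overflowed. */
		uint64_t status = (1ULL << 0) | (1ULL << 32);

		/* PEBS_ENABLE: PEBS on GP counter 0 (bit 0), load latency enable (bit 32). */
		uint64_t pebs_enabled = (1ULL << 0) | (1ULL << 32);

		/* Unmasked: the load-latency bit wrongly clears the fixed counter overflow. */
		uint64_t unmasked = status & ~pebs_enabled;

		/* Masked: only bits that actually name counters are removed from status. */
		uint64_t masked = status & ~(pebs_enabled & PEBS_COUNTER_MASK);

		printf("unmasked=%#llx masked=%#llx\n",
		       (unsigned long long)unmasked, (unsigned long long)masked);
		return 0;
	}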
Also, since they 'fixed' that conflict, the PEBS_ALL version could be:
	status &= ~cpuc->pebs_enabled;
Right?
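
If that reading of the extended-PEBS layout is right (pebs_enabled bits 32+
now name fixed counters, matching GLOBAL_STATUS, rather than load latency),
the mask becomes a no-op; again with illustrative constants:

	#include <assert.h>
	#include <stdint.h>

	/* Illustrative: GP enables in bits 0-7, fixed-counter enables in bits 32-35,
	 * i.e. every bit that can be set in pebs_enabled is also a counter bit in
	 * GLOBAL_STATUS. */
	#define EXTENDED_PEBS_COUNTER_MASK	(((1ULL << 8) - 1) | (0xfULL << 32))

	int main(void)
	{
		uint64_t status = (1ULL << 1) | (1ULL << 32) | (1ULL << 62);
		uint64_t pebs_enabled = (1ULL << 1) | (1ULL << 32);

		/* pebs_enabled has no bits outside the counter mask... */
		assert((pebs_enabled & ~EXTENDED_PEBS_COUNTER_MASK) == 0);

		/* ...so masking before clearing changes nothing. */
		assert((status & ~(pebs_enabled & EXTENDED_PEBS_COUNTER_MASK)) ==
		       (status & ~pebs_enabled));
		return 0;
	}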