[PATCH 2/4] perf/x86: Fix period for non sampling events

From: Jiri Olsa
Date: Wed Dec 28 2016 - 08:31:20 EST

When in counting mode, we set up the counter with the
longest possible period and read its value with read().

We also still set up the PMI to be triggered when such
a counter overflows, so we can reconfigure it.

We also get a PEBS interrupt if such a counter has precise_ip
set (which makes no sense, but it's possible).

Having such a counter with:
- counting mode
- precise_ip set

I watched my server get stuck serving the PEBS interrupt
again and again, because of the following (AFAICS):

- the PEBS interrupt is triggered before the PMI
- when the PEBS handling path reconfigured the counter,
  it had a remaining value of -256
- x86_perf_event_set_period does not consider this
  an extreme value, so it is programmed back as the
  new counter value
- this makes the PEBS interrupt trigger right away
  again
- and because it's a non-sampling event, this irq storm
  is never throttled

Forcing non-sampling events to reconfigure from scratch
is probably not the best solution, but it seems to work.

Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
---
arch/x86/events/core.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f1c22584a46f..657486be9780 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1116,6 +1116,13 @@ int x86_perf_event_set_period(struct perf_event *event)
 		return 0;
 
 	/*
+	 * For non sampling event, we are not interested
+	 * in leftover, force the count from beginning.
+	 */
+	if (left && !is_sampling_event(event))
+		left = 0;
+
+	/*
 	 * If we are way outside a reasonable range then just skip forward:
 	 */
 	if (unlikely(left <= -period)) {