[PATCH] perf-stat: avoid skew when reading events

From: Mark Rutland
Date: Mon Aug 08 2016 - 07:16:49 EST


When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.

This skew can be seen when measuring a group of identical events, e.g.:

# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
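
For illustration only (this is not part of the patch), the same effect can be
reproduced with raw perf_event_open(2): open two identical events in one group
and read them one at a time while they are still counting. The
perf_event_open() wrapper and the busy loop below are local to the example;
glibc provides no wrapper for this syscall.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr = {
		.type	= PERF_TYPE_HARDWARE,
		.size	= sizeof(attr),
		.config	= PERF_COUNT_HW_CPU_CYCLES,
	};
	uint64_t a = 0, b = 0;
	volatile unsigned long i;
	int leader, member;

	/* Two identical counters in one group, counting this task. */
	leader = perf_event_open(&attr, 0, -1, -1, 0);
	member = perf_event_open(&attr, 0, -1, leader, 0);
	if (leader < 0 || member < 0)
		return 1;

	for (i = 0; i < 100000000; i++)
		; /* stand-in workload */

	/*
	 * Both counters are still running here. The cycles spent between
	 * these two read()s are seen by 'member' but not by 'leader', so
	 * the values differ even though the events are identical.
	 */
	if (read(leader, &a, sizeof(a)) != (ssize_t)sizeof(a) ||
	    read(member, &b, sizeof(b)) != (ssize_t)sizeof(b))
		return 1;

	printf("leader=%llu member=%llu\n",
	       (unsigned long long)a, (unsigned long long)b);
	return 0;
}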

To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.

Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus, to ensure counts
are reliable, we must defer closing group leaders until all counts have
been read. To do so, this patch splits the read_counters() helper into
read_counters() and close_counters(), which also aids legibility.
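
Again purely as an illustration (continuing the sketch above, with
<sys/ioctl.h> added for ioctl()), the ordering established here looks roughly
like this: quiesce the whole group via its leader, read every member, and only
then close the leader.

	/* Stop the whole group by disabling its leader. */
	ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

	read(leader, &a, sizeof(a));
	read(member, &b, sizeof(b));	/* now consistent with 'a' */

	/*
	 * Only now is it safe to close the leader: closing it earlier would
	 * split the group and let 'member', which was never individually
	 * disabled, start counting again on its own.
	 */
	close(leader);
	close(member);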

Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>
Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
---
tools/perf/builtin-stat.c | 39 ++++++++++++++++++++++++++++++++-------
1 file changed, 32 insertions(+), 7 deletions(-)

Note: there is also a kernel-side issue that adds skew to group counts; it is
addressed by [1]. In the absence of [1], this patch minimises skew, but does
not remove it entirely.

Mark.

[1] http://lkml.kernel.org/r/1469553141-28314-1-git-send-email-mark.rutland@xxxxxxx

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 0c16d20..e8464b3 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -331,7 +331,7 @@ static int read_counter(struct perf_evsel *counter)
 	return 0;
 }
 
-static void read_counters(bool close_counters)
+static void read_counters(void)
 {
 	struct perf_evsel *counter;
 
@@ -341,11 +341,16 @@ static void read_counters(bool close_counters)

 		if (perf_stat_process_counter(&stat_config, counter))
 			pr_warning("failed to process counter %s\n", counter->name);
+	}
+}
 
-		if (close_counters) {
-			perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter),
-					     thread_map__nr(evsel_list->threads));
-		}
+static void close_counters(void)
+{
+	struct perf_evsel *counter;
+
+	evlist__for_each(evsel_list, counter) {
+		perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter),
+				     thread_map__nr(evsel_list->threads));
 	}
 }

@@ -353,7 +358,7 @@ static void process_interval(void)
 {
 	struct timespec ts, rs;
 
-	read_counters(false);
+	read_counters();
 
 	clock_gettime(CLOCK_MONOTONIC, &ts);
 	diff_timespec(&rs, &ts, &ref_time);
@@ -380,6 +385,17 @@ static void enable_counters(void)
 	perf_evlist__enable(evsel_list);
 }
 
+static void disable_counters(void)
+{
+	/*
+	 * If we don't have a tracee (attaching to a task or cpu), counters
+	 * may still be running. To get accurate group ratios, we must stop
+	 * groups from counting before reading their constituent counters.
+	 */
+	if (!target__none(&target))
+		perf_evlist__disable(evsel_list);
+}
+
 static volatile int workload_exec_errno;
 
 /*
@@ -657,11 +673,20 @@ try_again:
 		}
 	}
 
+	disable_counters();
+
 	t1 = rdclock();
 
 	update_stats(&walltime_nsecs_stats, t1 - t0);
 
-	read_counters(true);
+	/*
+	 * Closing a group leader splits the group, and as we only disable
+	 * group leaders, this results in the remaining events becoming
+	 * enabled. To avoid arbitrary skew, we must read all counters
+	 * before closing any group leaders.
+	 */
+	read_counters();
+	close_counters();
 
 	return WEXITSTATUS(status);
 }
--
1.9.1