On Mon, Feb 08, 2021 at 07:25:43AM -0800, kan.liang@xxxxxxxxxxxxxxx wrote:
From: Jin Yao <yao.jin@xxxxxxxxxxxxxxx>
perf-stat supports several aggregation modes, such as --per-core,
--per-socket, etc. A hybrid event, however, may only be available
on a subset of the CPUs. So for --per-core we need to filter out
the unavailable cores, for --per-socket the unavailable sockets,
and so on.

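So, if I'm reading this right, the rule is: only print an aggregation
row if at least one CPU in the hybrid evsel's own cpu map resolves to
that row's aggr id. A standalone toy sketch of that filtering, just to
check my understanding (toy structs and made-up core lists, not the
actual perf helpers):

/*
 * Toy illustration only, not perf code: filter a list of per-core
 * aggregation ids down to the ones the hybrid event actually covers.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toy_aggr_id {
	int socket, die, core;
};

/* stand-in for config->aggr_map: every core in the system */
static const struct toy_aggr_id all_cores[] = {
	{0, 0, 0}, {0, 0, 4}, {0, 0, 32}, {0, 0, 33},
};

/* stand-in for the ids that evsel__cpus(counter) resolves to */
static const struct toy_aggr_id event_cores[] = {
	{0, 0, 0}, {0, 0, 4},
};

static bool toy_id_equal(struct toy_aggr_id a, struct toy_aggr_id b)
{
	return a.socket == b.socket && a.die == b.die && a.core == b.core;
}

/* same shape as the aggr_id_hybrid_matched() helper added below */
static bool toy_id_matched(struct toy_aggr_id id)
{
	size_t i;

	for (i = 0; i < sizeof(event_cores) / sizeof(event_cores[0]); i++) {
		if (toy_id_equal(event_cores[i], id))
			return true;
	}
	return false;
}

int main(void)
{
	size_t i;

	for (i = 0; i < sizeof(all_cores) / sizeof(all_cores[0]); i++) {
		if (!toy_id_matched(all_cores[i]))
			continue;	/* skip rows the event can't count on */
		printf("S%d-D%d-C%d\n", all_cores[i].socket,
		       all_cores[i].die, all_cores[i].core);
	}
	return 0;
}

With that filter the cores covered only by the other hybrid PMU simply
don't show up, which is what the "After" output below shows.
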
Before:
root@otcpl-adl-s-2:~# ./perf stat --per-core -e cpu_core/cycles/ -a -- sleep 1
 Performance counter stats for 'system wide':

S0-D0-C0             2          311,114      cycles [cpu_core]
Why not use the PMU-style event name instead, i.e.:
S0-D0-C0 2 311,114 cpu_core/cycles/
?
S0-D0-C4             2           59,784      cycles [cpu_core]
S0-D0-C8             2          121,287      cycles [cpu_core]
S0-D0-C12            2        2,690,245      cycles [cpu_core]
S0-D0-C16            2        2,060,545      cycles [cpu_core]
S0-D0-C20            2        3,632,251      cycles [cpu_core]
S0-D0-C24            2          775,736      cycles [cpu_core]
S0-D0-C28            2          742,020      cycles [cpu_core]
S0-D0-C32            0    <not counted>      cycles [cpu_core]
S0-D0-C33            0    <not counted>      cycles [cpu_core]
S0-D0-C34            0    <not counted>      cycles [cpu_core]
S0-D0-C35            0    <not counted>      cycles [cpu_core]
S0-D0-C36            0    <not counted>      cycles [cpu_core]
S0-D0-C37            0    <not counted>      cycles [cpu_core]
S0-D0-C38            0    <not counted>      cycles [cpu_core]
S0-D0-C39            0    <not counted>      cycles [cpu_core]

       1.001779842 seconds time elapsed

After:
root@otcpl-adl-s-2:~# ./perf stat --per-core -e cpu_core/cycles/ -a -- sleep 1
 Performance counter stats for 'system wide':

S0-D0-C0             2        1,088,230      cycles [cpu_core]
S0-D0-C4             2           57,228      cycles [cpu_core]
S0-D0-C8             2           98,327      cycles [cpu_core]
S0-D0-C12            2        2,741,955      cycles [cpu_core]
S0-D0-C16            2        2,090,432      cycles [cpu_core]
S0-D0-C20            2        3,192,108      cycles [cpu_core]
S0-D0-C24            2        2,910,752      cycles [cpu_core]
S0-D0-C28            2          388,696      cycles [cpu_core]

Reviewed-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Signed-off-by: Jin Yao <yao.jin@xxxxxxxxxxxxxxx>
---
tools/perf/util/stat-display.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 21a3f80..fa11572 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -630,6 +630,20 @@ static void aggr_cb(struct perf_stat_config *config,
 	}
 }

+static bool aggr_id_hybrid_matched(struct perf_stat_config *config,
+				   struct evsel *counter, struct aggr_cpu_id id)
+{
+	struct aggr_cpu_id s;
+
+	for (int i = 0; i < evsel__nr_cpus(counter); i++) {
+		s = config->aggr_get_id(config, evsel__cpus(counter), i);
+		if (cpu_map__compare_aggr_cpu_id(s, id))
+			return true;
+	}
+
+	return false;
+}
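I'm assuming cpu_map__compare_aggr_cpu_id() is just a field-by-field
comparison of the two ids, roughly something like this (guessing at the
field names, I haven't re-checked cpumap.h):

	static bool ids_equal(struct aggr_cpu_id a, struct aggr_cpu_id b)
	{
		return a.socket == b.socket && a.die == b.die &&
		       a.core == b.core;
	}

If so, the new helper is just a linear walk over the evsel's CPUs per
aggr entry, which looks cheap enough.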
+
static void print_counter_aggrdata(struct perf_stat_config *config,
struct evsel *counter, int s,
char *prefix, bool metric_only,
@@ -643,6 +657,12 @@ static void print_counter_aggrdata(struct perf_stat_config *config,
 	double uval;

 	ad.id = id = config->aggr_map->map[s];
+
+	if (perf_pmu__hybrid_exist() &&
+	    !aggr_id_hybrid_matched(config, counter, id)) {
+		return;
+	}
+
 	ad.val = ad.ena = ad.run = 0;
 	ad.nr = 0;
 	if (!collect_data(config, counter, aggr_cb, &ad))
--
2.7.4