Re: [PATCH v9] perf stat: Fix wrong skipping for per-die aggregation

From: Arnaldo Carvalho de Melo
Date: Wed Mar 03 2021 - 13:25:10 EST


On Sun, Jan 31, 2021 at 11:27:08PM +0100, Jiri Olsa wrote:
> On Thu, Jan 28, 2021 at 09:34:17AM +0800, Jin Yao wrote:
> > The uncore becomes die-scoped on Xeon Cascade Lake-AP, and perf already
> > supports --per-die aggregation.
> >
> > One issue is found in check_per_pkg() for uncore events running on an
> > AP system. On Cascade Lake-AP, we have:
> >
> > S0-D0
> > S0-D1
> > S1-D0
> > S1-D1
> >
> > But in check_per_pkg(), S0-D1 and S1-D1 are skipped because the mask
> > bits for S0 and S1 have already been set for S0-D0 and S1-D0; the
> > die_id is not checked at all. So the counts for S0-D1 and S1-D1 are
> > set to zero, which is not correct.
> >
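For illustration only (a standalone sketch, not the actual tools/perf code;
the names below are made up for this example): a skip decision keyed by the
socket alone keeps only the first CPU seen per socket, so the reading from
the other die on that socket is thrown away.

/* Standalone sketch of the buggy, socket-only skip decision. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SOCKETS 8

static bool socket_seen[MAX_SOCKETS];

/* Return true if this CPU's uncore count should be zeroed. */
static bool check_per_pkg_socket_only(int socket, int die)
{
	(void)die;			/* die_id is ignored -- this is the bug */

	if (socket_seen[socket])
		return true;		/* S0-D1 and S1-D1 end up here */

	socket_seen[socket] = true;
	return false;
}

int main(void)
{
	/* One representative CPU per (socket, die), Cascade Lake-AP style. */
	const int topo[4][2] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
	int i;

	for (i = 0; i < 4; i++)
		printf("S%d-D%d: %s\n", topo[i][0], topo[i][1],
		       check_per_pkg_socket_only(topo[i][0], topo[i][1]) ?
		       "skipped (reported as 0)" : "counted");
	return 0;
}
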
> > root@lkp-csl-2ap4 ~# ./perf stat -a -I 1000 -e llc_misses.mem_read --per-die -- sleep 5
> > 1.001460963 S0-D0 1 1317376 Bytes llc_misses.mem_read
> > 1.001460963 S0-D1 1 998016 Bytes llc_misses.mem_read
> > 1.001460963 S1-D0 1 970496 Bytes llc_misses.mem_read
> > 1.001460963 S1-D1 1 1291264 Bytes llc_misses.mem_read
> > 2.003488021 S0-D0 1 1082048 Bytes llc_misses.mem_read
> > 2.003488021 S0-D1 1 1919040 Bytes llc_misses.mem_read
> > 2.003488021 S1-D0 1 890752 Bytes llc_misses.mem_read
> > 2.003488021 S1-D1 1 2380800 Bytes llc_misses.mem_read
> > 3.005613270 S0-D0 1 1126080 Bytes llc_misses.mem_read
> > 3.005613270 S0-D1 1 2898176 Bytes llc_misses.mem_read
> > 3.005613270 S1-D0 1 870912 Bytes llc_misses.mem_read
> > 3.005613270 S1-D1 1 3388608 Bytes llc_misses.mem_read
> > 4.007627598 S0-D0 1 1124608 Bytes llc_misses.mem_read
> > 4.007627598 S0-D1 1 3884416 Bytes llc_misses.mem_read
> > 4.007627598 S1-D0 1 921088 Bytes llc_misses.mem_read
> > 4.007627598 S1-D1 1 4451840 Bytes llc_misses.mem_read
> > 5.001479927 S0-D0 1 963328 Bytes llc_misses.mem_read
> > 5.001479927 S0-D1 1 4831936 Bytes llc_misses.mem_read
> > 5.001479927 S1-D0 1 895104 Bytes llc_misses.mem_read
> > 5.001479927 S1-D1 1 5496640 Bytes llc_misses.mem_read
> >
> > From the above output, we can see that S0-D1 and S1-D1 don't report
> > per-interval values; their counts just keep accumulating. That's because
> > check_per_pkg() wrongly decides to use zero counts for S0-D1 and S1-D1.
> >
> > So in check_per_pkg(), we should use hashmap(socket, die) to decide
> > whether the CPU's counts need to be skipped. Considering only the
> > socket is not enough.
> >
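Again as a standalone sketch rather than the actual perf code (the patch
keys a hashmap on the (socket, die) pair; a plain 2-D table stands in for
it here, and the function name is made up), the fixed decision only skips
duplicates within the same socket and die.

/* Standalone sketch of the fixed, (socket, die)-keyed skip decision. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SOCKETS 8
#define MAX_DIES    8

/* Stands in for the hashmap(socket, die) that the patch introduces. */
static bool pkg_die_seen[MAX_SOCKETS][MAX_DIES];

/* Skip only when a CPU of the same socket *and* die was already counted. */
static bool check_per_pkg_socket_die(int socket, int die)
{
	if (pkg_die_seen[socket][die])
		return true;

	pkg_die_seen[socket][die] = true;
	return false;		/* first CPU of S<socket>-D<die>: keep the count */
}

int main(void)
{
	const int topo[4][2] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
	int i;

	/*
	 * All four (socket, die) pairs are counted.  A system without
	 * multiple dies always passes die == 0, so this degenerates to
	 * the old per-socket behaviour.
	 */
	for (i = 0; i < 4; i++)
		printf("S%d-D%d: %s\n", topo[i][0], topo[i][1],
		       check_per_pkg_socket_die(topo[i][0], topo[i][1]) ?
		       "skipped" : "counted");
	return 0;
}
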
> > Now with this patch,
> >
> > root@lkp-csl-2ap4 ~# ./perf stat -a -I 1000 -e llc_misses.mem_read --per-die -- sleep 5
> > 1.001586691 S0-D0 1 1229440 Bytes llc_misses.mem_read
> > 1.001586691 S0-D1 1 976832 Bytes llc_misses.mem_read
> > 1.001586691 S1-D0 1 938304 Bytes llc_misses.mem_read
> > 1.001586691 S1-D1 1 1227328 Bytes llc_misses.mem_read
> > 2.003776312 S0-D0 1 1586752 Bytes llc_misses.mem_read
> > 2.003776312 S0-D1 1 875392 Bytes llc_misses.mem_read
> > 2.003776312 S1-D0 1 855616 Bytes llc_misses.mem_read
> > 2.003776312 S1-D1 1 949376 Bytes llc_misses.mem_read
> > 3.006512788 S0-D0 1 1338880 Bytes llc_misses.mem_read
> > 3.006512788 S0-D1 1 920064 Bytes llc_misses.mem_read
> > 3.006512788 S1-D0 1 877184 Bytes llc_misses.mem_read
> > 3.006512788 S1-D1 1 1020736 Bytes llc_misses.mem_read
> > 4.008895291 S0-D0 1 926592 Bytes llc_misses.mem_read
> > 4.008895291 S0-D1 1 906368 Bytes llc_misses.mem_read
> > 4.008895291 S1-D0 1 892224 Bytes llc_misses.mem_read
> > 4.008895291 S1-D1 1 987712 Bytes llc_misses.mem_read
> > 5.001590993 S0-D0 1 962624 Bytes llc_misses.mem_read
> > 5.001590993 S0-D1 1 912512 Bytes llc_misses.mem_read
> > 5.001590993 S1-D0 1 891200 Bytes llc_misses.mem_read
> > 5.001590993 S1-D1 1 978432 Bytes llc_misses.mem_read
> >
> > On a system without multiple dies, die_id is 0, so the key is effectively
> > hashmap(socket, 0) and the original behavior is unchanged.
> >
> > Reported-by: Huang Ying <ying.huang@xxxxxxxxx>
> > Signed-off-by: Jin Yao <yao.jin@xxxxxxxxxxxxxxx>
> > ---
> > v9:
> > Rename zero_per_pkg to evsel__zero_per_pkg and move it to evsel.c, so
> > that evsel__zero_per_pkg can be called from different code paths.
> >
> > Call evsel__zero_per_pkg in evsel__exit().
>
> Acked-by: Jiri Olsa <jolsa@xxxxxxxxxx>

Thanks, applied.

- Arnaldo