Re: [PATCH v2] perf cgroup: simplify arguments when tracking multiple events for a cgroup
From: Arnaldo Carvalho de Melo
Date: Thu Feb 22 2018 - 08:02:15 EST
Em Thu, Feb 22, 2018 at 06:34:08PM +0800, Weiping Zhang escreveu:
> 2018-01-31 17:22 GMT+08:00 Jiri Olsa <jolsa@xxxxxxxxxx>:
> > On Mon, Jan 29, 2018 at 11:48:09PM +0800, weiping zhang wrote:
> >> If -G is used with one cgroup and -e with multiple events, only the
> >> first event gets the correct cgroup setting; every event from the
> >> second onwards is counted system wide.
> >>
> >> If the user wants to track multiple events for a specific cgroup, the
> >> cgroup currently has to be repeated once per event:
> >> $ perf stat -e e1 -e e2 -e e3 -G test,test,test
> >> This patch simplifies that case: just name the cgroup once:
> >> $ perf stat -e e1 -e e2 -e e3 -G test
> >>
> >> $ mkdir -p /sys/fs/cgroup/perf_event/test
> >> $ perf stat -e cycles -e cache-misses -a -I 1000 -G test
> >>
> >> before:
> >>      1.001007226      <not counted>      cycles                    test
> >>      1.001007226              7,506      cache-misses
> >>
> >> after:
> >>      1.000834097      <not counted>      cycles                    test
> >>      1.000834097      <not counted>      cache-misses              test
> >>
> >> Signed-off-by: weiping zhang <zhangweiping@xxxxxxxxxxxxxxx>
> >
> > Acked-by: Jiri Olsa <jolsa@xxxxxxxxxx>
>
> Hi Arnaldo,
Ok, tested and applied. I also added an example to the man page, for
when one wants to monitor a specific cgroup and also system wide:
----
If wanting to monitor, say, 'cycles' for a cgroup and also for system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
----
This further clarifies what the man page already says about -G
affecting only the events previously defined on the command line.
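
For reference, the pairing rule the patch introduces (reuse the last -G
entry for any remaining events) can be sketched as a self-contained
program; the names below are made up, this is not the actual code in
tools/perf/util/cgroup.c:

#include <stdio.h>
#include <string.h>

/*
 * Sketch of the pairing rule: walk the event list and the
 * comma-separated -G list in step; once the -G list runs out, keep
 * reusing the last cgroup name instead of leaving the remaining
 * events system wide.
 */
static void pair_events_with_cgroups(const char * const *events,
				     int nr_events, char *glist)
{
	const char *cgrp = NULL;
	int i;

	for (i = 0; i < nr_events; i++) {
		/* take the next -G entry while one is still available */
		char *next = strtok(i ? NULL : glist, ",");

		if (next)
			cgrp = next;	/* keep it for the events that follow */

		printf("event %-12s -> %s\n", events[i],
		       cgrp ? cgrp : "<system wide>");
	}
}

int main(void)
{
	const char *events[] = { "cycles", "cache-misses", "instructions" };
	char glist[] = "test";	/* one -G entry for three -e events */

	pair_events_with_cgroups(events, 3, glist);
	return 0;
}

Running it maps all three events to 'test', matching the 'after'
output quoted above.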
Perhaps it would be interesting to automatically detect that the same
event is being read both system wide and for a specific cgroup and
then, right after the count for the cgroup, show it as a percentage of
the system wide count?
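
The arithmetic itself would be trivial; a hypothetical sketch (the
helper name and output layout are mine, not perf's actual output
code):

#include <stdio.h>

/*
 * Hypothetical helper: when the same event is counted both for a
 * cgroup and system wide, print the cgroup count as a share of the
 * system wide one.
 */
static void print_cgroup_share(const char *event,
			       unsigned long long cgrp_count,
			       unsigned long long sys_count)
{
	double pct = sys_count ? 100.0 * cgrp_count / sys_count : 0.0;

	printf("%15llu  %-14s %6.2f%% of system wide\n",
	       cgrp_count, event, pct);
}

int main(void)
{
	/* counts from the first interval of the session below */
	print_cgroup_share("cycles", 0, 73159886ULL);
	return 0;
}

For the empty cgroup in the session below this would print 0.00%, but
for a populated cgroup it would show its share at a glance.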
Thanks,
- Arnaldo
[root@jouet ~]# mkdir -p /sys/fs/cgroup/perf_event/empty_cgroup
[root@jouet ~]# perf stat -e cycles -I 1000 -G empty_cgroup -a -e cycles
#           time             counts unit events
     1.000268091      <not counted>      cycles                    empty_cgroup
     1.000268091         73,159,886      cycles
     2.000748319      <not counted>      cycles                    empty_cgroup
     2.000748319         70,189,470      cycles
     3.001196694      <not counted>      cycles                    empty_cgroup
     3.001196694         57,076,551      cycles
     4.001589957      <not counted>      cycles                    empty_cgroup
     4.001589957        102,118,895      cycles
     5.002017548      <not counted>      cycles                    empty_cgroup
     5.002017548         66,391,232      cycles
^C     5.598699824      <not counted>      cycles                    empty_cgroup
     5.598699824        136,313,588      cycles
[root@jouet ~]#