Re: [PATCH v7 8/8] perf tool: add cgroup identifier entry in perf report

From: Jiri Olsa
Date: Wed Feb 22 2017 - 11:48:34 EST


On Tue, Feb 21, 2017 at 07:33:13PM +0530, Hari Bathini wrote:
> This patch introduces a cgroup identifier entry field in perf report to
> identify or distinguish data of different cgroups. It uses the device
> number and inode number of the cgroup namespace, included in perf data
> with the new PERF_RECORD_NAMESPACES event, as the cgroup identifier.
> Assuming that each container is created with its own cgroup namespace,
> this allows assessment/analysis of multiple containers at once.
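
Just to spell out where those identifiers come from (might help readers of
the changelog): the dev/inode pair is simply what stat() reports for
/proc/<pid>/ns/cgroup. A minimal standalone sketch, with the output format
chosen to match the report below:

  #include <stdio.h>
  #include <sys/stat.h>

  /*
   * Print the cgroup namespace identifier of the calling process in the
   * same dev/inode form the new sort key uses.  Purely illustrative.
   */
  int main(void)
  {
          struct stat st;

          /* every namespace a task belongs to is exposed under /proc/<pid>/ns/ */
          if (stat("/proc/self/ns/cgroup", &st)) {
                  perror("stat");
                  return 1;
          }

          /* st_dev/st_ino of that nsfs file uniquely identify the namespace */
          printf("%lu/0x%lx\n", (unsigned long)st.st_dev,
                 (unsigned long)st.st_ino);
          return 0;
  }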
>
> Shown below is the output of perf report, sorted by cgroup id, from a
> system that was running three containers at the time of perf record.
> It clearly shows one container's considerably higher use of kernel
> memory compared with the others:
>
>
> $ perf report -s cgroup_id,sample --stdio
> #
> # Total Lost Samples: 0
> #
> # Samples: 16K of event 'kmem:kmalloc'
> # Event count (approx.): 16043
> #
> # Overhead  cgroup id (dev/inode)       Samples
> # ........  .....................  ............
> #
>     96.33%  3/0xf00000d0                  15454
>      3.02%  3/0xeffffffb                    485
>      0.31%  3/0xf00000ce                     49
>      0.29%  3/0xf00000cf                     47
>      0.05%  0/0x0                             8
>
> While this is a start, there is further scope for improving this. For
> example, instead of the cgroup namespace's device and inode numbers,
> the dev and inode numbers of some or all namespaces could be used to
> distinguish which processes are running in a given container context.
> Also, scripts that map device and inode info to containers sound
> plausible for better tracing of containers.
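
fwiw, such a mapping is easy to prototype straight from /proc even before
any tooling exists; a rough standalone sketch (not part of this patch) that
just dumps each task's cgroup namespace id next to its pid:

  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/stat.h>

  /* walk /proc and print "<cgroup ns id>  pid" for every task found */
  int main(void)
  {
          DIR *proc = opendir("/proc");
          struct dirent *de;
          struct stat st;
          char path[64];

          if (!proc) {
                  perror("opendir");
                  return 1;
          }

          while ((de = readdir(proc)) != NULL) {
                  /* consider only the all-digit (pid) directories */
                  if (strspn(de->d_name, "0123456789") != strlen(de->d_name))
                          continue;

                  snprintf(path, sizeof(path), "/proc/%s/ns/cgroup", de->d_name);
                  if (stat(path, &st))    /* task exited or is inaccessible */
                          continue;

                  printf("%lu/0x%lx  %s\n", (unsigned long)st.st_dev,
                         (unsigned long)st.st_ino, de->d_name);
          }

          closedir(proc);
          return 0;
  }

piping that through sort | uniq -c already gives a per-namespace task count
to correlate with the report above.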
>
> Signed-off-by: Hari Bathini <hbathini@xxxxxxxxxxxxxxxxxx>
> ---
> tools/perf/util/hist.c | 7 +++++++
> tools/perf/util/hist.h | 1 +
> tools/perf/util/sort.c | 41 +++++++++++++++++++++++++++++++++++++++++
> tools/perf/util/sort.h | 7 +++++++
> 4 files changed, 56 insertions(+)

missing documentation update for the new sorting field...
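
i.e. something along these lines in tools/perf/Documentation/perf-report.txt,
exact wording up to you:

  - cgroup_id: ID derived from cgroup namespace device and inode numbers.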

other than that the rest looks ok to me, for the patchset:

Acked-by: Jiri Olsa <jolsa@xxxxxxxxxx>

thanks,
jirka