Re: [RFC PATCH 0/5] Introduce /proc/all/ to gather stats from all processes
From: Andrei Vagin
Date: Thu Aug 13 2020 - 04:03:35 EST
On Wed, Aug 12, 2020 at 10:47:32PM -0600, David Ahern wrote:
> On 8/12/20 1:51 AM, Andrei Vagin wrote:
> >
> > I rebased the task_diag patches on top of v5.8:
> > https://github.com/avagin/linux-task-diag/tree/v5.8-task-diag
>
> Thanks for updating the patches.
>
> >
> > /proc/pid files have three major limitations:
> > * Requires at least three syscalls per process per file
> > open(), read(), close()
> > * Variety of formats, mostly text based
> > The kernel spends time encoding binary data into a text format, and
> > then tools like top and ps spend time decoding it back into a binary
> > format.
> > * Sometimes slow due to extra attributes
> > For example, /proc/PID/smaps contains a lot of useful information
> > about memory mappings and the memory consumption of each of them. But
> > even if we don't need the memory consumption fields, the kernel still
> > spends time collecting this information.
>
> that's what I recall as well.
>
> >
> > More details and numbers are in this article:
> > https://avagin.github.io/how-fast-is-procfs
> >
> > This new interface avoids only one of these limitations, while
> > task_diag avoids all of them.
> >
> > And I compared how fast each of these interfaces is:
> >
> > The test environment:
> > CPU: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
> > RAM: 16GB
> > kernel: v5.8 with task_diag and /proc/all patches.
> > 100K processes:
> > $ ps ax | wc -l
> > 10228
>
> 100k processes but showing 10k here??
I'm sure one zero escaped from here. task_proc_all prints the number of
tasks too, and it shows 100230.
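
For reference, task_proc_all boils down to the classic per-task
open()/read()/close() pattern from the first limitation above. A
simplified sketch (not the actual source in the repository) looks
like this:

#include <ctype.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	char path[64], buf[4096];
	ssize_t n;
	int fd, tasks = 0;

	if (!proc)
		return 1;

	while ((de = readdir(proc)) != NULL) {
		if (!isdigit((unsigned char)de->d_name[0]))
			continue;		/* skip non-PID entries */

		snprintf(path, sizeof(path), "/proc/%s/status", de->d_name);
		fd = open(path, O_RDONLY);	/* syscall 1 */
		if (fd < 0)
			continue;		/* the task may have exited */
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			;			/* syscall 2 (per chunk) */
		close(fd);			/* syscall 3 */
		tasks++;
	}
	closedir(proc);
	printf("tasks: %d\n", tasks);
	return 0;
}

So with ~100K tasks we pay at least ~300K syscalls just to walk
/proc/pid/status once.
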
>
> >
> > $ time cat /proc/all/status > /dev/null
> >
> > real 0m0.577s
> > user 0m0.017s
> > sys 0m0.559s
> >
> > task_proc_all is used to read /proc/pid/status for all tasks:
> > https://github.com/avagin/linux-task-diag/blob/master/tools/testing/selftests/task_diag/task_proc_all.c
> >
> > $ time ./task_proc_all status
> > tasks: 100230
> >
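For comparison, reading /proc/all/status is a single open()/read()
loop over one file. A minimal sketch, assuming the /proc/all/status
file added by this series:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[1 << 16];
	long total = 0;
	ssize_t n;
	int fd = open("/proc/all/status", O_RDONLY);	/* one open() for all tasks */

	if (fd < 0) {
		perror("open /proc/all/status");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)	/* sequential reads */
		total += n;
	close(fd);					/* one close() */
	printf("read %ld bytes\n", total);
	return 0;
}

This is essentially what the "time cat /proc/all/status > /dev/null"
numbers above measure: one file descriptor and a stream of sequential
reads instead of three syscalls per task.
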
Thanks,
Andrei