Re: [PATCH] proc/stat: Separate out individual irq counts into /proc/stat_irqs

From: Alexey Dobriyan
Date: Thu Apr 19 2018 - 16:02:29 EST


On Thu, Apr 19, 2018 at 12:43:19PM -0700, Andrew Morton wrote:
> On Thu, 19 Apr 2018 13:09:29 -0400 Waiman Long <longman@xxxxxxxxxx> wrote:
>
> > It was found that reading /proc/stat could be time consuming on
> > systems with a lot of irqs. For example, reading /proc/stat in a
> > certain 2-socket Skylake server took about 4.6ms because it had over
> > 5k irqs. In that particular case, the majority of the CPU cycles for
> > reading /proc/stat was spent in the kstat_irqs() function. Therefore,
> > application performance can be impacted if the application reads
> > /proc/stat rather frequently.
> >
> > The "intr" line within /proc/stat contains a sum total of all the irqs
> > that have happened followed by a list of irq counts for each individual
> > irq number. In many cases, the first number is good enough. The
> > individual irq counts may not provide that much more information.
> >
> > In order to avoid this kind of performance issue, all these individual
> > irq counts are now separated into a new /proc/stat_irqs file. The
> > sum total irq count will stay in /proc/stat and be duplicated in
> > /proc/stat_irqs. Applications that need to look up individual irq counts
> > will now have to look into /proc/stat_irqs instead of /proc/stat.
> >
>
> (cc /proc maintainer)
>
> It's a non-backward-compatible change. For something which has
> existed for so long, it would be a mighty task to demonstrate that no
> existing userspace will be disrupted by this change.
>
> So we need to think again. A new interface which omits the per-IRQ
> stats might be acceptable.
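(Side note: the "intr" line quoted above carries the grand total as its
first field, so userspace that only wants the total never has to touch
the per-IRQ numbers. A minimal userspace sketch of pulling out just that
total follows; the helper name and the buffer size are illustrative
choices for this sketch, not part of the patch.)

#include <stdio.h>
#include <string.h>

/* Return the first field of the "intr" line, i.e. the summed irq count. */
static long long read_intr_total(void)
{
	/*
	 * Assumption: 64k covers the leading fields we care about; the
	 * full "intr" line can be longer on machines with many irqs.
	 */
	char line[1 << 16];
	long long total = -1;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "intr ", 5)) {
			sscanf(line + 5, "%lld", &total);
			break;
		}
	}
	fclose(f);
	return total;
}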

Here is a profile of open+read+close of /proc/stat.

About 30% of it is spent taking and releasing a mutex only to print "0".

+   98.80%     0.04%  a.out  [kernel.vmlinux]  [k] entry_SYSCALL_64
+   98.75%     0.10%  a.out  [kernel.vmlinux]  [k] do_syscall_64
+   95.56%     0.04%  a.out  libc-2.25.so      [.] __GI___libc_read
+   95.09%     0.01%  a.out  [kernel.vmlinux]  [k] sys_read
+   95.04%     0.03%  a.out  [kernel.vmlinux]  [k] vfs_read
+   94.98%     0.05%  a.out  [kernel.vmlinux]  [k] proc_reg_read
+   94.98%     0.00%  a.out  [kernel.vmlinux]  [k] __vfs_read
+   94.92%     0.06%  a.out  [kernel.vmlinux]  [k] seq_read
+   94.52%     3.65%  a.out  [kernel.vmlinux]  [k] show_stat
+   48.62%     2.59%  a.out  [kernel.vmlinux]  [k] kstat_irqs_usr
+   33.52%     9.55%  a.out  [kernel.vmlinux]  [k] seq_put_decimal_ull
+   19.63%    19.59%  a.out  [kernel.vmlinux]  [k] memcpy_erms
+   17.34%     9.53%  a.out  [kernel.vmlinux]  [k] kstat_irqs
-   15.45%    15.43%  a.out  [kernel.vmlinux]  [k] mutex_lock
      15.43% __GI___libc_read
         entry_SYSCALL_64
         do_syscall_64
         sys_read
         vfs_read
         __vfs_read
         proc_reg_read
       - seq_read
          - 15.41% show_stat
               kstat_irqs_usr
               mutex_lock
+   13.32%    13.27%  a.out  [kernel.vmlinux]  [k] mutex_unlock
+    4.60%     1.35%  a.out  [kernel.vmlinux]  [k] cpumask_next
+    3.03%     3.03%  a.out  [kernel.vmlinux]  [k] __radix_tree_lookup
+    2.96%     0.08%  a.out  [kernel.vmlinux]  [k] seq_printf
+    2.92%     0.02%  a.out  libc-2.25.so      [.] __GI___libc_open
+    2.89%     0.07%  a.out  [kernel.vmlinux]  [k] seq_vprintf
+    2.81%     0.70%  a.out  [kernel.vmlinux]  [k] vsnprintf
+    2.66%     2.66%  a.out  [kernel.vmlinux]  [k] _find_next_bit
+    2.42%     1.36%  a.out  [kernel.vmlinux]  [k] num_to_str
+    2.41%     0.19%  a.out  [kernel.vmlinux]  [k] get_idle_time
+    2.39%     0.02%  a.out  [kernel.vmlinux]  [k] do_sys_open
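
The profiled workload is an open+read+close loop on /proc/stat; a
minimal sketch of such a loop would be something along these lines (the
exact a.out above differs; the iteration count and buffer size below
are arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Big enough to slurp /proc/stat in a few read() calls even with
	 * a long "intr" line; purely an assumption for this sketch.
	 */
	static char buf[1 << 20];
	const int iters = 1000;
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (int i = 0; i < iters; i++) {
		int fd = open("/proc/stat", O_RDONLY);

		if (fd < 0)
			return 1;
		while (read(fd, buf, sizeof(buf)) > 0)
			;
		close(fd);
	}
	clock_gettime(CLOCK_MONOTONIC, &b);

	printf("%.1f us per open+read+close\n",
	       ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) /
	       iters / 1e3);
	return 0;
}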