Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)
From: Yosry Ahmed
Date: Thu Mar 27 2025 - 13:17:28 EST
On Thu, Mar 27, 2025 at 03:38:50PM +0100, Mateusz Guzik wrote:
> On Wed, Mar 19, 2025 at 05:18:05PM +0000, Yosry Ahmed wrote:
> > On Wed, Mar 19, 2025 at 11:47:32AM +0100, Mateusz Guzik wrote:
> > > Isn't this going a little too far?
> > >
> > > The lock + irq trip is quite expensive in its own right, and it is now
> > > going to be paid for each CPU, meaning the total time spent executing
> > > cgroup_rstat_flush_locked() is going to go up.
> > >
> > > Would your problem go away if this were toggled every -- say -- 8 CPUs?
> >
> > I was concerned about this too, and about more lock bouncing, but the
> > testing suggests that this actually improves the latency of
> > cgroup_rstat_flush_locked() overall (at least on the tested HW).
> >
> > So I don't think we need to do something like this unless a regression
> > is observed.
> >
>
> To my reading it reduces the max time spent with irqs disabled, which of
> course it does -- after all, it toggles them for every CPU.
>
> Per my other e-mail in the thread, the irq + lock trips are still not
> cheap, at least on Sapphire Rapids.
>
> In my testing, outlined below, I see an 11% increase in total execution
> time with the irq + lock trip taken for every CPU in a 24-way VM.
>
> So I stand by doing this every n CPUs instead, call it 8 or whatever.
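>
> To illustrate, a rough and untested sketch of the idea on top of the
> per-cpu lock/unlock variant (helper names approximate, batch size
> arbitrary), which only pays the irq + lock trip once per batch of cpus:
>
> #define RSTAT_FLUSH_BATCH	8
>
> static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> {
> 	int cpu, batched = 0;
>
> 	for_each_possible_cpu(cpu) {
> 		/* ... flush this cpu's updated tree, as before ... */
>
> 		if (++batched < RSTAT_FLUSH_BATCH)
> 			continue;
> 		batched = 0;
>
> 		/* drop the lock (and re-enable irqs) every N cpus, not every cpu */
> 		__cgroup_rstat_unlock(cgrp, cpu);
> 		if (!cond_resched())
> 			cpu_relax();
> 		__cgroup_rstat_lock(cgrp, cpu);
> 	}
> }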
>
> How to repro:
>
> I employed a poor-man's profiler like so:
>
> bpftrace -e '
>     kprobe:cgroup_rstat_flush_locked { @start[tid] = nsecs; }
>     kretprobe:cgroup_rstat_flush_locked /@start[tid]/ {
>         print(nsecs - @start[tid]);
>         delete(@start[tid]);
>     }
>     interval:s:60 { exit(); }'
>
> With or without this patch, the execution time varies wildly even while
> the box is idle.
>
> The above runs for a minute, collecting 23 samples (you may get
> "lucky" and get one extra; in that case remove it for the comparison).
>
> A sysctl was added to toggle the new behavior vs the old one. The patch
> is at the end.
>
> "enabled"(1) means new behavior, "disabled"(0) means the old one.
>
> Sum of nsecs (results piped to: awk '{ sum += $1 } END { print sum }'):
> disabled: 903610
> enabled: 1006833 (+11.4%)

IIUC this calculates the amount of elapsed time between start and
finish, not necessarily the function's own execution time. Is it
possible that the increase in time is due to more interrupts arriving
during the function's execution (which is what we want), rather than
more time being spent on disabling/enabling IRQs?
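
If it helps, something like the below (untested, off the top of my head,
and assuming the irq:irq_handler_entry/exit tracepoints are usable here)
could separate the two by also accumulating the hardirq handler time that
lands inside the window. Hardirqs run with the interrupted task still
current, so keying by tid should attribute them to the flusher:

bpftrace -e '
    kprobe:cgroup_rstat_flush_locked { @start[tid] = nsecs; @irq[tid] = 0; }
    // sum hardirq handler time observed while the flusher is current
    tracepoint:irq:irq_handler_entry /@start[tid]/ { @ient[tid] = nsecs; }
    tracepoint:irq:irq_handler_exit /@ient[tid]/ {
        @irq[tid] += nsecs - @ient[tid];
        delete(@ient[tid]);
    }
    kretprobe:cgroup_rstat_flush_locked /@start[tid]/ {
        printf("total %llu irq %llu\n", nsecs - @start[tid], @irq[tid]);
        delete(@start[tid]);
        delete(@irq[tid]);
    }
    interval:s:60 { exit(); }'

If most of the extra time shows up as hardirq time, that would point at
interrupts being serviced during the flush rather than at the cost of the
irq toggling itself.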