Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)
From: Mateusz Guzik
Date: Thu Mar 27 2025 - 13:48:20 EST
On Thu, Mar 27, 2025 at 6:17 PM Yosry Ahmed <yosry.ahmed@xxxxxxxxx> wrote:
>
> On Thu, Mar 27, 2025 at 03:38:50PM +0100, Mateusz Guzik wrote:
> > On Wed, Mar 19, 2025 at 05:18:05PM +0000, Yosry Ahmed wrote:
> > > On Wed, Mar 19, 2025 at 11:47:32AM +0100, Mateusz Guzik wrote:
> > > > Is not this going a little too far?
> > > >
> > > > the lock + irq trip is quite expensive in its own right and now is
> > > > going to be paid for each cpu, as in the total time spent executing
> > > > cgroup_rstat_flush_locked is going to go up.
> > > >
> > > > Would your problem go away toggling this every -- say -- 8 cpus?
> > >
> > > I was concerned about this too, and about more lock bouncing, but the
> > > testing suggests that this actually overall improves the latency of
> > > cgroup_rstat_flush_locked() (at least on tested HW).
> > >
> > > So I don't think we need to do something like this unless a regression
> > > is observed.
> > >
> >
> > To my reading it reduces max time spent with irq disabled, which of
> > course it does -- after all it toggles it for every CPU.
> >
> > Per my other e-mail in the thread the irq + lock trips remain not cheap
> > at least on Sapphire Rapids.
> >
> > In my testing outlined below I see 11% increase in total execution time
> > with the irq + lock trip for every CPU in a 24-way vm.
> >
> > So I stand by instead doing this every n CPUs, call it 8 or whatever.
> >
> > How to repro:
> >
> > I employed a poor-man's profiler like so:
> >
> > bpftrace -e 'kprobe:cgroup_rstat_flush_locked { @start[tid] = nsecs; } kretprobe:cgroup_rstat_flush_locked /@start[tid]/ { print(nsecs - @start[tid]); delete(@start[tid]); } interval:s:60 { exit(); }'
> >
> > This patch or not, execution time varies wildly even while the box is idle.
> >
> > The above runs for a minute, collecting 23 samples (you may get
> > "lucky" and get one extra, in that case remove it for comparison).
> >
> > A sysctl was added to toggle the new behavior vs old one. Patch at the
> > end.
> >
> > "enabled"(1) means new behavior, "disabled"(0) means the old one.
> >
> > Sum of nsecs (results piped to: awk '{ sum += $1 } END { print sum }'):
> > disabled: 903610
> > enabled: 1006833 (+11.4%)
>
> IIUC this calculates the amount of elapsed time between start and
> finish, not necessarily the function's own execution time. Is it
> possible that the increase in time is due to more interrupts arriving
> during the function execution (which is what we want), rather than more
> time being spent on disabling/enabling IRQs?
I can agree irq handlers have more opportunities to execute in the
toggling case and that the time accounted the way above will include
them. I don't think that explains it, but fair enough, let's test in a
way which avoids the problem.
I feel compelled to note that atomics on x86-64 have been expensive
for as long as the architecture has been around, so I'm confused by
the resistance to the notion that they remain costly even on modern
uarchs. If anything, imo it is the claim that they are cheap which
requires strong evidence.
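If someone wants to eyeball the difference, a toy userspace loop is
enough to show the lock prefix is far from free even with the line
sitting exclusive in L1. This is not from the thread and makes no
pretense at rigor (no serialization around rdtsc and so on), it merely
shows the ballpark cost the LOCK prefix adds:

#include <stdio.h>
#include <x86intrin.h>

#define ITERS 100000000ULL

int main(void)
{
	unsigned long val = 0;
	unsigned long long t0, t1, plain, locked, i;

	/* plain read-modify-write on a private cacheline */
	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		asm volatile("addq $1, %0" : "+m" (val));
	t1 = __rdtsc();
	plain = t1 - t0;

	/* same thing with the LOCK prefix */
	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		asm volatile("lock addq $1, %0" : "+m" (val));
	t1 = __rdtsc();
	locked = t1 - t0;

	printf("plain add: %llu cycles, lock add: %llu cycles (%.1fx)\n",
	       plain, locked, (double)locked / plain);
	return 0;
}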
That said, I modified the patch to add a variant which issues the
conditional relock if needed and smp_mb() otherwise -- irqs remain
disabled, but we still pay for a full fence. On x86-64 smp_mb() is a
lock-prefixed add of $0 just below the stack pointer. Note this does
less work than what was added in your patch.
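For reference, to my reading of arch/x86/include/asm/barrier.h (worth
double-checking against your tree), smp_mb() on x86-64 boils down to:

#define __smp_mb()	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")

i.e. a lock-prefixed add of 0 just below the stack pointer, acting as
a full fence without resorting to mfence.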
The modified section looks like this:
switch (READ_ONCE(magic_tunable)) {
case 1:
	/* new behavior: unconditionally drop and retake the lock for every CPU */
	__cgroup_rstat_unlock(cgrp, cpu);
	if (!cond_resched())
		cpu_relax();
	__cgroup_rstat_lock(cgrp, cpu);
	break;
case 2:
	/* test variant: relock only if needed, otherwise pay for a full fence */
	if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
		__cgroup_rstat_unlock(cgrp, cpu);
		if (!cond_resched())
			cpu_relax();
		__cgroup_rstat_lock(cgrp, cpu);
	} else {
		smp_mb();
	}
	break;
default:
	/* old behavior: relock only if needed */
	if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
		__cgroup_rstat_unlock(cgrp, cpu);
		if (!cond_resched())
			cpu_relax();
		__cgroup_rstat_lock(cgrp, cpu);
	}
	break;
}
With this in place (tunable set to 2) I'm seeing about a 4% increase
in execution time measured the same way, so irq handlers sneaking in
don't explain it. Note smp_mb() alone is a smaller cost than the
locked instruction + func calls + irq trips. I should also note I'm
running this in a 24-way VM, where paravirt spinlocks issue a
lock-prefixed instruction to release the lock as well. I would say
this very much justifies the original claim of 11% with the patch as
proposed.
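To be explicit about the alternative I'm suggesting, the every-n-CPUs
variant would look roughly like the sketch below. This is strictly
illustrative: 'nr_flushed' stands in for whatever per-cpu iteration
counter the flush loop would maintain and 8 is an arbitrary batch
size.

	/* pay the unlock/relock (and the irq trip) only every 8 CPUs */
	if ((++nr_flushed % 8) == 0) {
		__cgroup_rstat_unlock(cgrp, cpu);
		if (!cond_resched())
			cpu_relax();
		__cgroup_rstat_lock(cgrp, cpu);
	}

This bounds the toggling to num_cpu / 8 occurrences per flush while
still providing periodic preemption points; whether to also keep the
need_resched()/spin_needbreak() check for the CPUs in between is a
separate knob.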
--
Mateusz Guzik <mjguzik gmail.com>