On Wed, Jul 5, 2023 at 11:20 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
On Fri, Jun 30, 2023 at 04:22:28PM -0700, Ivan Babrou wrote:
Hello,

We're seeing CPU load issues with cgroup stats retrieval. I made a
public gist with all the details, including the repro code (which
unfortunately requires heavily loaded hardware) and some flamegraphs:
* https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13
I'll repeat the gist of that gist here. Our repro has the following
output after a warm-up run:
completed: 5.17s [manual / mem-stat + cpu-stat]
completed: 5.59s [manual / cpu-stat + mem-stat]
completed: 0.52s [manual / mem-stat]
completed: 0.04s [manual / cpu-stat]
The first two lines do effectively the following:
for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
The latter two are the same thing, but via two loops:
for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done

As you might've noticed from the output, splitting the loop into two
makes the code run 10x faster. This isn't great, because most
monitoring software likes to get all stats for one service before
reading the stats for the next one, which maps to the slow and
expensive way of doing this.
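
To illustrate that access pattern, a typical collector pass looks
roughly like this (a hypothetical sketch, assuming services live under
/sys/fs/cgroup/system.slice; real collectors differ in the details):

# read both stat files for one service before moving on to the next,
# i.e. the interleaved pattern that matches the slow timings above
for svc in /sys/fs/cgroup/system.slice/*.service; do
    cat "$svc/memory.stat" "$svc/cpu.stat" > /dev/null
done
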
We're running Linux v6.1 (the output is from v6.1.25) with no patches
that touch the cgroup or mm subsystems, so you can assume a vanilla
kernel.
From the flamegraph it just looks like rstat flushing takes longer. I
used the following flags on an AMD EPYC 7642 system (our usual pick,
cpu-clock, was blaming spinlock irqrestore, which was questionable):
perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro
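
In case it helps with reading the gist, a flamegraph can be produced
from the resulting perf.data with Brendan Gregg's FlameGraph scripts,
roughly like this (a sketch, not necessarily the exact pipeline used
for the gist):

perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > repro.svg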
Naturally, there are two questions that arise:
* Is this expected (I guess not, but good to be sure)?
* What can we do to make this better?
I am happy to try out patches or to do some tracing to help understand
this better.
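
For example, something along these lines (a hypothetical bpftrace
one-liner, assuming cgroup_rstat_flush is available as a kprobe target
on this kernel) would give a latency histogram for the flushes:

bpftrace -e 'kprobe:cgroup_rstat_flush { @start[tid] = nsecs; }
  kretprobe:cgroup_rstat_flush /@start[tid]/ {
    @flush_us = hist((nsecs - @start[tid]) / 1000);
    delete(@start[tid]); }'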

Hi Ivan,

Thanks a lot, as always, for reporting this. This is not expected and
should be fixed. Is the issue easy to repro, or is a specific workload
or high load/traffic required? Can you repro this with the latest Linus
tree? Also, do you see any difference in the root's cgroup.stat between
the state where this issue happens and the good state?
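
For reference, on cgroup v2 the root's cgroup.stat can be read directly
(assuming the usual /sys/fs/cgroup mount) and reports the descendant
counts, including dying cgroups:

cat /sys/fs/cgroup/cgroup.stat
# nr_descendants ...
# nr_dying_descendants ...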

I'm afraid there's no easy way to reproduce. We see it from time to
time in different locations. The one that I was looking at for the
initial email does not reproduce it anymore: