Re: [PATCH v3] mm: memcg: use rstat for non-hierarchical stats

From: Michal Hocko
Date: Tue Aug 01 2023 - 10:32:01 EST


On Wed 26-07-23 15:32:23, Yosry Ahmed wrote:
> Currently, memcg uses rstat to maintain aggregated hierarchical stats.
> Counters are maintained for hierarchical stats at each memcg. Rstat
> tracks which cgroups have updates on which cpus to keep those counters
> fresh on the read side.
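>
> As a toy illustration of that idea (made-up names, a userspace model,
> and the hierarchy walk is left out): updates mark the (cgroup, cpu)
> pairs they touch, so a flush only visits marked pairs instead of
> every per-cpu counter:
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define NR_CPUS   4
> #define NR_GROUPS 3
>
> /* Per-cpu counter for one stat, per cgroup. */
> static long pcpu_count[NR_GROUPS][NR_CPUS];
> /* rstat-like bookkeeping: was this cgroup updated on this cpu? */
> static bool updated[NR_GROUPS][NR_CPUS];
> /* Aggregated counter, kept fresh by flushing. */
> static long aggregated[NR_GROUPS];
>
> /* Write side: bump the local counter, mark the (group, cpu) pair. */
> static void stat_add(int grp, int cpu, long delta)
> {
>         pcpu_count[grp][cpu] += delta;
>         updated[grp][cpu] = true;
> }
>
> /* Flush: only walk (group, cpu) pairs known to have changed. */
> static void stat_flush(void)
> {
>         for (int grp = 0; grp < NR_GROUPS; grp++)
>                 for (int cpu = 0; cpu < NR_CPUS; cpu++) {
>                         if (!updated[grp][cpu])
>                                 continue;
>                         aggregated[grp] += pcpu_count[grp][cpu];
>                         pcpu_count[grp][cpu] = 0;
>                         updated[grp][cpu] = false;
>                 }
> }
>
> int main(void)
> {
>         stat_add(1, 2, 5);
>         stat_flush();
>         printf("group 1: %ld\n", aggregated[1]); /* prints 5 */
>         return 0;
> }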
>
> Non-hierarchical stats are currently not covered by rstat. Their
> per-cpu counters are summed up on every read, which is expensive; a
> sketch of that read pattern follows this paragraph. The original
> implementation did the same. At some point before rstat,
> non-hierarchical aggregated counters were introduced by
> commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
> memory.stat reporting"). However, those counters were updated on the
> performance-critical write side, which caused regressions, so they
> were later removed by commit 815744d75152 ("mm: memcontrol: don't
> batch updates of local VM stats and events"). See [1] for a more
> detailed history.
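>
> For contrast, a sketch of that summed-on-every-read pattern
> (hypothetical names; the real code loops over every possible cpu for
> every stat of every memcg being read):
>
> #include <stdio.h>
>
> #define NR_CPUS 256
>
> /* One stat counter per cpu; writes only touch the local slot. */
> static long pcpu_count[NR_CPUS];
>
> /* Every read walks all cpus, even those that never updated. */
> static long stat_read_local(void)
> {
>         long sum = 0;
>
>         for (int cpu = 0; cpu < NR_CPUS; cpu++)
>                 sum += pcpu_count[cpu];
>         return sum;
> }
>
> int main(void)
> {
>         pcpu_count[3] = 7;
>         printf("local sum: %ld\n", stat_read_local()); /* prints 7 */
>         return 0;
> }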
>
> Kernel versions between a983b5ebee57 and 815744d75152 (about a year
> and a half) enjoyed cheap reads of non-hierarchical stats,
> specifically on cgroup v1. Users moving to more recent kernels
> observe a performance regression when reading non-hierarchical stats.
>
> Now that we have rstat, we know exactly which per-cpu counters have
> updates for each stat. We can maintain non-hierarchical counters
> again, making reads much more efficient, without affecting the
> performance-critical write side. Hence, add non-hierarchical (i.e.
> local) counters for the stats, and extend rstat flushing to keep
> them up to date.
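>
> A toy model of that extended flush (illustrative names, not the
> patch itself; the real code folds each per-cpu delta into the
> memcg's existing hierarchical counters and, with this change, into a
> new local counter as well):
>
> #include <stdio.h>
>
> #define NR_CPUS 4
>
> struct group {
>         struct group *parent;
>         long pcpu[NR_CPUS];  /* per-cpu pending deltas */
>         long hier;           /* hierarchical total (self + children) */
>         long local;          /* non-hierarchical (local) total */
> };
>
> /* Flush one group: fold per-cpu deltas into both counters. */
> static void group_flush(struct group *grp)
> {
>         for (int cpu = 0; cpu < NR_CPUS; cpu++) {
>                 long delta = grp->pcpu[cpu];
>
>                 if (!delta)
>                         continue;
>                 grp->pcpu[cpu] = 0;
>                 grp->local += delta;              /* new: local */
>                 for (struct group *g = grp; g; g = g->parent)
>                         g->hier += delta;         /* existing: hierarchy */
>         }
> }
>
> int main(void)
> {
>         struct group parent = { 0 };
>         struct group child = { .parent = &parent };
>
>         child.pcpu[1] = 3;
>         group_flush(&child);
>         printf("child local=%ld hier=%ld, parent hier=%ld\n",
>                child.local, child.hier, parent.hier); /* 3, 3, 3 */
>         return 0;
> }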
>
> A caveat is that we now need a stats flush before reading
> local/non-hierarchical stats through {memcg/lruvec}_page_state_local()
> or memcg_events_local(), where we previously only needed a flush to
> read hierarchical stats. Most contexts reading non-hierarchical stats
> are already doing a flush; add a flush to the one remaining context,
> count_shadow_nodes().
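>
> A sketch of the count_shadow_nodes() change (illustrative hunk, not
> the literal diff; the exact flush primitive may be a ratelimited
> variant):
>
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ count_shadow_nodes() @@
>          struct lruvec *lruvec;
>          int i;
>
> +        mem_cgroup_flush_stats();
>          lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>          for (pages = 0, i = 0; i < NR_LRUS; i++)
>                  pages += lruvec_page_state_local(lruvec,
>                                                   NR_LRU_BASE + i);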
>
> With this patch, reading memory.stat from 1000 memcgs is about 4x
> faster on a machine with 256 cpus on cgroup v1:
> # for i in $(seq 1000); do mkdir /sys/fs/cgroup/memory/cg$i; done
> # time cat /sys/fs/cgroup/memory/cg*/memory.stat > /dev/null
>
> Before:
> real 0m0.125s
> user 0m0.005s
> sys 0m0.120s
>
> After:
> real 0m0.032s
> user 0m0.005s
> sys 0m0.027s

Have you measured any potential regression for cgroup v2, which
collects all this data without ever using it (AFAICS)?
--
Michal Hocko
SUSE Labs