Re: [PATCH V6] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes

From: Jesper Dangaard Brouer
Date: Wed Jul 10 2024 - 08:24:59 EST

On 10/07/2024 01.17, Shakeel Butt wrote:
On Tue, Jul 09, 2024 at 01:20:48PM GMT, Jesper Dangaard Brouer wrote:
Avoid lock contention on the global cgroup rstat lock caused by kswapd
starting on all NUMA nodes simultaneously. At Cloudflare, we observed
massive issues due to kswapd and the specific mem_cgroup_flush_stats()
call inlined in shrink_node, which takes the rstat lock.

On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
we noted severe lock contention on the rstat lock. This contention
causes 12 CPUs to waste cycles spinning every time kswapd runs.
Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
due to spinning on the rstat lock.

To help reviewers follow the code: __alloc_pages_slowpath() calls
wake_all_kswapds(), causing all kswapdN threads to wake up
simultaneously. Each kswapd thread invokes shrink_node() (via
balance_pgdat()), triggering the cgroup rstat flush operation as part of
its work. Thus, the kernel self-induces rstat lock contention by waking
up all kswapd threads at the same time. Leveraging this detail:
balance_pgdat() has a NULL value in target_mem_cgroup, which causes
mem_cgroup_flush_stats() to flush with root_mem_cgroup.
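
As a condensed sketch of that call path (simplified from mm/vmscan.c;
not the literal kernel code, details vary by kernel version):

  /* kswapd context: reclaim is not targeting a specific memcg */
  static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
  {
          struct scan_control sc = {
                  .target_mem_cgroup = NULL,  /* kswapd: no target memcg */
                  /* ... */
          };
          /* ... */
          shrink_node(pgdat, &sc);          /* via kswapd_shrink_node() */
          return 0;
  }

  static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
  {
          /* A NULL memcg makes mem_cgroup_flush_stats() fall back to
           * root_mem_cgroup, so every kswapd thread flushes the whole
           * cgroup tree under the global rstat lock.
           */
          mem_cgroup_flush_stats(sc->target_mem_cgroup);
          /* ... actual node reclaim ... */
  }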

To avoid this kind of thundering herd problem, the kernel previously had
a "stats_flush_ongoing" concept, but that was removed as part of commit
7d7ef0a4686a ("mm: memcg: restore subtree stats flushing"). This patch
reintroduces and generalizes the concept to apply to all users of cgroup
rstat, not just memcg.

If there is an ongoing rstat flush, and the current cgroup is a
descendant of the ongoing flusher, then it is unnecessary to do the
flush. For callers to still see updated stats, wait for the ongoing
flusher to complete before returning, but with a timeout, as the stats
are already inaccurate given that updaters keep running.
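
In pseudo-C, the idea is roughly the following (a minimal sketch with
illustrative names such as flush_done and MAX_FLUSH_WAIT; it ignores the
init/reuse of the completion and the races the real patch must handle):

  static struct cgroup *cgrp_rstat_ongoing_flusher;  /* NULL: no flusher */

  void cgroup_rstat_flush(struct cgroup *cgrp)
  {
          struct cgroup *ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);

          /* An ancestor is already flushing: its flush covers this
           * subtree, so wait (bounded) for it instead of contending
           * on the global rstat lock.
           */
          if (ongoing && cgroup_is_descendant(cgrp, ongoing)) {
                  /* flush_done: illustrative completion in struct cgroup */
                  wait_for_completion_interruptible_timeout(
                          &ongoing->flush_done, MAX_FLUSH_WAIT);
                  return;
          }

          spin_lock_irq(&cgroup_rstat_lock);
          WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
          cgroup_rstat_flush_locked(cgrp);
          WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
          complete_all(&cgrp->flush_done);
          spin_unlock_irq(&cgroup_rstat_lock);
  }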

Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
Signed-off-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
---
V5: https://lore.kernel.org/all/171956951930.1897969.8709279863947931285.stgit@firesoul/

Does this version fix the contention you are observing in production
with v5?

No conclusions yet, as I'm still waiting for production servers to
reboot into my experimental kernel.

The V5 contention issue is observable via a bpftrace one-liner that
records lock contention and the processes that observe it:

sudo bpftrace -e '
tracepoint:cgroup:cgroup_rstat_lock_contended { @cnt[comm]=count()}
interval:s:1 {time("%H:%M:%S "); print(@cnt); clear(@cnt);}'

Example output:

11:52:34
11:52:35 @cnt[kswapd4]: 114
@cnt[kswapd5]: 115
11:52:36
11:52:37
11:52:38
11:52:39
11:52:40
11:52:41 @cnt[kswapd2]: 124
@cnt[kswapd1]: 125
@cnt[kswapd7]: 137
@cnt[kswapd0]: 137

As we can see above, the kswapd processes, which must be flushing the
root cgroup and should therefore be waiting on
cgrp_rstat_ongoing_flusher, are still seeing lock contention. This
indicates that the race this patch addresses exists.

For the record, a production server without this patch (same HW
generation) looks like this, so there is a significant improvement:

12:08:59 @cnt[kswapd2]: 565
@cnt[kswapd8]: 574
@cnt[kswapd9]: 575
@cnt[kswapd5]: 576
@cnt[kswapd6]: 577
@cnt[kswapd11]: 577
@cnt[kswapd3]: 578
@cnt[kswapd0]: 578
@cnt[kswapd4]: 688
@cnt[kswapd10]: 758
@cnt[kswapd1]: 768
@cnt[kswapd7]: 875


I'm going to send a V7 patch, because this V6 has an issue with the use
of tracepoints for the trylock scheme, which breaks my bpftrace
script [1].

Coding it up now... I'm also adding a tracepoint for the
cgrp_rstat_ongoing_flusher wait, such that we can measure this wait. In
addition, I'm adding a race indicator that we can read from this new
tracepoint, as it will be helpful to prove/measure whether this race is
happening, and it is needed to tell the race apart from the normal
cgroup_rstat_lock_contended case.
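
To sketch the kind of tracepoint I have in mind (hypothetical name and
fields; the actual V7 code may differ):

  /* Hypothetical sketch for include/trace/events/cgroup.h */
  TRACE_EVENT(cgroup_rstat_wait_for_ongoing_flusher,

          TP_PROTO(struct cgroup *cgrp, bool race),

          TP_ARGS(cgrp, race),

          TP_STRUCT__entry(
                  __field(u64,  id)
                  __field(int,  level)
                  __field(bool, race)   /* raced with the ongoing flusher */
          ),

          TP_fast_assign(
                  __entry->id    = cgroup_id(cgrp);
                  __entry->level = cgrp->level;
                  __entry->race  = race;
          ),

          TP_printk("cgroup_id=%llu level=%d race=%d",
                    (unsigned long long)__entry->id,
                    __entry->level, __entry->race)
  );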


--Jesper


[1] https://github.com/xdp-project/xdp-project/blob/master/areas/latency/cgroup_rstat_tracepoint.bt