Re: [PATCH 2/2] sched/core: Avoid unnecessary update in tg_set_cfs_bandwidth

From: Chengming Zhou
Date: Mon Jul 22 2024 - 03:21:59 EST


On 2024/7/21 20:52, Chuyi Zhou wrote:
> In our Kubernetes production environment, we have observed a high
> frequency of writes to cpu.max, approximately one write every 2~4
> seconds per cgroup, with the same value being written each time. This
> can result in unnecessary overhead, especially on machines with a
> large number of CPUs and cgroups.
>
> This is because kubelet and runc attempt to persist resource
> configurations through frequent writes of the same value in this
> manner.
Ok.

> While optimizations can be made to kubelet and runc to avoid such
> overhead (e.g. checking the current value of the cpu request/limit
> before writing to cpu.max), it is still worthwhile to bail out of
> tg_set_cfs_bandwidth() when we attempt to update with the same values.

Yeah, we can optimize this situation with a bit of checking code;
seems worthwhile to do IMHO.
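
For reference, the userspace side of that check could look roughly like
the sketch below. This is illustrative only, not actual kubelet/runc
code; the helper name and the cgroup path are made up:

#include <stdio.h>
#include <string.h>

/* Read cpu.max first and only write when the value would change. */
static int write_cpu_max_if_changed(const char *path, const char *want)
{
	char cur[64] = "";
	FILE *f = fopen(path, "r");

	if (f) {
		if (fgets(cur, sizeof(cur), f))
			cur[strcspn(cur, "\n")] = '\0';
		fclose(f);
	}

	if (!strcmp(cur, want))
		return 0;	/* same value, skip the no-op kernel update */

	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%s\n", want);
	fclose(f);
	return 0;
}

int main(void)
{
	/* cpu.max format is "$MAX $PERIOD" in microseconds */
	return write_cpu_max_if_changed("/sys/fs/cgroup/mypod/cpu.max",
					"100000 100000");
}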


> Signed-off-by: Chuyi Zhou <zhouchuyi@xxxxxxxxxxxxx>
> ---
>  kernel/sched/core.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 6d35c48239be..4db3ef2a703b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -9081,6 +9081,8 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota,
>  			burst + quota > max_cfs_runtime))
>  		return -EINVAL;
>
> +	if (cfs_b->period == ns_to_ktime(period) && cfs_b->quota == quota && cfs_b->burst == burst)
> +		return 0;

Maybe we'd better do these checks under lock protection, right?
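
Something along these lines, for example (a sketch only; the exact
placement, and whether cfs_b->lock or cfs_constraints_mutex is the
right thing to hold here, is left open):

	/*
	 * Do the no-op comparison with cfs_b->lock held, so it cannot
	 * race against a concurrent writer updating period/quota/burst.
	 */
	raw_spin_lock_irq(&cfs_b->lock);
	if (cfs_b->period == ns_to_ktime(period) &&
	    cfs_b->quota == quota && cfs_b->burst == burst) {
		raw_spin_unlock_irq(&cfs_b->lock);
		return 0;
	}
	raw_spin_unlock_irq(&cfs_b->lock);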

Thanks.

>  	/*
>  	 * Prevent race between setting of cfs_rq->runtime_enabled and
>  	 * unthrottle_offline_cfs_rqs().