Re: [RFC PATCH 2/7] sched/fair: Handle throttle path for task based throttle
From: Aaron Lu
Date: Mon Mar 24 2025 - 04:59:35 EST
On Thu, Mar 20, 2025 at 11:40:11AM -0700, Xi Wang wrote:
...
> I am a bit unsure about the overhead experiment results. Maybe we can add some
> counters to check how many cgroups per cpu are actually touched and how many
> threads are actually dequeued / enqueued for throttling / unthrottling?
Sure thing.
> Looks like busy loop workloads were used for the experiment. With throttling
> deferred to exit_to_user_mode, it would only be triggered by ticks. A large
> runtime debt can accumulate before the on cpu threads are actually dequeued.
> (Also noted in https://lore.kernel.org/lkml/20240711130004.2157737-11-vschneid@xxxxxxxxxx/)
>
> distribute_cfs_runtime would finish early if the quotas are used up by the first
> few cpus, which would also result in throttling/unthrottling for only a few
> runqueues per period. An intermittent workload like hackbench may give us more
> information.
I've added some trace prints and noticed it already involved almost all
cpu rqs on that 2 sockets/384 cpus test system, so I suppose it's OK to
continue using that setup as described before:
https://lore.kernel.org/lkml/CANCG0GdOwS7WO0k5Fb+hMd8R-4J_exPTt2aS3-0fAMUC5pVD8g@xxxxxxxxxxxxxx/
Below is one print sample:
<idle>-0 [214] d.h1. 1879.281972: distribute_cfs_runtime: cpu214: begins
<idle>-0 [214] dNh2. 1879.283564: distribute_cfs_runtime: cpu214: finishes. unthrottled rqs=383, unthrottled_cfs_rq=1101, unthrottled_task=69
With async unthrottle, it's not possible to account exactly how many
cfs_rqs are unthrottled and how many tasks are enqueued back overall;
we only know how many rqs are involved and how many cfs_rqs/tasks the
local cpu has unthrottled. So this sample means that in
distribute_cfs_runtime(), 383 rqs are involved, and the local cpu has
unthrottled 1101 cfs_rqs and enqueued back 69 tasks.
The corresponding bpftrace histogram (duration of
distribute_cfs_runtime(), in nanoseconds) is below; a sketch of one way
to collect it follows the histogram:
@durations:
[4K, 8K)             9 |                                                    |
[8K, 16K)          227 |@@@@@@@@@@@@@@                                      |
[16K, 32K)         120 |@@@@@@@                                             |
[32K, 64K)          70 |@@@@                                                |
[64K, 128K)          0 |                                                    |
[128K, 256K)         0 |                                                    |
[256K, 512K)         0 |                                                    |
[512K, 1M)         158 |@@@@@@@@@                                           |
[1M, 2M)           832 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2M, 4M)           177 |@@@@@@@@@@@                                         |
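For reference, a histogram like the above can be collected with a
kprobe/kretprobe pair on distribute_cfs_runtime(); a minimal sketch
(assuming the function is not inlined and is visible to kprobes), not
necessarily the exact script used here:

bpftrace -e '
kprobe:distribute_cfs_runtime { @start[tid] = nsecs; }
kretprobe:distribute_cfs_runtime /@start[tid]/ {
	/* duration of this invocation, in nanoseconds */
	@durations = hist(nsecs - @start[tid]);
	delete(@start[tid]);
}'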
Thanks,
Aaron
> See slide 10 of my presentation for more info:
> https://lpc.events/event/18/contributions/1883/attachments/1420/3040/Priority%20Inheritance%20for%20CFS%20Bandwidth%20Control.pdf
>
> Indeed we are seeing more cfsb scalability problems with larger servers. Both
> the irq off time from the cgroup walk and the overheads from per task actions
> can be problematic.
>
> -Xi
Subject: [DEBUG PATCH] sched/fair: add profiling for distribute_cfs_runtime()
---
kernel/sched/fair.c | 10 ++++++++++
kernel/sched/sched.h | 2 ++
2 files changed, 12 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d646451d617c1..a4e3780c076e3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5922,10 +5922,12 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 		cfs_rq->throttled_clock_self_time += delta;
 	}
 
+	rq->unthrottled_cfs_rq++;
 	/* Re-enqueue the tasks that have been throttled at this level. */
 	list_for_each_entry_safe(p, tmp, &cfs_rq->throttled_limbo_list, throttle_node) {
 		list_del_init(&p->throttle_node);
 		enqueue_task_fair(rq_of(cfs_rq), p, ENQUEUE_WAKEUP);
+		rq->unthrottled_task++;
 	}
 
 	/* Add cfs_rq with load or one or more already running entities to the list */
@@ -6192,6 +6194,9 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 	struct rq_flags rf;
 	struct rq *rq;
 	LIST_HEAD(local_unthrottle);
+	unsigned int unthrottled_rqs = 0;
+
+	trace_printk("cpu%d: begins\n", this_cpu);
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
@@ -6228,6 +6233,7 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 		if (cfs_rq->runtime_remaining > 0) {
 			if (cpu_of(rq) != this_cpu) {
 				unthrottle_cfs_rq_async(cfs_rq);
+				unthrottled_rqs++;
 			} else {
 				/*
 				 * We currently only expect to be unthrottling
@@ -6250,12 +6256,16 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 		struct rq *rq = rq_of(cfs_rq);
 
 		rq_lock_irqsave(rq, &rf);
+		rq->unthrottled_cfs_rq = rq->unthrottled_task = 0;
 
 		list_del_init(&cfs_rq->throttled_csd_list);
 
 		if (cfs_rq_throttled(cfs_rq))
 			unthrottle_cfs_rq(cfs_rq);
 
+		trace_printk("cpu%d: finishes. unthrottled rqs=%u, unthrottled_cfs_rq=%u, unthrottled_task=%u\n",
+			     raw_smp_processor_id(), unthrottled_rqs,
+			     rq->unthrottled_cfs_rq, rq->unthrottled_task);
 		rq_unlock_irqrestore(rq, &rf);
 	}
 	SCHED_WARN_ON(!list_empty(&local_unthrottle));
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5c2af5a70163c..d004da2bc9071 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1309,6 +1309,8 @@ struct rq {
 #if defined(CONFIG_CFS_BANDWIDTH) && defined(CONFIG_SMP)
 	call_single_data_t	cfsb_csd;
 	struct list_head	cfsb_csd_list;
+	unsigned int		unthrottled_cfs_rq;
+	unsigned int		unthrottled_task;
 #endif
 };
--
2.39.5