Re: [RFC PATCH 2/7] sched/fair: Handle throttle path for task based throttle

From: Benjamin Segall
Date: Fri Mar 28 2025 - 18:47:47 EST


K Prateek Nayak <kprateek.nayak@xxxxxxx> writes:

> Hello Chengming,
>
> On 3/17/2025 8:24 AM, Chengming Zhou wrote:
>> On 2025/3/16 11:25, Josh Don wrote:
>>> Hi Aaron,
>>>
>>>>   static int tg_throttle_down(struct task_group *tg, void *data)
>>>>   {
>>>>          struct rq *rq = data;
>>>>          struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>>>> +       struct task_struct *p;
>>>> +       struct rb_node *node;
>>>> +
>>>> +       cfs_rq->throttle_count++;
>>>> +       if (cfs_rq->throttle_count > 1)
>>>> +               return 0;
>>>>
>>>>          /* group is entering throttled state, stop time */
>>>> -       if (!cfs_rq->throttle_count) {
>>>> -               cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
>>>> -               list_del_leaf_cfs_rq(cfs_rq);
>>>> +       cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
>>>> +       list_del_leaf_cfs_rq(cfs_rq);
>>>>
>>>> -               SCHED_WARN_ON(cfs_rq->throttled_clock_self);
>>>> -               if (cfs_rq->nr_queued)
>>>> -                       cfs_rq->throttled_clock_self = rq_clock(rq);
>>>> +       SCHED_WARN_ON(cfs_rq->throttled_clock_self);
>>>> +       if (cfs_rq->nr_queued)
>>>> +               cfs_rq->throttled_clock_self = rq_clock(rq);
>>>> +
>>>> +       WARN_ON_ONCE(!list_empty(&cfs_rq->throttled_limbo_list));
>>>> +       /*
>>>> +        * rq_lock is held, current is (obviously) executing this in kernelspace.
>>>> +        *
>>>> +        * All other tasks enqueued on this rq have their saved PC at the
>>>> +        * context switch, so they will go through the kernel before returning
>>>> +        * to userspace. Thus, there are no tasks-in-userspace to handle, just
>>>> +        * install the task_work on all of them.
>>>> +        */
>>>> +       node = rb_first(&cfs_rq->tasks_timeline.rb_root);
>>>> +       while (node) {
>>>> +               struct sched_entity *se = __node_2_se(node);
>>>> +
>>>> +               if (!entity_is_task(se))
>>>> +                       goto next;
>>>> +
>>>> +               p = task_of(se);
>>>> +               task_throttle_setup_work(p);
>>>> +next:
>>>> +               node = rb_next(node);
>>>> +       }
>>>
>>> I'd like to strongly push back on this approach. This adds quite a lot
>>> of extra computation to an already expensive path
>>> (throttle/unthrottle). e.g. this function is part of the cgroup walk
>> Actually, with my suggestion in another email that we only need to mark
>> these cfs_rqs throttled when the quota is used up, and set up the
>> throttle task work when the tasks under those cfs_rqs get picked, the
>> cost of the throttle path is much like the dual tree approach.
>> As for the unthrottle path, you're right, it has to iterate over each
>> task on a list to get it queued into the cfs_rq tree, so it needs more
>> thinking.
>>
>>> and so it is already O(cgroups) in the number of cgroups in the
>>> hierarchy being throttled. This gets even worse when you consider that
>>> we repeat this separately across all the CPUs that the
>>> bandwidth-constrained group is running on. Keep in mind that
>>> throttle/unthrottle is done with the rq lock held and IRQs disabled.
>> Maybe we can avoid holding the rq lock when unthrottling? Per-task
>> unthrottle is much like just waking up lots of tasks, so maybe we
>> can reuse the ttwu path to wake up those tasks, which could utilise
>> remote IPIs to avoid holding remote rq locks. I'm not sure, just some
>> random thinking...
>>

Yeah, looping over all the fully throttled tasks in unthrottle still
isn't great (nor is needing to do a full enqueue operation for each).
It's probably much better than looping over all the runnable tasks here,
though.
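
To make that concrete, I'd expect the unthrottle side to end up as
roughly the below. This is only a sketch of my reading of the approach,
not code from this series; the "throttle_node" list field on task_struct
and the exact enqueue flags are made up here.

	static void unthrottle_limbo_tasks(struct rq *rq, struct cfs_rq *cfs_rq)
	{
		struct task_struct *p, *tmp;

		lockdep_assert_rq_held(rq);

		/* Re-enqueue every task that was dequeued when quota ran out. */
		list_for_each_entry_safe(p, tmp, &cfs_rq->throttled_limbo_list,
					 throttle_node) {
			list_del_init(&p->throttle_node);
			/* A full enqueue, with all the PELT/stats work that implies. */
			enqueue_task_fair(rq, p, ENQUEUE_WAKEUP);
		}
	}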

Remote IPIs shouldn't be very helpful, because we already do that in
async unthrottle. Whether or not it's useful to occasionally release the
lock while doing all the per-task unthrottles, like loop_break does, who
knows; but it's certainly easy enough to do while just staying under RCU.
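
For the lock-break part, something shaped like the helper below is what
I have in mind; the helper name and batch threshold are invented for
illustration, and callers would have to tolerate rq state changing
across the gap (with tg/cfs_rq lifetime covered by RCU).

	static void unthrottle_lock_break(struct rq *rq, struct rq_flags *rf,
					  unsigned int *batch)
	{
		if (++*batch < 64)	/* arbitrary threshold */
			return;

		*batch = 0;
		rq_unlock_irqrestore(rq, rf);
		/* Give a waiting CPU / pending IRQ a chance to get in. */
		rq_lock_irqsave(rq, rf);
	}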

>>>
>>> In K Prateek's last RFC, there was discussion of using context
>>> tracking; did you consider that approach any further? We could keep
>>> track of the number of threads within a cgroup hierarchy currently in
>>> kernel mode (similar to h_nr_runnable), and thus simplify down the
>> Yeah, I think Prateek's approach is very creative! The only downside of
>> it is that the code becomes much more complex... on an already complex
>> codebase. Anyway, it would be great if that or this could be merged into
>> the mainline kernel.
>
> I think the consensus is to move to per-task throttling since it'll
> simplify the efforts to move to a flat hierarchy sometime in the near
> future.
>
> My original approach was complex since I wanted to seamlessly resume the
> pick part on unthrottle. In Ben's patch [1] there is a TODO in
> pick_next_entity(), and that probably worked with the older
> vruntime-based CFS algorithm but can break EEVDF guarantees.
>
> [1] https://lore.kernel.org/all/xm26edfxpock.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
>
> Unfortunately, keeping a single rbtree meant replicating the stats, and
> that indeed adds to the complexity.
>

Does anything actually rely on those guarantees for correctness in the
scheduler? Would anything actually break if something overrode
pick_next_task_fair to just pick a random runnable task from the rq, or
similar? I'd only expect us to lose out on fairness, and only to the
extent that we're overriding the pick (and not as an ongoing
repercussion from a single unfair pick).
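
Purely as an illustration of what I mean (not something I'm proposing):
a pick that ignored eligibility entirely and just took the leftmost
entity, like the sketch below, would still go through the usual
set_next_entity()/update_curr() bookkeeping in its callers, so the only
cost should be fairness for that pick.

	/* Illustrative only: an "unfair" pick that skips the eligibility check. */
	static struct sched_entity *pick_leftmost_entity(struct cfs_rq *cfs_rq)
	{
		struct rb_node *left = rb_first_cached(&cfs_rq->tasks_timeline);

		return left ? __node_2_se(left) : NULL;
	}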

There are still plenty of potential reasons to want to provide better
fairness even between "throttled tasks still running in the kernel", but
I don't think that half-assing it would break EEVDF any more than it
would have broken CFS. It would, however, be significantly more annoying
to duplicate the tree nowadays, given the extra data required by
entity_eligible().
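
For reference (paraphrasing rather than quoting mainline, with a made-up
helper name, and glossing over the cfs_rq->curr adjustment the real
entity_eligible()/vruntime_eligible() does): eligibility is a comparison
against cfs_rq-wide weighted aggregates, so a duplicate tree would need
its own copies of those as well.

	static bool se_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		s64 avg = cfs_rq->avg_vruntime;		/* \Sum key_i * w_i */
		long load = cfs_rq->avg_load;		/* \Sum w_i */
		s64 key = (s64)(se->vruntime - cfs_rq->min_vruntime);

		/* Eligible when our key is no later than the weighted average. */
		return avg >= key * load;
	}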