Re: [PATCH] sched/uclamp: Let each sched_class handle uclamp

From: Dietmar Eggemann
Date: Mon Mar 10 2025 - 06:55:50 EST


On 10/03/2025 03:41, Xuewen Yan wrote:
> On Sat, Mar 8, 2025 at 2:32 AM Dietmar Eggemann
> <dietmar.eggemann@xxxxxxx> wrote:
>>
>> On 06/03/2025 13:01, Xuewen Yan wrote:
>>> On Thu, Mar 6, 2025 at 2:24 AM Dietmar Eggemann
>>> <dietmar.eggemann@xxxxxxx> wrote:
>>>>
>>>> On 27/02/2025 14:54, Hongyan Xia wrote:
>>>>
>>>> [...]
>>>>
>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>> index 857808da23d8..7e5a653811ad 100644
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -6941,8 +6941,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>>>> * Let's add the task's estimated utilization to the cfs_rq's
>>>>> * estimated utilization, before we update schedutil.
>>>>> */
>>>>> - if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & ENQUEUE_RESTORE))))
>>>>> + if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & ENQUEUE_RESTORE)))) {
>>>>> + uclamp_rq_inc(rq, p);
>>>>> util_est_enqueue(&rq->cfs, p);
>>>>> + }
>>>>
>>>> So you want to have p uclamp-enqueued so that its uclamp_min value
>>>> counts for the cpufreq_update_util()/cfs_rq_util_change() calls later in
>>>> enqueue_task_fair?
>>>>
>>>> if (p->in_iowait)
>>>> cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
>>>>
>>>> enqueue_entity() -> update_load_avg() -> cfs_rq_util_change() ->
>>>> cpufreq_update_util()
>>>>
>>>> But if you do this before requeue_delayed_entity() (1), you will not
>>>> uclamp-enqueue a p whose ->sched_delayed just got cleared in (1)?
>>>>
>>>
>>> Could we change to the following:
>>>
>>> when enqueue:
>>>
>>> - if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags
>>> & ENQUEUE_RESTORE))))
>>> + if (!(p->se.sched_delayed && !(flags & ENQUEUE_DELAYED)))
>>
>> Why do you want to check ENQUEUE_DELAYED as well here? Isn't
>> !p->se.sched_delayed implying !ENQUEUE_DELAYED?
>
> Indeed, (!(p->se.sched_delayed && !(flags & ENQUEUE_DELAYED))) is
> equivalent to (!(p->se.sched_delayed && (task_on_rq_migrating(p) ||
> (flags & ENQUEUE_RESTORE)))).
> I just think it is easier to read using the ENQUEUE_DELAYED flag,
> because we only allow enqueueing the uclamp and util_est values when
> waking up the delayed task.

OK, I see.

So that means we would not necessarily have to move the uclamp handling
into the sched classes; we could use flags in enqueue_task() as well:

-->8--

Subject: [PATCH] Align uclamp and util_est and call before freq update

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
---
kernel/sched/core.c | 14 ++++++++------
kernel/sched/fair.c | 4 ++--
2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b60916d77482..f833108a3b2d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1747,7 +1747,8 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
}
}

-static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
+static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p,
+ int flags)
{
enum uclamp_id clamp_id;

@@ -1763,7 +1764,7 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
if (unlikely(!p->sched_class->uclamp_enabled))
return;

- if (p->se.sched_delayed)
+ if (p->se.sched_delayed && !(flags & ENQUEUE_DELAYED))
return;

for_each_clamp_id(clamp_id)
@@ -2067,12 +2068,13 @@ void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
if (!(flags & ENQUEUE_NOCLOCK))
update_rq_clock(rq);

- p->sched_class->enqueue_task(rq, p, flags);
/*
- * Must be after ->enqueue_task() because ENQUEUE_DELAYED can clear
- * ->sched_delayed.
+ * Can be before ->enqueue_task() because uclamp considers the
+ * ENQUEUE_DELAYED task before its ->sched_delayed gets cleared
+ * in ->enqueue_task().
*/
- uclamp_rq_inc(rq, p);
+ uclamp_rq_inc(rq, p, flags);
+ p->sched_class->enqueue_task(rq, p, flags);

psi_enqueue(p, flags);

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 061a29e88ee2..e26d1dfea601 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6951,7 +6951,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
* Let's add the task's estimated utilization to the cfs_rq's
* estimated utilization, before we update schedutil.
*/
- if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & ENQUEUE_RESTORE))))
+ if (!p->se.sched_delayed || (flags & ENQUEUE_DELAYED))
util_est_enqueue(&rq->cfs, p);

if (flags & ENQUEUE_DELAYED) {
@@ -7193,7 +7193,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
*/
static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
- if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & DEQUEUE_SAVE))))
+ if (!p->se.sched_delayed)
util_est_dequeue(&rq->cfs, p);

util_est_update(&rq->cfs, p, flags & DEQUEUE_SLEEP);
--
2.34.1