Re: [PATCH v2] sched/pelt: Use rq_clock_task() for hw_pressure

From: Qais Yousef
Date: Sun Jul 28 2024 - 16:10:40 EST


On 07/25/24 23:08, Chen Yu wrote:
> commit 97450eb90965 ("sched/pelt: Remove shift of thermal clock")
> removed the decay_shift for hw_pressure. That commit kept the use of
> rq_clock_task() in sched_tick(), but replaced rq_clock_task() with
> rq_clock_pelt() in __update_blocked_others(). This can introduce an
> inconsistency. One possible scenario I can think of is in
> ___update_load_sum():
>
> u64 delta = now - sa->last_update_time
>
> 'now' could be calculated by rq_clock_pelt() from
> __update_blocked_others(), while last_update_time was calculated by
> rq_clock_task() previously from sched_tick(). Since the former
> usually lags behind the latter, this causes a very large 'delta' and
> brings unexpected behavior.
>
> Fixes: 97450eb90965 ("sched/pelt: Remove shift of thermal clock")
> Reviewed-by: Hongyan Xia <hongyan.xia2@xxxxxxx>
> Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
> ---
> v1->v2:
> Added Hongyan's Reviewed-by tag.
> Removed the Reported-by/Closes tags because they are not related
> to this fix. (Hongyan Xia)
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9057584ec06d..cfd4755954fd 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9362,7 +9362,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>
> decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
> update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> - update_hw_load_avg(now, rq, hw_pressure) |
> + update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |

NIT:

Wouldn't it be better to remove 'now' and call rq_clock_task() inside
update_hw_load_avg()? Adding a comment on why we use this clock rather than
rq_clock_pelt() would be helpful too, i.e. that hw_pressure doesn't care
about invariance.

ie:

update_hw_load_avg(struct rq *rq, unsigned long hw_pressure)
{
	/* hw_pressure doesn't need PELT invariance, use the task clock */
	u64 now = rq_clock_task(rq);
	...
}

LGTM anyway. I think this is called most of the time from idle when clock_pelt
is synced with clock_task. So the impact is low, I believe.

> update_irq_load_avg(rq, 0);
>
> if (others_have_blocked(rq))
> --
> 2.25.1
>