Re: [PATCH v2] sched/pelt: Keep UTIL_AVG_UNCHANGED flag in sync when calculating last_enqueued_diff

From: Vincent Donnefort
Date: Fri May 07 2021 - 08:35:37 EST


On Fri, May 07, 2021 at 07:20:31PM +0800, Xuewen Yan wrote:
> From: Xuewen Yan <xuewen.yan@xxxxxxxxxx>
>
> The meaning of last_enqueued_diff is: "diff = util_est.enqueued(p) - task_util(p)".
> When calculating last_enqueued_diff, we OR the UTIL_AVG_UNCHANGED flag,
> which is the LSB, into task_util, but not into util_est.enqueued. However,
> the flag in util_est.enqueued has already been cleared when the task util
> changed. As a result, we add +1 to the diff, which therefore slightly
> reduces UTIL_EST_MARGIN.

Unless I'm missing something, it actually depends on the situation, doesn't it?

if ue.enqueued > task_util(), we remove 1 from the diff => an effective
UTIL_EST_MARGIN + 1

if ue.enqueued < task_util(), we add 1 to the diff's magnitude => an
effective UTIL_EST_MARGIN - 1
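
To make that concrete, here is a stand-alone user-space sketch of the
arithmetic; within_margin() and UTIL_EST_MARGIN are copied from
kernel/sched/fair.c, while the sample utilization values are made up:

#include <stdio.h>

#define UTIL_AVG_UNCHANGED	0x1
#define UTIL_EST_MARGIN		(1024 / 100)	/* SCHED_CAPACITY_SCALE / 100 = 10 */

/* Same check as in kernel/sched/fair.c: true when -margin < value < margin */
static int within_margin(int value, int margin)
{
	return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
}

static void check(int enqueued, int task_util)
{
	int true_diff = enqueued - task_util;
	/* What util_est_update() computes without the fix */
	int diff = enqueued - (task_util | UTIL_AVG_UNCHANGED);

	printf("true diff %3d -> computed %3d, within_margin: %d\n",
	       true_diff, diff, within_margin(diff, UTIL_EST_MARGIN));
}

int main(void)
{
	/* ue.enqueued > task_util(): a true diff of UTIL_EST_MARGIN (10)
	 * is computed as 9 and now passes the filter */
	check(100, 90);

	/* ue.enqueued < task_util(): the computed diff grows by 1 in
	 * magnitude (-8 becomes -9) */
	check(90, 98);

	return 0;
}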

>
> Add the flag to util_est.enqueued to get an accurate computation.
>
> Fixes: b89997aa88f0b ("sched/pelt: Fix task util_est update filtering")
>
> Signed-off-by: Xuewen Yan <xuewen.yan@xxxxxxxxxx>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e5e457fa9dc8..94d77b4fa601 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
>  	if (ue.enqueued & UTIL_AVG_UNCHANGED)
>  		return;
>  
> -	last_enqueued_diff = ue.enqueued;
> +	last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
>  
>  	/*
>  	 * Reset EWMA on utilization increases, the moving average is used only
> --
> 2.29.0
>
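
FWIW, with the proposed change the flag is ORed into both operands, so for
the values used in the sketch above the bias disappears:
(100 | 0x1) - (90 | 0x1) = 101 - 91 = 10, i.e. the true diff again, which
within_margin() then correctly rejects.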