Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff

From: Vincent Donnefort
Date: Thu May 06 2021 - 08:28:38 EST


On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> From: Xuewen Yan <xuewen.yan@xxxxxxxxxx>
>
> The UTIL_AVG_UNCHANGED flag is cleared when the task util changes.
> And with the flag set, enqueued is equal to task_util, so it is better
> to add the UTIL_AVG_UNCHANGED flag to last_enqueued_diff as well.
>
> Fixes: b89997aa88f0b ("sched/pelt: Fix task util_est update filtering")
> Signed-off-by: Xuewen Yan <xuewen.yan@xxxxxxxxxx>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e5e457fa9dc8..94d77b4fa601 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> if (ue.enqueued & UTIL_AVG_UNCHANGED)
> return;
>
> - last_enqueued_diff = ue.enqueued;
> + last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
>
> /*
> * Reset EWMA on utilization increases, the moving average is used only
> --
> 2.29.0
>

Hi,

Indeed, for the diff we use the flag for the updated value but no flag for the
value before the update. However, last_enqueued_diff is only used for the margin
check, which is a heuristic and not an accurate value (~1%), and as we know
we already lose the LSB in util_est, so I'm not sure this is really necessary.

--
Vincent