Re: [PATCH v2] sched/pelt: Keep UTIL_AVG_UNCHANGED flag in sync when calculating last_enqueued_diff
From: Vincent Donnefort
Date: Fri May 07 2021 - 13:14:50 EST
[...]
> >
> > But we take the abs() of last_enqueued_diff.
> >
> > If we consider the following examples:
> >
> > enqueued_old = 5, enqueued_new = 9
> > diff = 5 - (9 + 1) => 5 - 10 => -5
> >
> > enqueued_old = 9, enqueued_new = 5
> > diff = 9 - (5 + 1) => 9 - 6 => 3
> >
> > In both cases the delta is supposed to be 4. But in the first case we
> > end up with 5, and in the second with 3. That's why I said the effect
> > on the diff depends on which is greater, ue.enqueued or task_util().
>
> Ah, OK, due to the abs() in within_margin(). But util's LSB is lost to
> the flag anyway. Hence, in the example, enqueued_new = 9 should really
> be (task_util() = 8) | UTIL_AVG_UNCHANGED.
Yeah, I should have used an even number for the demonstration :-)
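For the record, here's a tiny standalone model of the problem (my own
sketch, not kernel code), this time with an even task_util() so the flag
bit actually changes the value:

	#include <stdio.h>
	#include <stdlib.h>

	#define UTIL_AVG_UNCHANGED 0x1	/* LSB flag, as in pelt.h today */

	/*
	 * Models the subtraction done in util_est_update(): the old
	 * enqueued value has the flag cleared (util_avg changed since
	 * enqueue), while the new one gets it OR'ed back in.
	 */
	static int last_enqueued_diff(unsigned int enqueued_old,
				      unsigned int task_util)
	{
		return (int)enqueued_old -
		       (int)(task_util | UTIL_AVG_UNCHANGED);
	}

	int main(void)
	{
		/* the true delta is 4 in both directions ... */
		printf("%d\n", abs(last_enqueued_diff(4, 8))); /* 5 */
		printf("%d\n", abs(last_enqueued_diff(8, 4))); /* 3 */
		return 0;
	}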
>
> OTOH, implementing UTIL_AVG_UNCHANGED as the LSB and making it visible
> in the util_est 'API' has other issues too. The condition
> `!task_util_est(p)` can never be true in find_energy_efficient_cpu()
> because of UTIL_AVG_UNCHANGED.
>
> So why not use `UTIL_AVG_UNCHANGED = 0x80000000` and just keep its use
> internal (between cfs_se_util_change() and util_est_update()), i.e. not
> exporting it (via _task_util_est()) and not eclipsing util_est's LSB
> value?
As this would fix two issues at once, it's probably preferable.
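To illustrate the dead condition for anyone following along, here is the
current code (abridged sketch): the OR in _task_util_est() forces its
return value to be >= 1, so the early bail-out in
find_energy_efficient_cpu() can never be taken:

	static inline unsigned long _task_util_est(struct task_struct *p)
	{
		struct util_est ue = READ_ONCE(p->se.avg.util_est);

		/* the OR makes the result >= 1, always */
		return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED);
	}

	/* in find_energy_efficient_cpu(): */
	if (!task_util_est(p))	/* always false today */
		goto unlock;

With the flag in the MSB and kept internal, _task_util_est() can return
the plain value and that check becomes meaningful again.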
[...]
> kernel/sched/fair.c | 5 +++--
> kernel/sched/pelt.h | 11 ++++++-----
> 2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1d75af1ecfb4..dd30e362c3cc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3902,7 +3902,7 @@ static inline unsigned long _task_util_est(struct task_struct *p)
> {
> struct util_est ue = READ_ONCE(p->se.avg.util_est);
>
> - return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED);
> + return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));
> }
>
> static inline unsigned long task_util_est(struct task_struct *p)
> @@ -4002,7 +4002,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> * Reset EWMA on utilization increases, the moving average is used only
> * to smooth utilization decreases.
> */
This also needs updating so that the old value feeding last_enqueued_diff
is flag-free (see the sketch after this hunk):

last_enqueued_diff = ue.enqueued & ~UTIL_AVG_UNCHANGED;
> - ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
> + ue.enqueued = task_util(p);
> if (sched_feat(UTIL_EST_FASTUP)) {
> if (ue.ewma < ue.enqueued) {
> ue.ewma = ue.enqueued;
> @@ -4051,6 +4051,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> ue.ewma += last_ewma_diff;
> ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
> done:
> + ue.enqueued |= UTIL_AVG_UNCHANGED;
> WRITE_ONCE(p->se.avg.util_est, ue);
>
> trace_sched_util_est_se_tp(&p->se);
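FWIW, a quick userspace model (my sketch, abridged) of the diff
computation with that extra masking folded in, showing the diff becomes
symmetric again:

	#define UTIL_AVG_UNCHANGED 0x1

	/* models util_est_update() with the masking added */
	static int fixed_diff(unsigned int enqueued_old,
			      unsigned int task_util)
	{
		int last_enqueued_diff = enqueued_old & ~UTIL_AVG_UNCHANGED;

		/* the new value is flag-free now ... */
		last_enqueued_diff -= task_util;

		/* ... since the flag is OR'ed back in only at 'done:' */
		return last_enqueued_diff;
	}

	/* fixed_diff(5, 8) == -4 and fixed_diff(9, 4) == 4 */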
> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> index 1462846d244e..476faf61f14a 100644