Re: [RFC PATCH v3 6/6] Propagate negative bias

From: Dietmar Eggemann
Date: Sun May 26 2024 - 18:53:26 EST


On 07/05/2024 14:50, Hongyan Xia wrote:
> Negative bias is interesting, because dequeuing such a task will
> actually increase utilization.
>
> Solve by applying PELT decay to negative biases as well. This in fact
> can be implemented easily with some math tricks.
>
> Signed-off-by: Hongyan Xia <hongyan.xia2@xxxxxxx>
> ---
> kernel/sched/fair.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 44 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0177d7e8f364..7259a61e9ae5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4863,6 +4863,45 @@ static inline unsigned long task_util_est_uclamp(struct task_struct *p)
> {
> return max(task_util_uclamp(p), _task_util_est_uclamp(p));
> }
> +
> +/*
> + * Negative biases are tricky. If we remove them right away then dequeuing a
> + * uclamp_max task has the interesting effect that dequeuing results in a higher
> + * rq utilization. Solve this by applying PELT decay to the bias itself.
> + *
> + * Keeping track of a PELT-decayed negative bias is extra overhead. However, we
> + * observe this interesting math property, where y is the decay factor and p is
> + * the number of periods elapsed:
> + *
> + * util_new = util_old * y^p - neg_bias * y^p
> + * = (util_old - neg_bias) * y^p
> + *
> + * Therefore, we simply subtract the negative bias from util_avg the moment we
> + * dequeue, then the PELT signal itself is the total of util_avg and the decayed
> + * negative bias, and we no longer need to track the decayed bias separately.
> + */
> +static void propagate_negative_bias(struct task_struct *p)
> +{
> + if (task_util_bias(p) < 0 && !task_on_rq_migrating(p)) {
> + unsigned long neg_bias = -task_util_bias(p);
> + struct sched_entity *se = &p->se;
> + struct cfs_rq *cfs_rq;
> +
> + p->se.avg.util_avg_bias = 0;
> +
> + for_each_sched_entity(se) {
> + u32 divider, neg_sum;
> +
> + cfs_rq = cfs_rq_of(se);
> + divider = get_pelt_divider(&cfs_rq->avg);
> + neg_sum = neg_bias * divider;
> + sub_positive(&se->avg.util_avg, neg_bias);
> + sub_positive(&se->avg.util_sum, neg_sum);
> + sub_positive(&cfs_rq->avg.util_avg, neg_bias);
> + sub_positive(&cfs_rq->avg.util_sum, neg_sum);
> + }
> + }
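
Just to make sure I follow the identity in the comment, here is a
throwaway userspace sketch (doubles and invented numbers rather than the
kernel's fixed-point util_sum/util_avg): folding the bias into util_avg
once at dequeue gives the same decayed signal as tracking a separately
decayed negative bias.

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* PELT half-life: 32 periods */
	double util_old = 600.0;		/* util_avg at dequeue (made up) */
	double neg_bias = 150.0;		/* uclamp_max-induced bias (made up) */
	int p;

	for (p = 0; p <= 64; p += 16) {
		/* track util_avg and the negative bias separately, decay both */
		double separate = util_old * pow(y, p) - neg_bias * pow(y, p);
		/* subtract the bias once at dequeue, then decay the result */
		double folded = (util_old - neg_bias) * pow(y, p);

		printf("p=%2d separate=%7.2f folded=%7.2f\n", p, separate, folded);
	}
	return 0;
}

Both columns print identically, which is the property the patch relies
on.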

So you remove the task bias 'clamp(util_avg, uclamp_min, uclamp_max) -
util_avg' from the se and cfs_rq util_avg in case it's negative, i.e.
when the task is hard-capped by uclamp_max.

Looks like this is the old issue that PELT has a blocked contribution
whereas uclamp does not (it only applies while the task is runnable).

What's the rationale behind this? Is it because the task didn't get the
runtime it needed, so we can remove this (artificially accrued) util_avg?

Normally we wouldn't remove blocked util_avg but rather let it decay:
periodically for cfs_rqs and at wakeup for tasks.
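
For illustration only (again userspace doubles and invented numbers, not
the kernel's fixed-point PELT), the difference in question: normal PELT
keeps the full blocked util_avg and lets it decay, whereas the patch
subtracts the negative bias at dequeue before the decay starts.

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* PELT decay per 1024us period */
	double util_avg = 700.0;		/* task util_avg at dequeue (made up) */
	double neg_bias = 188.0;		/* e.g. util_avg - uclamp_max of 512 */
	int p;

	for (p = 0; p <= 64; p += 16) {
		/* normal PELT: keep the blocked contribution and let it decay */
		double blocked = util_avg * pow(y, p);
		/* this patch: remove the negative bias at dequeue, then decay */
		double removed = (util_avg - neg_bias) * pow(y, p);

		printf("p=%2d blocked=%6.1f bias_removed=%6.1f\n", p, blocked, removed);
	}
	return 0;
}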

[...]