Re: [PATCH v2 1/2] sched/fair: Add EAS checks before updating overutilized

From: Shrikanth Hegde
Date: Wed Feb 28 2024 - 23:30:56 EST

On 2/29/24 5:04 AM, Dietmar Eggemann wrote:
> On 28/02/2024 18:24, Shrikanth Hegde wrote:
>

Thank you, Dietmar, for taking a look.

> [...]
>
>> But currently we do some extra computation and then don't use it in the
>> non-EAS case in update_sg_lb_stats.
>>
>> Would something like this make sense?
>> @@ -9925,7 +9925,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>>  		if (nr_running > 1)
>>  			*sg_status |= SG_OVERLOAD;
>>
>> -		if (cpu_overutilized(i))
>> +		if (sched_energy_enabled() && cpu_overutilized(i))
>>  			*sg_status |= SG_OVERUTILIZED;
>
> Yes, we could also disable the setting of OU in load_balance in the
> !EAS case.
>
> [...]

Ok, I will add this change. I don't see any other place where we need to do an
EAS check w.r.t. overutilized. This should cover all cases then.
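
To spell out the extra computation being skipped: cpu_overutilized() goes
through uclamp and util_fits_cpu() for every CPU visited in
update_sg_lb_stats(). Quoting the current helper from memory (so please
double-check against the tree):

static inline bool cpu_overutilized(int cpu)
{
	unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
	unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);

	/* Return true only if the utilization doesn't fit CPU's capacity */
	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
}

With sched_energy_enabled() checked first, the && short-circuits and none
of this runs on !EAS systems.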

>
>>> NIT:
>>> When called from check_update_overutilized_status(),
>>> sched_energy_enabled() will be checked twice.
>> Yes.
>> But I think that's okay, since it is at most a static branch check.
>> This way the code stays simpler.
>
> You could keep the sched_energy_enabled() check outside of the new
> set_overutilized_status() to avoid this:
>
> -->8--

Ok. We can do this as well. I will incorporate this and send out v3 soon.
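
FWIW, the reason I called the double check cheap: sched_energy_enabled()
is only a static key test. A sketch of the relevant bits of
kernel/sched/sched.h, paraphrased from memory rather than quoted:

#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
DECLARE_STATIC_KEY_FALSE(sched_energy_present);

static inline bool sched_energy_enabled(void)
{
	/* Patched to a nop/jmp at runtime; no load or compare on the fast path */
	return static_branch_unlikely(&sched_energy_present);
}
#else
static inline bool sched_energy_enabled(void)
{
	return false;
}
#endif

In the #else case the compiler can drop the guarded code entirely, so the
!EAS build pays nothing either way. Hoisting it as you suggest still reads
better.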


>
> ---
> kernel/sched/fair.c | 32 ++++++++++++++++++--------------
> 1 file changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 32bc98d9123d..c82164bf45f3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6676,12 +6676,19 @@ static inline bool cpu_overutilized(int cpu)
>  	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
>  }
>
> +static inline void set_overutilized_status(struct rq *rq, unsigned int val)
> +{
> +	WRITE_ONCE(rq->rd->overutilized, val);
> +	trace_sched_overutilized_tp(rq->rd, val);
> +}
> +
>  static inline void update_overutilized_status(struct rq *rq)
>  {
> -	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu)) {
> -		WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
> -		trace_sched_overutilized_tp(rq->rd, SG_OVERUTILIZED);
> -	}
> +	if (!sched_energy_enabled())
> +		return;
> +
> +	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
> +		set_overutilized_status(rq, SG_OVERUTILIZED);
>  }
>  #else
>  static inline void update_overutilized_status(struct rq *rq) { }
> @@ -10755,19 +10762,16 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
>  	env->fbq_type = fbq_classify_group(&sds->busiest_stat);
>
>  	if (!env->sd->parent) {
> -		struct root_domain *rd = env->dst_rq->rd;
> -
>  		/* update overload indicator if we are at root domain */
> -		WRITE_ONCE(rd->overload, sg_status & SG_OVERLOAD);
> +		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
>
>  		/* Update over-utilization (tipping point, U >= 0) indicator */
> -		WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
> -		trace_sched_overutilized_tp(rd, sg_status & SG_OVERUTILIZED);
> -	} else if (sg_status & SG_OVERUTILIZED) {
> -		struct root_domain *rd = env->dst_rq->rd;
> -
> -		WRITE_ONCE(rd->overutilized, SG_OVERUTILIZED);
> -		trace_sched_overutilized_tp(rd, SG_OVERUTILIZED);
> +		if (sched_energy_enabled()) {
> +			set_overutilized_status(env->dst_rq,
> +						sg_status & SG_OVERUTILIZED);
> +		}
> +	} else if (sched_energy_enabled() && sg_status & SG_OVERUTILIZED) {
> +		set_overutilized_status(env->dst_rq, SG_OVERUTILIZED);
>  	}
>
>  	update_idle_cpu_scan(env, sum_util);
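
For completeness, the call sites that benefit from the early
!sched_energy_enabled() return are hot paths. Roughly, from memory of the
current tree (worth re-checking, and the helper becomes
check_update_overutilized_status() in this series):

	/* enqueue_task_fair(), at the end of the enqueue path: */
	if (!task_new)
		update_overutilized_status(rq);

	/* task_tick_fair(), on every scheduler tick: */
	update_overutilized_status(task_rq(curr));

On !EAS systems both now reduce to a static branch fall-through, which is
the point of the series.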