Re: [PATCH] sched/fair: Add SMT4 group_smt_balance handling

From: Peter Zijlstra
Date: Tue Sep 05 2023 - 12:51:04 EST


On Thu, Jul 27, 2023 at 06:32:44AM -0700, Tim Chen wrote:

> kernel/sched/fair.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a87988327f88..566686c5f2bd 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9563,7 +9563,7 @@ static inline long sibling_imbalance(struct lb_env *env,
> imbalance /= ncores_local + ncores_busiest;
>
> /* Take advantage of resource in an empty sched group */
> - if (imbalance == 0 && local->sum_nr_running == 0 &&
> + if (imbalance <= 1 && local->sum_nr_running == 0 &&
> busiest->sum_nr_running > 1)
> imbalance = 2;
>
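The first hunk I can follow; IIUC it catches a rounding artifact of
the normalization above. With made-up numbers: ncores_local = 1,
ncores_busiest = 4, the local group fully idle and two tasks in the
busiest group gives a raw imbalance of 1*2 - 4*0 = 2, which the
normalization turns into (2*2 + 1 + 4) / (1 + 4) = 1 with integer
division. The old == 0 test misses that, so the empty group never
gets bumped to an imbalance of 2 and never takes advantage of its
idle capacity.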
> @@ -9751,6 +9751,20 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> break;
>
> case group_smt_balance:
> + /* no idle cpus on both groups handled by group_fully_busy below */
> + if (sgs->idle_cpus != 0 || busiest->idle_cpus != 0) {
> + if (sgs->idle_cpus > busiest->idle_cpus)
> + return false;
> + if (sgs->idle_cpus < busiest->idle_cpus)
> + return true;
> + if (sgs->sum_nr_running <= busiest->sum_nr_running)
> + return false;
> + else
> + return true;
> + }
> + goto fully_busy;
> + break;

This is really daft; why can't this simply be a fallthrough? At the
very least that break has to go: it sits right after the goto and is
unreachable.
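
IOW, something like so (completely untested):

	case group_smt_balance:
		/*
		 * Prefer the group with fewer idle cpus; on a tie, the
		 * one with more running tasks. If neither group has
		 * idle cpus, fall through and compare avg_load like
		 * group_fully_busy.
		 */
		if (sgs->idle_cpus || busiest->idle_cpus) {
			if (sgs->idle_cpus > busiest->idle_cpus)
				return false;
			if (sgs->idle_cpus < busiest->idle_cpus)
				return true;
			return sgs->sum_nr_running > busiest->sum_nr_running;
		}
		fallthrough;

	case group_fully_busy:
		/* ... existing comment, sans the fully_busy: label ... */
		if (sgs->avg_load < busiest->avg_load)
			return false;
		break;

No goto, no label, and the return logic collapses into one line.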

> +
> case group_fully_busy:
> /*
> * Select the fully busy group with highest avg_load. In
> @@ -9763,7 +9777,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> * select the 1st one, except if @sg is composed of SMT
> * siblings.
> */
> -
> +fully_busy:
> if (sgs->avg_load < busiest->avg_load)
> return false;
>
> --
> 2.32.0