Re: [PATCH v3 04/10] sched/fair: Let low-priority cores help high-priority busy SMT cores

From: Ricardo Neri
Date: Tue Mar 14 2023 - 19:44:27 EST


On Thu, Mar 09, 2023 at 04:51:35PM -0800, Tim Chen wrote:
> On Mon, 2023-02-06 at 20:58 -0800, Ricardo Neri wrote:
> > Using asym_packing priorities within an SMT core is straightforward.
> > Just follow the priorities that hardware indicates.
> >
> > When balancing load from an SMT core, also consider the idle state of
> > its siblings. Priorities do not reflect that an SMT core divides its
> > throughput among all its busy siblings. They only make sense when
> > exactly one sibling is busy.
> >
> > Indicate that active balance is needed if the destination CPU has
> > lower priority than the source CPU but the latter has busy SMT
> > siblings.
> >
> > Make find_busiest_queue() not skip higher-priority SMT cores with
> > more than one busy sibling.
> >
> > Cc: Ben Segall <bsegall@xxxxxxxxxx>
> > Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
> > Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> > Cc: Len Brown <len.brown@xxxxxxxxx>
> > Cc: Mel Gorman <mgorman@xxxxxxx>
> > Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> > Cc: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>
> > Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> > Cc: Tim C. Chen <tim.c.chen@xxxxxxxxx>
> > Cc: Valentin Schneider <vschneid@xxxxxxxxxx>
> > Cc: x86@xxxxxxxxxx
> > Cc: linux-kernel@xxxxxxxxxxxxxxx
> > Suggested-by: Valentin Schneider <vschneid@xxxxxxxxxx>
> > Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> > ---
> > Changes since v2:
> >  * Introduced this patch.
> >
> > Changes since v1:
> >  * N/A
> > ---
> >  kernel/sched/fair.c | 31 ++++++++++++++++++++++++++-----
> >  1 file changed, 26 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 80c86462c6f6..c9d0ddfd11f2 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10436,11 +10436,20 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> >                     nr_running == 1)
> >                         continue;
> >  
> > -               /* Make sure we only pull tasks from a CPU of lower priority */
> > +               /*
> > +                * Make sure we only pull tasks from a CPU of lower priority
> > +                * when balancing between SMT siblings.
> > +                *
> > +                * If balancing between cores, let lower priority CPUs help
> > +                * SMT cores with more than one busy sibling.
> > +                */
> >                 if ((env->sd->flags & SD_ASYM_PACKING) &&
> >                     sched_asym_prefer(i, env->dst_cpu) &&
> > -                   nr_running == 1)
> > -                       continue;
> > +                   nr_running == 1) {
> > +                       if (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> > +                           (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i)))
> > +                               continue;
> > +               }
> >  
> >                 switch (env->migration_type) {
> >                 case migrate_load:
> > @@ -10530,8 +10539,20 @@ asym_active_balance(struct lb_env *env)
> >          * lower priority CPUs in order to pack all tasks in the
> >          * highest priority CPUs.
> >          */
> > -       return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) &&
> > -              sched_asym_prefer(env->dst_cpu, env->src_cpu);
> > +       if (env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING)) {
> > +               /* Always obey priorities between SMT siblings. */
> > +               if (env->sd->flags & SD_SHARE_CPUCAPACITY)
> > +                       return sched_asym_prefer(env->dst_cpu, env->src_cpu);
> > +
> > +               /*
> > +                * A lower priority CPU can help an SMT core with more than one
> > +                * busy sibling.
> > +                */
> > +               return sched_asym_prefer(env->dst_cpu, env->src_cpu) ||
> > +                      !is_core_idle(env->src_cpu);
> > +       }
>
> Suppose we have the Atom cores in a sched group (e.g. a cluster),
> this will pull the tasks from those cores to an SMT thread even if
> its sibling thread is busy. Suggest this change:
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index da1afa99cd55..b671cb0d7ab3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10473,9 +10473,11 @@ asym_active_balance(struct lb_env *env)
>                 /*
>                  * A lower priority CPU can help an SMT core with more than one
>                  * busy sibling.
> +                * Pull only if no SMT sibling busy.
>                  */
> -               return sched_asym_prefer(env->dst_cpu, env->src_cpu) ||
> -                      !is_core_idle(env->src_cpu);
> +               if (is_core_idle(env->dst_cpu))
> +                       return sched_asym_prefer(env->dst_cpu, env->src_cpu) ||
> +                              !is_core_idle(env->src_cpu);

Thank you, Tim! Patch 3 does this check for asym_packing, but we could
land here from other types of idle load balancing.

I will integrate this change into the series.

Thanks and BR,
Ricardo