Re: [PATCH 2/4] sched/fair: Introduce arch_sched_asym_prefer_early()

From: Vincent Guittot
Date: Tue Apr 06 2021 - 10:31:50 EST


On Tue, 6 Apr 2021 at 06:11, Ricardo Neri
<ricardo.neri-calderon@xxxxxxxxxxxxxxx> wrote:
>
> Introduce arch_sched_asym_prefer_early() so that architectures with SMT
> can delay the decision to label a candidate busiest group as
> group_asym_packing.
>
> When using asymmetric packing, high priority idle CPUs pull tasks from
> scheduling groups with low priority CPUs. The decision on using asymmetric
> packing for load balancing is done after collecting the statistics of a
> candidate busiest group. However, this decision needs to consider the
> state of SMT siblings of dst_cpu.
>
> Cc: Aubrey Li <aubrey.li@xxxxxxxxx>
> Cc: Ben Segall <bsegall@xxxxxxxxxx>
> Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
> Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> Cc: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Quentin Perret <qperret@xxxxxxxxxx>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Reviewed-by: Len Brown <len.brown@xxxxxxxxx>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> ---
> include/linux/sched/topology.h |  1 +
> kernel/sched/fair.c            | 11 ++++++++++-
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 8f0f778b7c91..663b98959305 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -57,6 +57,7 @@ static inline int cpu_numa_flags(void)
>  #endif
>
>  extern int arch_asym_cpu_priority(int cpu);
> +extern bool arch_sched_asym_prefer_early(int a, int b);
>
>  struct sched_domain_attr {
>  	int relax_domain_level;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4ef3fa0d5e8d..e74da853b046 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -106,6 +106,15 @@ int __weak arch_asym_cpu_priority(int cpu)
>  	return -cpu;
>  }
>
> +/*
> + * For asym packing, early check if CPUs with higher priority should be
> + * preferred. On some architectures, more data is needed to make a decision.
> + */
> +bool __weak arch_sched_asym_prefer_early(int a, int b)
> +{
> +	return sched_asym_prefer(a, b);
> +}
> +
>  /*
>   * The margin used when comparing utilization with CPU capacity.
>   *
> @@ -8458,7 +8467,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
>  	    env->idle != CPU_NOT_IDLE &&
>  	    sgs->sum_h_nr_running &&
> -	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> +	    arch_sched_asym_prefer_early(env->dst_cpu, group->asym_prefer_cpu)) {

If ITMT makes arch_sched_asym_prefer_early() return true, every group will be
marked group_asym_packing unconditionally, which is wrong. The flag must be
set only when we actually want asym packing migration.
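
To make the concern concrete, here is a minimal sketch (the override below is
hypothetical, assumed purely for illustration; it is not the x86/ITMT code
from this series):

/* Hypothetical arch override that defers the real priority comparison. */
bool arch_sched_asym_prefer_early(int a, int b)
{
	return true;	/* "decide later", ignoring CPU priorities here */
}

/*
 * With such an override, the test in update_sg_lb_stats() degenerates to:
 *
 *	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
 *	    env->idle != CPU_NOT_IDLE &&
 *	    sgs->sum_h_nr_running)
 *		sgs->group_asym_packing = 1;
 *
 * so every non-local group with running tasks gets tagged group_asym_packing
 * whenever dst_cpu is idle, regardless of CPU priorities.
 */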

>  		sgs->group_asym_packing = 1;
>  	}
>
> --
> 2.17.1
>