Re: [PATCH v5 5/6] sched/fair: Carve out logic to mark a group for asymmetric packing

From: Vincent Guittot
Date: Fri Sep 17 2021 - 11:27:30 EST


On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calderon@xxxxxxxxxxxxxxx> wrote:
>
> Create a separate function, sched_asym(). A subsequent changeset will
> introduce logic to deal with SMT in conjunction with asymmetric
> packing. Such logic will need the statistics of the scheduling
> group provided as an argument. Update them before calling sched_asym().
>
> Cc: Aubrey Li <aubrey.li@xxxxxxxxx>
> Cc: Ben Segall <bsegall@xxxxxxxxxx>
> Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
> Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Quentin Perret <qperret@xxxxxxxxxx>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> Reviewed-by: Len Brown <len.brown@xxxxxxxxx>
> Co-developed-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>

Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

> ---
> Changes since v4:
> * None
>
> Changes since v3:
> * Remove a redundant check for the local group in sched_asym().
> (Dietmar)
> * Reworded commit message for clarity. (Len)
>
> Changes since v2:
> * Introduced this patch.
>
> Changes since v1:
> * N/A
> ---
> kernel/sched/fair.c | 20 +++++++++++++-------
> 1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c5851260b4d8..26db017c14a3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8597,6 +8597,13 @@ group_type group_classify(unsigned int imbalance_pct,
>  	return group_has_spare;
>  }
>  
> +static inline bool
> +sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
> +	   struct sched_group *group)
> +{
> +	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
> +}
> +
>  /**
>   * update_sg_lb_stats - Update sched_group's statistics for load balancing.
>   * @env: The load balancing environment.
> @@ -8657,18 +8664,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  		}
>  	}
>  
> +	sgs->group_capacity = group->sgc->capacity;
> +
> +	sgs->group_weight = group->group_weight;
> +
>  	/* Check if dst CPU is idle and preferred to this group */
>  	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
> -	    env->idle != CPU_NOT_IDLE &&
> -	    sgs->sum_h_nr_running &&
> -	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> +	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
> +	    sched_asym(env, sds, sgs, group)) {
>  		sgs->group_asym_packing = 1;
>  	}
>  
> -	sgs->group_capacity = group->sgc->capacity;
> -
> -	sgs->group_weight = group->group_weight;
> -
>  	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
>  
>  	/* Computing avg_load makes sense only when group is overloaded */
> --
> 2.17.1
>
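
[Editor's note] The essence of the patch is an ordering change: sgs->group_capacity and
sgs->group_weight are now filled in before the asymmetric-packing check, so the new
sched_asym() helper (and the SMT-aware logic the commit message says a later patch will
add) can rely on a populated sg_lb_stats. Below is a minimal standalone sketch of that
flow. The struct definitions, the sched_asym_prefer() rule, and the "later SMT use"
comment are simplified stand-ins and assumptions for illustration only, not the actual
kernel implementation.

/*
 * Standalone sketch of the refactored flow in update_sg_lb_stats().
 * Types and helpers are trimmed-down stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct sg_lb_stats {			/* trimmed-down group statistics */
	unsigned int sum_h_nr_running;
	unsigned long group_capacity;
	unsigned int group_weight;
	unsigned int group_asym_packing;
};

struct sched_group {			/* trimmed-down scheduling group */
	int asym_prefer_cpu;
	unsigned long capacity;
	unsigned int weight;
};

struct lb_env {				/* trimmed-down load-balance environment */
	int dst_cpu;
	bool dst_idle;
};

/* Lower CPU number preferred; a simplified stand-in for the priority rule. */
static bool sched_asym_prefer(int a, int b)
{
	return a < b;
}

/*
 * Counterpart of the new helper: today it only compares CPU priorities,
 * but because it receives the whole *sgs it could later also inspect,
 * e.g., sgs->group_weight for SMT groups (hypothetical future use).
 */
static bool sched_asym(struct lb_env *env, struct sg_lb_stats *sgs,
		       struct sched_group *group)
{
	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
}

static void update_sg_lb_stats(struct lb_env *env, struct sched_group *group,
			       struct sg_lb_stats *sgs)
{
	sgs->sum_h_nr_running = 2;	/* pretend two tasks are queued */

	/* Stats are updated *before* the asymmetric-packing check ... */
	sgs->group_capacity = group->capacity;
	sgs->group_weight = group->weight;

	/* ... so sched_asym() can safely consume them. */
	if (env->dst_idle && sgs->sum_h_nr_running &&
	    sched_asym(env, sgs, group))
		sgs->group_asym_packing = 1;
}

int main(void)
{
	struct sched_group group = { .asym_prefer_cpu = 4, .capacity = 1024, .weight = 2 };
	struct lb_env env = { .dst_cpu = 0, .dst_idle = true };
	struct sg_lb_stats sgs = { 0 };

	update_sg_lb_stats(&env, &group, &sgs);
	printf("group_asym_packing = %u\n", sgs.group_asym_packing);
	return 0;
}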