Re: [PATCH 3/3] sched/fair: reduce cases for active balance

From: Vincent Guittot
Date: Wed Jan 06 2021 - 10:42:14 EST


On Wed, 6 Jan 2021 at 16:13, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Wed, Jan 06, 2021 at 02:34:19PM +0100, Vincent Guittot wrote:
> > Active balance is triggered for a number of voluntary case like misfit or
> cases
> > pinned tasks cases but also after that a number of load balance failed to
> ^attempts
> > migrate a task. Remove the active load balance case for overloaded group
> ^an ?
> > as an overloaded state means that there is at least one waiting tasks. The
> task
> > threshold on the upper limit of the task's load will decrease with the
> > number of failed LB until the task has migrated.
>
> And I'm not sure I follow that last part, irrespective of spelling nits,
> help?

Argh, coming back to work is difficult for me.

Let me try again:

Active balance is triggered for a number of voluntary cases like
misfit or pinned tasks, but also after a number of load balance
attempts have failed to migrate a task. There is no need to use
active load balance when the group is overloaded, because an
overloaded state means that there is at least one waiting task.
Nevertheless, the waiting task is not selected and detached until
the threshold becomes higher than its load. This threshold increases
with the number of failed load balance attempts (see the condition
if ((load >> env->sd->nr_balance_failed) > env->imbalance) in
detach_tasks()), so the waiting task will end up being selected
after a number of attempts.

>
> > Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > ---
> > kernel/sched/fair.c | 43 +++++++++++++++++++++----------------------
> > 1 file changed, 21 insertions(+), 22 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 69a455113b10..ee87fd6f7359 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9499,13 +9499,30 @@ asym_active_balance(struct lb_env *env)
> > }
> >
> > static inline bool
> > -voluntary_active_balance(struct lb_env *env)
> > +imbalanced_active_balance(struct lb_env *env)
> > +{
> > + struct sched_domain *sd = env->sd;
> > +
> > + /* The imbalanced case includes the case of pinned tasks preventing a fair
> > + * distribution of the load on the system but also the even distribution of the
> > + * threads on a system with spare capacity
> > + */
>
> comment style fail
>
> > + if ((env->migration_type == migrate_task) &&
> > + (sd->nr_balance_failed > sd->cache_nice_tries+2))
>
> indent fail; try: set cino=(0:0
>
> > + return 1;
> > +
> > + return 0;
> > +}
> > +
> > +static int need_active_balance(struct lb_env *env)
> > {
> > struct sched_domain *sd = env->sd;
> >
> > if (asym_active_balance(env))
> > return 1;
> >
> > + if (imbalanced_active_balance(env))
> > + return 1;
>
> + whitespace
>
> > /*
> > * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
> > * It's worth migrating the task if the src_cpu's capacity is reduced