Re: [PATCH v2 4/4] sched/fair: Prevent active LB from preempting higher sched classes
From: Vincent Guittot
Date: Tue Aug 27 2019 - 08:29:01 EST
On Thu, 15 Aug 2019 at 16:52, Valentin Schneider
<valentin.schneider@xxxxxxx> wrote:
>
> The CFS load balancer can cause the cpu_stopper to run a function to
> try and steal a remote rq's running task. However, it so happens
> that while only CFS tasks will ever be migrated by that function, we
> can end up preempting higher sched class tasks, since it is executed
> by the cpu_stopper.
>
> This can potentially occur whenever a rq's running task is > CFS but
> the rq has runnable CFS tasks.
>
> Check the sched class of the remote rq's running task after we've
> grabbed its lock. If it's CFS, carry on, otherwise run
> detach_one_task() locally since we don't need the cpu_stopper (that
> !CFS task is doing the exact same thing).
AFAICT, this doesn't prevent preempting a !CFS task, it only reduces the
window: as soon as you unlock, a !CFS task can preempt the CFS task before
you start the stop thread.

Testing busiest->cfs.h_nr_running < 1 and/or
busiest->curr->sched_class != &fair_sched_class
in need_active_balance() would do almost the same and is much simpler
than this patchset IMO.
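To illustrate, here is a toy, userspace model of the check I have in mind (not actual kernel code; the struct definitions are stubs and should_active_balance() is a hypothetical name, the real logic would live in need_active_balance()):

```c
#include <stdbool.h>

/* Minimal stubs standing in for the kernel's scheduler types. */
struct sched_class { int dummy; };
static const struct sched_class fair_sched_class;

struct cfs_rq { unsigned int h_nr_running; };
struct task_struct { const struct sched_class *sched_class; };
struct rq {
	struct cfs_rq cfs;
	struct task_struct *curr;
};

/*
 * Bail out of active balancing when the busiest rq has no runnable
 * CFS tasks, or when its current task is not CFS (so kicking the
 * cpu_stopper would preempt a higher sched class for nothing).
 */
static bool should_active_balance(struct rq *busiest)
{
	if (busiest->cfs.h_nr_running < 1)
		return false;

	if (busiest->curr->sched_class != &fair_sched_class)
		return false;

	return true;
}
```

This keeps the decision inside the existing need_active_balance() path instead of adding a new detach/attach branch around the rq lock.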
>
> Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
> ---
> kernel/sched/fair.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8f5f6cad5008..bf4f7f43252f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8759,6 +8759,7 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
> enum alb_status status = cancelled;
> struct sched_domain *sd = env->sd;
> struct rq *busiest = env->src_rq;
> + struct task_struct *p = NULL;
> unsigned long flags;
>
> schedstat_inc(sd->lb_failed[env->idle]);
> @@ -8780,6 +8781,16 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
> if (busiest->cfs.h_nr_running < 1)
> goto unlock;
>
> + /*
> + * If busiest->curr isn't CFS, then there's no point in running the
> + * cpu_stopper. We can try to nab one CFS task though, since they're
> + * all ready to be plucked.
> + */
> + if (busiest->curr->sched_class != &fair_sched_class) {
> + p = detach_one_task(env);
> + goto unlock;
> + }
> +
> /*
> * Don't kick the active_load_balance_cpu_stop, if the curr task on
> * busiest CPU can't be moved to dst_cpu:
> @@ -8803,7 +8814,9 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
> unlock:
> raw_spin_unlock_irqrestore(&busiest->lock, flags);
>
> - if (status == started)
> + if (p)
> + attach_one_task(env->dst_rq, p);
> + else if (status == started)
> stop_one_cpu_nowait(cpu_of(busiest),
> active_load_balance_cpu_stop, busiest,
> &busiest->active_balance_work);
> --
> 2.22.0
>