Re: [PATCH 8/8] sched/fair: Relax task_hot() for misfit tasks
From: Vincent Guittot
Date: Tue Feb 09 2021 - 04:01:01 EST
On Mon, 8 Feb 2021 at 19:24, Valentin Schneider
<valentin.schneider@xxxxxxx> wrote:
>
> On 08/02/21 17:21, Vincent Guittot wrote:
> > On Thu, 28 Jan 2021 at 19:32, Valentin Schneider
> > <valentin.schneider@xxxxxxx> wrote:
> >>
> >> Misfit tasks can and will be preempted by the stopper to migrate them over
> >> to a higher-capacity CPU. However, when a runnable but not currently-running
> >> misfit task is scanned by the load balancer (i.e. detach_tasks()), the
> >> task_hot() rate-limiting logic may prevent us from enqueuing said task onto
> >> a higher-capacity CPU.
> >>
> >> Align detach_tasks() with the active-balance logic and let it pick a
> >> cache-hot misfit task when the destination CPU can provide a capacity
> >> uplift.
> >>
> >> Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
> >> ---
> >> kernel/sched/fair.c | 11 +++++++++++
> >> 1 file changed, 11 insertions(+)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index cba9f97d9beb..c2351b87824f 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -7484,6 +7484,17 @@ static int task_hot(struct task_struct *p, struct lb_env *env)
> >> if (env->sd->flags & SD_SHARE_CPUCAPACITY)
> >> return 0;
> >>
> >> + /*
> >> + * On a (sane) asymmetric CPU capacity system, the increase in compute
> >> + * capacity should offset any potential performance hit caused by a
> >> + * migration.
> >> + */
> >> + if (sd_has_asym_cpucapacity(env->sd) &&
> >> + env->idle != CPU_NOT_IDLE &&
> >> + !task_fits_capacity(p, capacity_of(env->src_cpu)) &&
> >> + cpu_capacity_greater(env->dst_cpu, env->src_cpu))
> >
> > Why not use env->migration_type to directly detect that it's a
> > misfit-task active migration?
> >
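
To be more concrete, I was thinking of something along these lines in
task_hot() (only a sketch, not a tested or complete change):

	/*
	 * Sketch: reuse the classification the load balancer has already
	 * done instead of re-deriving "misfit" from capacities here.
	 */
	if (env->migration_type == migrate_misfit)
		return 0;
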
>
> This is admittedly a kludge. Consider the scenario described in patch 7/8,
> i.e.:
> - there's a misfit task running on a LITTLE CPU
> - a big CPU completes its work and is about to go through newidle_balance()
>
> Now, consider that by the time the big CPU gets into load_balance(), the misfit
> task on the LITTLE CPU has been preempted by a percpu kworker. As of right now,
> it's quite likely the imbalance won't be classified as group_misfit_task,
> but as group_overloaded (depends on the topology / workload, but that's a
> symptom I've been seeing).
IIRC, we already discussed this. It would be better to track that a
misfit task remains on the rq instead of adding a lot of special cases
everywhere.
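
Something along those lines, as a rough and untested sketch (the
"misfit_task_runnable" field and the helper name are made up for
illustration, they don't exist today):

/*
 * Record whether a task queued on this rq doesn't fit its CPU, so the
 * load-balance statistics can keep classifying the group as misfit even
 * when something else preempted that task.
 */
static inline void note_runnable_misfit(struct rq *rq, struct task_struct *p)
{
	if (!static_branch_unlikely(&sched_asym_cpucapacity))
		return;

	/* misfit_task_runnable is a made-up per-rq field for this sketch */
	rq->misfit_task_runnable = p &&
		!task_fits_capacity(p, capacity_of(cpu_of(rq)));
}
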
>
> Unfortunately, even if we e.g. change the misfit load-balance logic to also
> track preempted misfit tasks (rather than just the rq's current), this
> could still happen AFAICT.
>
> Long story short, we already trigger an active-balance to upmigrate running
> misfit tasks; this changes task_hot() to allow any preempted task that
> doesn't fit on its CPU to be upmigrated (regardless of the imbalance
> classification).
>
> >> + return 0;
> >> +
> >> /*
> >> * Buddy candidates are cache hot:
> >> */
> >> --
> >> 2.27.0
> >>