[PATCH 7/7] sched/fair: Set sd->should_idle_balance when misfit
From: Morten Rasmussen
Date: Thu Feb 15 2018 - 11:22:25 EST
From: Valentin Schneider <valentin.schneider@xxxxxxx>
Idle balance is a great opportunity to pull a misfit task. However,
there are scenarios where misfit tasks are present but idle balance is
skipped because the should_idle_balance flag is not set.
A good example of this is a workload of n identical tasks. Let's suppose
we have a 2+2 Arm big.LITTLE system. We then spawn 4 fairly
CPU-intensive tasks - for the sake of simplicity let's say they are just
CPU hogs, even when running on big CPUs.
They are identical tasks, so on an SMP system they should all finish at
(roughly) the same time. However, in our case the LITTLE CPUs have less
compute capacity than the big CPUs, so tasks running on the LITTLEs take
longer to complete.
This means that the big CPUs will complete their work earlier, at which
point they should pull the tasks from the LITTLEs. What we want to
happen is summarized as follows:
a,b,c,d are our CPU-hogging tasks
_ signifies idling
LITTLE_0 | a a a a _ _
LITTLE_1 | b b b b _ _
---------|-------------
   big_0 | c c c c a a
   big_1 | d d d d b b
                   ^
                   ^
Tasks end on the big CPUs, idle balance happens
and the misfit tasks are pulled straight away
This however won't happen, because the should_idle_balance flag is
currently only set when at least one CPU has more than one runnable
task - which may very well not be the case here if our CPU-hogging
workload is all there is to run.
As such, this commit sets the should_idle_balance flag in
update_sg_lb_stats when a group is flagged as having a misfit task.
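For illustration only, below is a standalone userspace model of the
trigger condition (toy_rq, should_idle_balance_old/new are made-up
names; this is not the kernel code nor the exact plumbing of this
series). It reproduces the 2+2 scenario above: the old "more than one
runnable task" rule does not trigger an idle balance, the misfit rule
does.

/*
 * Toy model of the idle-balance trigger for the 2+2 scenario:
 * both LITTLEs run one CPU hog each, both bigs have just gone idle.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_rq {
	unsigned int nr_running;	/* runnable tasks on this CPU */
	unsigned long misfit_task_load;	/* load of a task too big for this CPU */
};

/* Old rule: balance only if some CPU has more than one runnable task. */
static bool should_idle_balance_old(const struct toy_rq *rqs, int nr_cpus)
{
	for (int i = 0; i < nr_cpus; i++)
		if (rqs[i].nr_running > 1)
			return true;
	return false;
}

/* New rule: additionally balance if any CPU carries a misfit task. */
static bool should_idle_balance_new(const struct toy_rq *rqs, int nr_cpus,
				    bool asym_cpucapacity)
{
	if (should_idle_balance_old(rqs, nr_cpus))
		return true;

	if (!asym_cpucapacity)
		return false;

	for (int i = 0; i < nr_cpus; i++)
		if (rqs[i].misfit_task_load)
			return true;
	return false;
}

int main(void)
{
	/* LITTLE_0, LITTLE_1 each run one misfit hog; big_0, big_1 are idle. */
	struct toy_rq rqs[4] = {
		{ .nr_running = 1, .misfit_task_load = 1024 },
		{ .nr_running = 1, .misfit_task_load = 1024 },
		{ .nr_running = 0, .misfit_task_load = 0 },
		{ .nr_running = 0, .misfit_task_load = 0 },
	};

	printf("old rule: %d\n", should_idle_balance_old(rqs, 4));	 /* 0: no pull */
	printf("new rule: %d\n", should_idle_balance_new(rqs, 4, true)); /* 1: pull */
	return 0;
}

Built with any C99 compiler, it prints 0 for the old rule and 1 for the
new one.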
cc: Ingo Molnar <mingo@xxxxxxxxxx>
cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
Signed-off-by: Morten Rasmussen <morten.rasmussen@xxxxxxx>
---
kernel/sched/fair.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2d2302b7b584..d080a144f87f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7871,8 +7871,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			sgs->idle_cpus++;
 
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
-		    !sgs->group_misfit_task_load && rq->misfit_task_load)
+		    !sgs->group_misfit_task_load && rq->misfit_task_load) {
 			sgs->group_misfit_task_load = rq->misfit_task_load;
+			*should_idle_balance = true;
+		}
 	}
 
 	/* Adjust by relative CPU capacity of the group */
--
2.7.4