On Thu, May 18, 2017 at 01:36:01PM -0600, Jeffrey Hugo wrote:
> We have looked through and agree with your proposed change; however, we
> would still need to mask out the dst_cpu when considering the redo path.
> We will include this modification in the next patch set.
>
> The group_imbalance path correctly sets the flag to indicate the group
> can not be properly balanced due to affinity, but the redo condition
> right after this branch incorrectly assumes that there may be other
> cores with work to be pulled by considering cores outside of the
> scheduling domain in question.

So it's been a while since I looked at any of this, but from a quick
look, env->cpus appears to only be applied to group/balance masks.

In which case, we can easily do something like the below. Did I miss
something?
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 219fe58e3023..1724e4433f89 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8104,7 +8104,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	if (idle == CPU_NEWLY_IDLE)
 		env.dst_grpmask = NULL;
 
-	cpumask_copy(cpus, cpu_active_mask);
+	cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);
 
 	schedstat_inc(sd->lb_count[idle]);