On 12/05/17 21:57, Jeffrey Hugo wrote:
> On 5/12/2017 2:47 PM, Peter Zijlstra wrote:
>> On Fri, May 12, 2017 at 11:01:37AM -0600, Jeffrey Hugo wrote:
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index d711093..8f783ba 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -8219,8 +8219,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>>  	/* All tasks on this runqueue were pinned by CPU affinity */
>>>  	if (unlikely(env.flags & LBF_ALL_PINNED)) {
>>> +		struct cpumask tmp;
>>
>> You cannot have cpumask's on stack.
>
> Well, we need a temp variable to store the intermediate values, since the
> cpumask_* operations are somewhat limited and require a "storage"
> parameter.
>
> Do you have any suggestions that meet all of these requirements?
What about using env.dst_grpmask and checking whether cpus is an improper
subset of env.dst_grpmask? In this case we have to stop setting
env.dst_grpmask = NULL in the CPU_NEWLY_IDLE case, which is IMHO not an
issue since idle is passed via env into can_migrate_task().
And cpus has to be and'ed with sched_domain_span(env.sd).
I'm not sure whether this will work with 'not fully connected NUMA'
(SD_OVERLAP), though ...