[PATCH v7 01/22] sched: Favour predetermined active CPU as migration destination

From: Will Deacon
Date: Tue May 25 2021 - 11:15:23 EST


Since commit 6d337eab041d ("sched: Fix migrate_disable() vs
set_cpus_allowed_ptr()"), the migration stopper thread is left to
determine the destination CPU of the running task being migrated, even
though set_cpus_allowed_ptr() already identified a candidate target
earlier on.

Unfortunately, the stopper doesn't check whether the new destination
CPU is active, so __migrate_task() can leave the task sitting on a CPU
that is outside of its affinity mask, even if the CPU originally
chosen by set_cpus_allowed_ptr() (SCA) is still active.

For example, with CONFIG_CPUSETS=n:

$ taskset -pc 0-2 $PID
# offline CPUs 3-4
$ taskset -pc 3-5 $PID

Then $PID remains on its current CPU (one of 0-2) and does not get
migrated to CPU 5: the stopper's fallback (shown below) can pick one of
the offlined CPUs 3-4 from the new mask, and __migrate_task() refuses
to move the task to an inactive CPU.
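
That fallback is the logic in migration_cpu_stop() which this patch
removes. Condensed from the hunk further down, with comments added
here for illustration:

	/* migration_cpu_stop(), prior to this patch: */
	if (dest_cpu < 0) {
		/* Task already on a CPU in its affinity mask? Done. */
		if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
			goto out;

		/*
		 * Otherwise pick *any* CPU from the mask, with no check
		 * that it is still active. An offline pick (e.g. CPU 3
		 * or 4 above) makes __migrate_task() bail, leaving the
		 * task where it was.
		 */
		dest_cpu = cpumask_any_distribute(&p->cpus_mask);
	}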

Rework 'struct migration_arg' so that an optional pointer to an affinity
mask can be provided to the stopper, allowing us to respect the
original choice of destination CPU when migrating. Note that there is
still the potential to race with a concurrent CPU hot-unplug of the
destination CPU if the caller does not hold the hotplug lock.
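
If a caller does need the chosen destination to remain active across
the move, it can hold the hotplug lock itself. A minimal sketch, not
part of this patch, assuming the caller already has the task 'p' and
its updated 'new_mask' in hand:

	int ret;

	cpus_read_lock();	/* block concurrent CPU hot-unplug */
	ret = set_cpus_allowed_ptr(p, new_mask);
	cpus_read_unlock();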

Reported-by: Valentin Schneider <valentin.schneider@xxxxxxx>
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
---
kernel/sched/core.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5226cc26a095..1702a60d178d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1869,6 +1869,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
 struct migration_arg {
 	struct task_struct		*task;
 	int				dest_cpu;
+	const struct cpumask		*dest_mask;
 	struct set_affinity_pending	*pending;
 };

@@ -1917,6 +1918,7 @@ static int migration_cpu_stop(void *data)
 	struct set_affinity_pending *pending = arg->pending;
 	struct task_struct *p = arg->task;
 	int dest_cpu = arg->dest_cpu;
+	const struct cpumask *dest_mask = arg->dest_mask;
 	struct rq *rq = this_rq();
 	bool complete = false;
 	struct rq_flags rf;
@@ -1956,12 +1958,8 @@ static int migration_cpu_stop(void *data)
 			complete = true;
 		}
 
-		if (dest_cpu < 0) {
-			if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
-				goto out;
-
-			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
-		}
+		if (dest_mask && cpumask_test_cpu(task_cpu(p), dest_mask))
+			goto out;
 
 		if (task_on_rq_queued(p))
 			rq = __migrate_task(rq, &rf, p, dest_cpu);
@@ -2249,7 +2247,8 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 			init_completion(&my_pending.done);
 			my_pending.arg = (struct migration_arg) {
 				.task = p,
-				.dest_cpu = -1,		/* any */
+				.dest_cpu = dest_cpu,
+				.dest_mask = &p->cpus_mask,
 				.pending = &my_pending,
 			};
 

--
2.31.1.818.g46aad6cb9e-goog