Re: [PATCH 6/6] sched: Simplify set_affinity_pending refcounts

From: Peter Zijlstra
Date: Thu Feb 25 2021 - 04:29:04 EST


On Wed, Feb 24, 2021 at 05:59:01PM +0000, Valentin Schneider wrote:

> Your change reinstates the "triple SCA" pattern, where a stopper can run
> with arg->pending && arg->pending != p->migration_pending, which I was
> kinda happy to see go away...

Right, fair enough. Any workload that can tell the difference is doing
it wrong anyway :-)

OK, I've munged your two patches together into the below.
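
Illustration only: a rough userspace sketch of the invariant the
WARN_ON_ONCE() in the patch relies on, namely that once ->stop_pending is
set, only the stopper clears p->migration_pending, and hence a non-NULL
arg->pending must still equal p->migration_pending by the time the stopper
runs. The struct layout and the install_pending()/stopper() helpers below
are simplified stand-ins for illustration, not the actual kernel code.

/* Standalone model; build with: gcc -Wall -o sketch sketch.c */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct set_affinity_pending { int dummy; };

struct task {
	struct set_affinity_pending *migration_pending;
	int stop_pending;
};

/*
 * Caller side of the affinity change: install the request, mark a stopper
 * as pending, and hand the same pointer over as the stopper's argument.
 */
static struct set_affinity_pending *
install_pending(struct task *p, struct set_affinity_pending *req)
{
	p->migration_pending = req;
	p->stop_pending = 1;
	return req;		/* becomes the stopper's arg->pending */
}

/*
 * Stopper side: with ->stop_pending set, nobody else may have replaced
 * p->migration_pending, so the conditional clear can become unconditional.
 */
static void stopper(struct task *p, struct set_affinity_pending *pending)
{
	assert(!pending || pending == p->migration_pending);

	if (pending) {
		p->migration_pending = NULL;	/* unconditional, as in the patch */
		printf("completed %p\n", (void *)pending);
	}
	p->stop_pending = 0;
}

int main(void)
{
	struct task p = { NULL, 0 };
	struct set_affinity_pending req = { 0 };

	stopper(&p, install_pending(&p, &req));
	return 0;
}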

---
Subject: sched: Simplify migration_cpu_stop()
From: Valentin Schneider <valentin.schneider@xxxxxxx>
Date: Thu Feb 25 10:22:30 CET 2021

Since, when ->stop_pending is set, only the stopper can uninstall
p->migration_pending, a few ifs can be simplified, because:

(pending != NULL) => (pending == p->migration_pending)

Also, the fatty comment above affine_move_task() probably needs a bit
of gardening.

Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
kernel/sched/core.c | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1927,6 +1927,12 @@ static int migration_cpu_stop(void *data
 	rq_lock(rq, &rf);
 
 	/*
+	 * If we were passed a pending, then ->stop_pending was set, thus
+	 * p->migration_pending must have remained stable.
+	 */
+	WARN_ON_ONCE(pending && pending != p->migration_pending);
+
+	/*
 	 * If task_rq(p) != rq, it cannot be migrated here, because we're
 	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
 	 * we're holding p->pi_lock.
@@ -1936,8 +1942,7 @@ static int migration_cpu_stop(void *data
 			goto out;
 
 		if (pending) {
-			if (p->migration_pending == pending)
-				p->migration_pending = NULL;
+			p->migration_pending = NULL;
 			complete = true;
 		}

@@ -1976,8 +1981,7 @@ static int migration_cpu_stop(void *data
 		 * somewhere allowed, we're done.
 		 */
 		if (cpumask_test_cpu(task_cpu(p), p->cpus_ptr)) {
-			if (p->migration_pending == pending)
-				p->migration_pending = NULL;
+			p->migration_pending = NULL;
 			complete = true;
 			goto out;
 		}
@@ -2165,16 +2169,21 @@ void do_set_cpus_allowed(struct task_str
  *
  * (1) In the cases covered above. There is one more where the completion is
  *     signaled within affine_move_task() itself: when a subsequent affinity request
- *     cancels the need for an active migration. Consider:
+ *     occurs after the stopper bailed out due to the targeted task still being
+ *     Migrate-Disable. Consider:
  *
  *     Initial conditions: P0->cpus_mask = [0, 1]
  *
- *     P0@CPU0            P1                             P2
- *
- *     migrate_disable();
- *     <preempted>
+ *     CPU0               P1                             P2
+ *     <P0>
+ *       migrate_disable();
+ *       <preempted>
  *                        set_cpus_allowed_ptr(P0, [1]);
  *                          <blocks>
+ *     <migration/0>
+ *       migration_cpu_stop()
+ *         is_migration_disabled()
+ *           <bails>
  *                                                       set_cpus_allowed_ptr(P0, [0, 1]);
  *                                                         <signal completion>
  *                          <awakes>