[tip: sched/core] sched/locking: Add special p->blocked_on==PROXY_WAKING value for proxy return-migration

From: tip-bot2 for John Stultz

Date: Fri Apr 03 2026 - 08:34:26 EST


The following commit has been merged into the sched/core branch of tip:

Commit-ID: 2d7622669836dcbbb449741b4e6c503ffe005c25
Gitweb: https://git.kernel.org/tip/2d7622669836dcbbb449741b4e6c503ffe005c25
Author: John Stultz <jstultz@xxxxxxxxxx>
AuthorDate: Tue, 24 Mar 2026 19:13:21
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Fri, 03 Apr 2026 14:23:40 +02:00

sched/locking: Add special p->blocked_on==PROXY_WAKING value for proxy return-migration

As we add functionality to proxy execution, we may migrate a
donor task to a runqueue where it can't run due to cpu affinity.
Thus, we must be careful to ensure we return-migrate the task
back to a cpu in its cpumask when it becomes unblocked.
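
Concretely, the condition a return-migration must re-establish
looks roughly like the sketch below. This is illustrative only
and not part of this patch: the helper name is made up, while
cpu_of(), cpumask_test_cpu() and p->cpus_ptr are the existing
primitives.

  /* Sketch only: is @p on a runqueue it is not allowed to run on? */
  static bool proxy_needs_return(struct rq *rq, struct task_struct *p)
  {
          return !cpumask_test_cpu(cpu_of(rq), p->cpus_ptr);
  }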

Peter helpfully provided the following example with pictures:
"Suppose we have a ww_mutex cycle:

            ,-+-* Mutex-1 <-.
  Task-A ---'  |             | ,-- Task-B
              `-> Mutex-2 *-+-'

Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and
where Task-B holds Mutex-2 and tries to acquire Mutex-1.

Then the blocked_on->owner chain will go in circles.

  Task-A -> Mutex-2
     ^         |
     |         v
  Mutex-1 <- Task-B

We need two things:

- find_proxy_task() to stop iterating the circle;

- the woken task to 'unblock' and run, such that it can
  back off and re-try the transaction.

Now, the current code [without this patch] does:
  __clear_task_blocked_on();
  wake_q_add();

And surely clearing ->blocked_on is sufficient to break the
cycle.

Suppose it is Task-B that is made to back off, then we have:

Task-A -> Mutex-2 -> Task-B (no further blocked_on)

and it would attempt to run Task-B. Or worse, it could directly
pick Task-B and run it, without ever getting into
find_proxy_task().

Now, here is a problem because Task-B might not be runnable on
the CPU it is currently on; and because !task_is_blocked() we
don't get into the proxy paths, so nobody is going to fix this
up.

Ideally we would have dequeued Task-B alongside clearing
->blocked_on, but alas, [the lock ordering prevents us from
getting the task_rq_lock() and] spoils things."
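
For reference, the back-off mentioned above is the usual
ww_mutex transaction pattern described in
Documentation/locking/ww-mutex-design.rst. A minimal sketch,
with placeholder locks m1/m2 (not code from this patch):

  static DEFINE_WW_CLASS(ww_class);

  static void lock_both(struct ww_mutex *m1, struct ww_mutex *m2)
  {
          struct ww_acquire_ctx ctx;

          ww_acquire_init(&ctx, &ww_class);
          ww_mutex_lock(m1, &ctx);      /* first lock in a fresh ctx */
          while (ww_mutex_lock(m2, &ctx) == -EDEADLK) {
                  /* We lost: drop what we hold, wait for the winner */
                  ww_mutex_unlock(m1);
                  ww_mutex_lock_slow(m2, &ctx);
                  /* Now holding m2; re-take the other lock next */
                  swap(m1, m2);
          }
          ww_acquire_done(&ctx);
          /* critical section, unlock both, ww_acquire_fini(&ctx) */
  }

It is this -EDEADLK/back-off path that the wakeups in
__ww_mutex_die()/__ww_mutex_wound() below must reach, which is
why the woken task has to actually become runnable.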

Thus we need more than just a binary concept of the task being
blocked on a mutex or not.

So allow setting blocked_on to PROXY_WAKING, a special value
which signifies that the task is no longer blocked on a mutex,
but needs to be evaluated for return-migration *before* it can
be run.

This will then be used in a later patch to handle proxy
return-migration.
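
Roughly, code that consumes the field can then distinguish
three states. A sketch of the intended semantics (not the later
patch itself; the actual migration step is elided):

  if (!p->blocked_on) {
          /* Not blocked: p can be picked and run as usual */
  } else if (p->blocked_on == PROXY_WAKING) {
          /* Woken: return-migrate before running, if needed */
          if (!cpumask_test_cpu(cpu_of(rq), p->cpus_ptr)) {
                  /* ... move p to a CPU in p->cpus_ptr ... */
          }
  } else {
          /* Blocked on a mutex: follow the blocked_on->owner chain */
  }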

Signed-off-by: John Stultz <jstultz@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
Link: https://patch.msgid.link/20260324191337.1841376-7-jstultz@xxxxxxxxxx
---
 include/linux/sched.h     | 51 ++++++++++++++++++++++++++++++++++++--
 kernel/locking/mutex.c    |  2 +-
 kernel/locking/ww_mutex.h | 16 ++++++------
 kernel/sched/core.c       | 16 ++++++++++++-
 4 files changed, 74 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2eef9bc..8ec3b6d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2180,10 +2180,20 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
})

#ifndef CONFIG_PREEMPT_RT
+
+/*
+ * With proxy exec, if a task has been proxy-migrated, it may be a donor
+ * on a cpu that it can't actually run on. Thus we need a special state
+ * to denote that the task is being woken, but that it needs to be
+ * evaluated for return-migration before it is run. So if the task is
+ * blocked_on PROXY_WAKING, return-migrate it before running it.
+ */
+#define PROXY_WAKING ((struct mutex *)(-1L))
+
static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
{
lockdep_assert_held_once(&p->blocked_lock);
- return p->blocked_on;
+ return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;
}

static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
@@ -2211,7 +2221,7 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
* blocked_on relationships, but make sure we are not
* clearing the relationship with a different lock.
*/
- WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
+ WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
p->blocked_on = NULL;
}

@@ -2220,6 +2230,35 @@ static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
guard(raw_spinlock_irqsave)(&p->blocked_lock);
__clear_task_blocked_on(p, m);
}
+
+static inline void __set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
+{
+ /* Currently we serialize blocked_on under the task::blocked_lock */
+ lockdep_assert_held_once(&p->blocked_lock);
+
+ if (!sched_proxy_exec()) {
+ __clear_task_blocked_on(p, m);
+ return;
+ }
+
+ /* Don't set PROXY_WAKING if blocked_on was already cleared */
+ if (!p->blocked_on)
+ return;
+ /*
+ * There may be cases where we set PROXY_WAKING on tasks that were
+ * already set to waking, but make sure we are not changing
+ * the relationship with a different lock.
+ */
+ WARN_ON_ONCE(m && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
+ p->blocked_on = PROXY_WAKING;
+}
+
+static inline void set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
+{
+ guard(raw_spinlock_irqsave)(&p->blocked_lock);
+ __set_task_blocked_on_waking(p, m);
+}
+
#else
static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
{
@@ -2228,6 +2267,14 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mute
static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
{
}
+
+static inline void __set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
+{
+}
+
+static inline void set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
+{
+}
#endif /* !CONFIG_PREEMPT_RT */

static __always_inline bool need_resched(void)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 4aa79bc..7d35964 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -983,7 +983,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
next = waiter->task;

debug_mutex_wake_waiter(lock, waiter);
- clear_task_blocked_on(next, lock);
+ set_task_blocked_on_waking(next, lock);
wake_q_add(&wake_q, next);
}

diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index e4a8179..5cd9dfa 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
debug_mutex_wake_waiter(lock, waiter);
#endif
/*
- * When waking up the task to die, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
- * blocked_on relationships that can't resolve.
+ * When waking up the task to die, be sure to set the
+ * blocked_on to PROXY_WAKING. Otherwise we can see
+ * circular blocked_on relationships that can't resolve.
*/
- clear_task_blocked_on(waiter->task, lock);
+ set_task_blocked_on_waking(waiter->task, lock);
wake_q_add(wake_q, waiter->task);
}

@@ -339,15 +339,15 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
*/
if (owner != current) {
/*
- * When waking up the task to wound, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
- * blocked_on relationships that can't resolve.
+ * When waking up the task to wound, be sure to set the
+ * blocked_on to PROXY_WAKING. Otherwise we can see
+ * circular blocked_on relationships that can't resolve.
*
* NOTE: We pass NULL here instead of lock, because we
* are waking the mutex owner, who may be currently
* blocked on a different mutex.
*/
- clear_task_blocked_on(owner, NULL);
+ set_task_blocked_on_waking(owner, NULL);
wake_q_add(wake_q, owner);
}
return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bf4338f..c997d51 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4239,6 +4239,13 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
ttwu_queue(p, cpu, wake_flags);
}
out:
+ /*
+ * For now, if we've been woken up, clear the task->blocked_on
+ * regardless of whether it was set to a mutex or PROXY_WAKING,
+ * so the task can run. We will need to be more careful later
+ * when properly handling proxy migration.
+ */
+ clear_task_blocked_on(p, NULL);
if (success)
ttwu_stat(p, task_cpu(p), wake_flags);

@@ -6600,6 +6607,10 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)

/* Follow blocked_on chain. */
for (p = donor; (mutex = p->blocked_on); p = owner) {
+ /* If it's PROXY_WAKING, resched_idle so ttwu can complete */
+ if (mutex == PROXY_WAKING)
+ return proxy_resched_idle(rq);
+
/*
* By taking mutex->wait_lock we hold off concurrent mutex_unlock()
* and ensure @owner sticks around.
@@ -6620,6 +6631,11 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)

owner = __mutex_owner(mutex);
if (!owner) {
+ /*
+ * If there is no owner, clear blocked_on
+ * and return p so it can run and try to
+ * acquire the lock.
+ */
__clear_task_blocked_on(p, mutex);
return p;
}