Re: [PATCH v2 1/2] sched: proxy-exec: Close race causing workqueue work being delayed
From: K Prateek Nayak
Date: Fri May 01 2026 - 02:44:47 EST
Hello John,
Mostly cosmetic nitpicks. The overall idea looks good.
On 5/1/2026 3:20 AM, John Stultz wrote:
> @@ -2183,18 +2183,56 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
> #ifndef CONFIG_PREEMPT_RT
>
> /*
> - * With proxy exec, if a task has been proxy-migrated, it may be a donor
> - * on a cpu that it can't actually run on. Thus we need a special state
> - * to denote that the task is being woken, but that it needs to be
> - * evaluated for return-migration before it is run. So if the task is
> - * blocked_on PROXY_WAKING, return migrate it before running it.
> + * The proxy exec blocked_on pointer value uses the low bit as a latch
> + * value which clarifies if the blocked_on value is used for proxying or
> + * not.
> + *
> + * The state machine looks something like
> + * NULL -> ptr:unlatched -> ptr:latched -> PROXY_WAKING -> NULL
> + *
> + * With some additional transitions:
> + * ptr:unlatched -> NULL (done on current, or via set_task_blocked_on_waking())
> + * ptr:latched -> NULL (done only on current)
> + *
> + * 1) NULL and ptr:unlatched are effectively equivalent, no proxying will occur
> + * 2) ptr:latched is the state when proxying will occur
> + * 3) PROXY_WAKING is used when the task is being woken to ensure we
> + * return-migrate proxy-migrated tasks before running them (note it has
> + * the latch bit set).
> */
> -#define PROXY_WAKING ((struct mutex *)(-1L))
> +#define PROXY_BLOCKED_LATCH (1UL)
> +#define PROXY_BLOCKED_ON_MASK(x) ((struct mutex *)((unsigned long)(x) & ~PROXY_BLOCKED_LATCH))
nit. I think PROXY_BLOCKED_ON_MUTEX() would be a better name since this
macro returns the actual mutex pointer. No strong feelings; I'll defer
to others for more comments.
> +#define PROXY_WAKING ((struct mutex *)(-1L)) /* PROXY_WAKING has LATCH bit set */
> +
> +static inline struct mutex *task_is_blocked_on(struct task_struct *p)
I think this can take over the role of task_is_blocked(), no?
Only one caller, try_to_block_task(), will require looking at the raw
blocked_on state; other than that, it is safe for the scheduler to
move the preempted task around until it has grabbed the blocked_on latch.
> +{
> + if (!sched_proxy_exec())
> + return false;
> + return (struct mutex *)((unsigned long)p->blocked_on & PROXY_BLOCKED_LATCH);
> +}
> +
> +static inline void __set_task_blocked_on_latched(struct task_struct *p)
> +{
Are you planning to reuse this later in the series? If not, I think we
can convert this to try_set_task_blocked_on_latch() and return false if
it finds blocked_on has already been cleared.
That way the lock + check in try_to_block_task() can be moved here.
> + lockdep_assert_held_once(&p->blocked_lock);
> + WARN_ON_ONCE(!p->blocked_on);
> + p->blocked_on = (struct mutex *)((unsigned long)p->blocked_on | PROXY_BLOCKED_LATCH);
> +}
> +
> +static inline struct mutex *__get_task_latched_blocked_on(struct task_struct *p)
I think this can be __get_task_blocked_on() ...
> +{
> + if (!task_is_blocked_on(p))
> + return NULL;
> + if (p->blocked_on == PROXY_WAKING)
> + return PROXY_WAKING;
> + return PROXY_BLOCKED_ON_MASK(p->blocked_on);
> +}
>
> static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
... and this can be __get_task_blocked_on_raw(), since only one caller
in kernel/locking/mutex.h really cares about the ~PROXY_BLOCKED_LATCH
value outside of this file.
Everything in the sched bits can then simply use __get_task_blocked_on(),
which seems much cleaner.
Thoughts?
> {
> lockdep_assert_held_once(&p->blocked_lock);
> - return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;
> + if (p->blocked_on == PROXY_WAKING)
> + return NULL;
> + return PROXY_BLOCKED_ON_MASK(p->blocked_on);
> }
>
> static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
> @@ -2215,6 +2253,8 @@ static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
>
> static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *m)
> {
> + struct mutex *bo = p->blocked_on;
> +
> /* Currently we serialize blocked_on under the task::blocked_lock */
> lockdep_assert_held_once(&p->blocked_lock);
> /*
> @@ -2222,7 +2262,7 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
> * blocked_on relationships, but make sure we are not
> * clearing the relationship with a different lock.
> */
> - WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
> + WARN_ON_ONCE(m && bo && __get_task_blocked_on(p) != m && bo != PROXY_WAKING);
> p->blocked_on = NULL;
> }
>
> @@ -2242,15 +2282,17 @@ static inline void __set_task_blocked_on_waking(struct task_struct *p, struct mu
> return;
> }
>
> - /* Don't set PROXY_WAKING if blocked_on was already cleared */
> - if (!p->blocked_on)
> + /* Don't set PROXY_WAKING if we are not really blocked_on */
> + if (!task_is_blocked_on(p)) {
> + p->blocked_on = NULL; /* clear if unlatched */
> return;
> + }
> /*
> * There may be cases where we set PROXY_WAKING on tasks that were
> * already set to waking, but make sure we are not changing
> * the relationship with a different lock.
> */
> - WARN_ON_ONCE(m && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
> + WARN_ON_ONCE(m && __get_task_blocked_on(p) != m && p->blocked_on != PROXY_WAKING);
> p->blocked_on = PROXY_WAKING;
> }
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index da20fb6ea25ae..2f912bf698446 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6599,8 +6599,13 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
> * blocked on a mutex, and we want to keep it on the runqueue
> * to be selectable for proxy-execution.
> */
> - if (!should_block)
> - return false;
> + if (!should_block) {
> + guard(raw_spinlock)(&p->blocked_lock);
> + if (p->blocked_on) {
> + __set_task_blocked_on_latched(p);
> + return false;
> + }
> + }
In my head, writing this as:

	if (!should_block && try_set_task_blocked_on_latch(p))
		return false;

seems much cleaner. I'll defer to others for comments.
>
> p->sched_contributes_to_load =
> (task_state & TASK_UNINTERRUPTIBLE) &&
> @@ -6833,7 +6838,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> int owner_cpu;
>
> /* Follow blocked_on chain. */
> - for (p = donor; (mutex = p->blocked_on); p = owner) {
> + for (p = donor; (mutex = __get_task_latched_blocked_on(p)); p = owner) {
> /* if its PROXY_WAKING, do return migration or run if current */
> if (mutex == PROXY_WAKING) {
> if (task_current(rq, p)) {
> @@ -6851,7 +6856,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> guard(raw_spinlock)(&p->blocked_lock);
>
> /* Check again that p is blocked with blocked_lock held */
> - if (mutex != __get_task_blocked_on(p)) {
> + if (mutex != __get_task_latched_blocked_on(p)) {
> /*
> * Something changed in the blocked_on chain and
> * we don't know if only at this level. So, let's
> @@ -7107,7 +7112,7 @@ static void __sched notrace __schedule(int sched_mode)
> struct task_struct *prev_donor = rq->donor;
>
> rq_set_donor(rq, next);
> - if (unlikely(next->blocked_on)) {
> + if (unlikely(task_is_blocked_on(next))) {
> next = find_proxy_task(rq, next, &rf);
> if (!next) {
> zap_balance_callbacks(rq);
--
Thanks and Regards,
Prateek