Re: [PATCH] sched_ext: Fix stale direct dispatch state in ddsp_dsq_id

From: Tejun Heo

Date: Wed Apr 01 2026 - 18:52:16 EST


Hello,

(cc'ing Patrick Somaru. This is the same issue reported on
https://github.com/sched-ext/scx/pull/3482)

On Wed, Apr 01, 2026 at 11:56:19PM +0200, Andrea Righi wrote:
> @p->scx.ddsp_dsq_id can be left set (non-SCX_DSQ_INVALID) in three
> scenarios, causing a spurious WARN_ON_ONCE() in mark_direct_dispatch()
> when the next wakeup's ops.select_cpu() calls scx_bpf_dsq_insert():
>
> 1. Deferred dispatch cancellation: when a task is directly dispatched to
> a remote CPU's local DSQ via ops.select_cpu() or ops.enqueue(), the
> dispatch is deferred (since we can't lock the remote rq while holding
> the current one). If the task is dequeued before processing the
> dispatch in process_ddsp_deferred_locals(), dispatch_dequeue()
> removes the task from the list leaving a stale direct dispatch state.
>
> Fix: clear ddsp_dsq_id and ddsp_enq_flags in the !list_empty branch
> of dispatch_dequeue().
>
> 2. Holding-cpu dispatch race: when dispatch_to_local_dsq() transfers a
> task to another CPU's local DSQ, it sets holding_cpu and releases
> DISPATCHING before locking the source rq. If dequeue wins the race
> and clears holding_cpu, dispatch_enqueue() is never called and
> ddsp_dsq_id is not cleared.
>
> Fix: clear ddsp_dsq_id and ddsp_enq_flags when clearing holding_cpu
> in dispatch_dequeue().

These two just mean that dequeue needs to clear it, right?

> 3. Cross-scheduler-instance stale state: When an SCX scheduler exits,
> scx_bypass() iterates over all runnable tasks to dequeue/re-enqueue
> them, but sleeping tasks are not on any runqueue and are not touched.
> If a sleeping task had a deferred dispatch in flight (ddsp_dsq_id
> set) at the time the scheduler exited, the state persists. When a new
> scheduler instance loads and calls scx_enable_task() for all tasks,
> it does not reset this leftover state. The next wakeup's
> ops.select_cpu() then sees a non-INVALID ddsp_dsq_id and triggers:
>
> WARN_ON_ONCE(p->scx.ddsp_dsq_id != SCX_DSQ_INVALID)
>
> Fix: clear ddsp_dsq_id and ddsp_enq_flags in scx_enable_task() before
> calling ops.enable(), ensuring each new scheduler instance starts
> with a clean direct dispatch state per task.

I don't understand this one. If we fix the missing clearing from dequeue,
where would the residual ddsp_dsq_id come from? How would a sleeping task
have ddsp_dsq_id set? Note that select_cpu() + enqueue() call sequence is
atomic w.r.t. dequeue as both are protected by pi_lock.

It has always been a bit bothersome that ddsp_dsq_id was being cleared in
dispatch_enqueue(). It was there to catch the cases where ddsp_dsq_id was
overridden, but it just isn't the right place. Can we do the following?

- Add clear_direct_dispatch() which clears ddsp_dsq_id and ddsp_enq_flags.

- Add a clear_direct_dispatch() call under the enqueue: label in
  do_enqueue_task() and remove the ddsp clearing from dispatch_enqueue().
  This should catch all cases that ignore ddsp.

- Add clear_direct_dispatch() call after dispatch_enqueue() in
direct_dispatch(). This clears it for the synchronous consumption.

- Add clear_direct_dispatch() call before the dispatch_to_local_dsq() call in
  process_ddsp_deferred_locals(). Note that the function has to cache and
  clear the ddsp fields *before* calling dispatch_to_local_dsq() as the
  function will migrate the task to another rq and we can't control what
  happens to it afterwards. Even for the previous synchronous case, it may
  just be a better pattern to always cache dsq_id and enq_flags in local vars
  and clear p->scx.ddsp* before calling dispatch_enqueue().

- Add clear_direct_dispatch() call to dequeue_task_scx() after
dispatch_dequeue().

I think this should capture all cases and the fields are cleared where they
should be cleared (either consumed or canceled).

Thanks.

--
tejun