Re: [PATCH v2] sched_ext: Fix stale direct dispatch state in ddsp_dsq_id

From: Andrea Righi

Date: Thu Apr 02 2026 - 17:07:16 EST


Hi Tejun,

On Thu, Apr 02, 2026 at 10:10:20AM -1000, Tejun Heo wrote:
> Hello, Andrea.
>
> On Thu, Apr 02, 2026 at 10:57:43AM +0200, Andrea Righi wrote:
> > @p->scx.ddsp_dsq_id can be left set (non-SCX_DSQ_INVALID), triggering a
> > spurious warning in mark_direct_dispatch() when the next wakeup's
> > ops.select_cpu() calls scx_bpf_dsq_insert(), such as:
> >
> > WARNING: kernel/sched/ext.c:1273 at scx_dsq_insert_commit+0xcd/0x140
> >
> > The root cause is that ddsp_dsq_id was only cleared in dispatch_enqueue(),
> > which is not reached in all paths that consume or cancel a direct dispatch
> > verdict. Instead, clear it at the right places:
> >
> > - direct_dispatch(): cache the direct dispatch state in local variables
> > and clear it before dispatch_enqueue() on the synchronous path. For
> > the deferred path, the direct dispatch state must remain set until
> > process_ddsp_deferred_locals() consumes it.
> >
> > - process_ddsp_deferred_locals(): cache the dispatch state in local
> > variables and clear it before calling dispatch_to_local_dsq(), which
> > may migrate the task to another rq.
> >
> > - do_enqueue_task(): clear the dispatch state on the enqueue path
> > (local/global/bypass fallbacks), where the direct dispatch verdict is
> > ignored.
> >
> > - dequeue_task_scx(): clear the dispatch state after dispatch_dequeue()
> > to handle both the deferred dispatch cancellation and the holding_cpu
> > race, covering all cases where a pending direct dispatch is
> > cancelled.
> >
> > - scx_disable_task(): clear the direct dispatch state when
> > transitioning a task out of the current scheduler. Waking tasks may
> > have had the direct dispatch state set by the outgoing scheduler's
> > ops.select_cpu() and then been queued on a wake_list via
> > ttwu_queue_wakelist(), when SCX_OPS_ALLOW_QUEUED_WAKEUP is set. Such
> > tasks are not on the runqueue and are not iterated by scx_bypass(),
> > so their direct dispatch state won't be cleared. Without this clear,
> > when the new scheduler calls scx_enable_task() for these tasks, any
> > subsequent ops.select_cpu() call that tries to direct dispatch the
> > task will trigger the WARN_ON_ONCE() in mark_direct_dispatch().
>
> Can you add an abbreviated version of the above as a function comment on
> clear_direct_dispatch()?

Ack.

>
> > static void direct_dispatch(struct scx_sched *sch, struct task_struct *p,
> > u64 enq_flags)
> > {
> ...
> > @@ -1303,6 +1301,12 @@ static void direct_dispatch(struct scx_sched *sch, struct task_struct *p,
> > if (dsq->id == SCX_DSQ_LOCAL && dsq != &rq->scx.local_dsq) {
> > unsigned long opss;
> >
> > + /*
> > + * Update the direct dispatch state and keep it until
> > + * process_ddsp_deferred_locals() consumes it.
> > + */
> > + p->scx.ddsp_enq_flags = ddsp_enq_flags;
>
> I know I suggested it but this looks kinda odd. How about we keep the
> original p->scx.ddsp_enq_flags |= enq_flags above and then do
>
> ...
>
> Cache enq_flags here?

Ack.

>
> > + clear_direct_dispatch(p);
> > + dispatch_enqueue(sch, dsq, p, ddsp_enq_flags | SCX_ENQ_CLEAR_OPSS);
> > }
> >
> > static bool scx_rq_online(struct rq *rq)
> ...
> > @@ -3147,6 +3155,8 @@ static bool task_dead_and_done(struct task_struct *p)
> >
> > lockdep_assert_rq_held(rq);
> >
> > + clear_direct_dispatch(p);
>
> This is task_dead_and_done(), not scx_disable_task(). Is this intended?

No, I messed up. My initial patch was based on for-7.1; when I backported it
to for-7.0-fixes, this chunk was applied to the wrong function. Thanks for
catching it!

I'll send a new version.

-Andrea