Re: [PATCHSET v2 sched_ext/for-7.1] sched_ext: Implement SCX_ENQ_IMMED
From: Andrea Righi
Date: Fri Mar 13 2026 - 15:22:13 EST
On Fri, Mar 13, 2026 at 01:31:08AM -1000, Tejun Heo wrote:
> Hello,
>
> Currently, BPF schedulers that want to ensure tasks don't linger on local
> DSQs behind other tasks or on CPUs taken by higher-priority scheduling
> classes must resort to hooking the sched_switch tracepoint or implementing
> the now-deprecated ops.cpu_acquire/release(). Both approaches are cumbersome
> and partial - sched_switch doesn't handle a local DSQ that ends up with
> multiple tasks queued, a situation that is difficult to prevent reliably.
> cpu_release() is even more limited, missing cases such as a higher-priority
> task waking up while an idle CPU is waking up to run an SCX task. Neither can
> atomically determine whether a CPU is truly available at the moment of
> dispatch.
>
> SCX_ENQ_IMMED replaces these with a single dispatch flag that provides a
> kernel-enforced guarantee: a task dispatched with IMMED either gets on the
> CPU immediately or is reenqueued to the BPF scheduler. It will never
> linger on a local DSQ behind other tasks or be silently put back after
> preemption. This gives BPF schedulers comprehensive latency control directly
> in the dispatch path.
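
As an illustration of the intended usage, a minimal ops.enqueue() might look
like the sketch below. scx_bpf_dsq_insert(), SCX_DSQ_LOCAL and SCX_SLICE_DFL
are the existing sched_ext kfunc and constants; only SCX_ENQ_IMMED comes from
this series, so treat this as a sketch of the usage rather than the final API:

  #include <scx/common.bpf.h>

  /* Sketch: dispatch straight to the local DSQ with IMMED protection. */
  void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
  {
          /*
           * @p either gets on this CPU right away or comes back through
           * ops.enqueue() - it never waits behind other tasks on the
           * local DSQ.
           */
          scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
                             enq_flags | SCX_ENQ_IMMED);
  }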
>
> The protection is persistent - it survives SAVE/RESTORE cycles, slice
> extensions and higher-priority class preemptions. If an IMMED task is
> preempted while running, it gets reenqueued through ops.enqueue() with
> SCX_TASK_REENQ_PREEMPTED instead of silently placed back on the local DSQ.
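
A hedged sketch of what consuming that reenqueue could look like on the BPF
side. The changelog below says IMMED is tracked in p->scx.flags, so this
assumes SCX_TASK_REENQ_PREEMPTED is likewise visible there (if it is delivered
via enq_flags instead, the test changes accordingly); HIPRI_DSQ and SHARED_DSQ
are hypothetical user DSQs created elsewhere with scx_bpf_create_dsq():

  /* Hypothetical user DSQ IDs, created in ops.init() via scx_bpf_create_dsq(). */
  #define HIPRI_DSQ   0
  #define SHARED_DSQ  1

  void BPF_STRUCT_OPS(sketch_reenq_enqueue, struct task_struct *p, u64 enq_flags)
  {
          /*
           * @p held IMMED protection and was preempted while running, so
           * it came back here instead of being silently parked on the
           * local DSQ. Queue it on a high-priority DSQ so it regains a
           * CPU quickly; everything else goes to the shared DSQ.
           */
          if (p->scx.flags & SCX_TASK_REENQ_PREEMPTED)
                  scx_bpf_dsq_insert(p, HIPRI_DSQ, SCX_SLICE_DFL, enq_flags);
          else
                  scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
  }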
>
> This also enables opportunistic CPU sharing across sub-schedulers. Without
> IMMED, a sub-scheduler can stuff the local DSQ of a shared CPU, making it
> difficult for others to use. With IMMED, tasks only stay on a CPU when they
> can actually run, keeping CPUs available for other schedulers.
>
> Patches 1-2 are prep refactoring. Patch 3 implements SCX_ENQ_IMMED. Patches
> 4-5 plumb enq_flags through the consume and move_to_local paths so IMMED
> works on those paths too. Patch 6 adds SCX_OPS_ALWAYS_ENQ_IMMED.
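
Assuming SCX_OPS_ALWAYS_ENQ_IMMED follows the existing SCX_OPS_* ops flags
(e.g. SCX_OPS_ENQ_LAST) and is set in sched_ext_ops.flags, opting a whole
scheduler in would presumably look like this, reusing the enqueue sketch from
above:

  SEC(".struct_ops.link")
  struct sched_ext_ops sketch_ops = {
          .enqueue = (void *)sketch_enqueue,
          /*
           * Hypothetical: treat every dispatch, including
           * scx_bpf_dsq_move_to_local(), as if SCX_ENQ_IMMED were passed.
           */
          .flags   = SCX_OPS_ALWAYS_ENQ_IMMED,
          .name    = "sketch",
  };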
>
> v2: - Split prep patches out of main IMMED patch (#1, #2).
>     - Rewrite is_curr_done() as rq_is_open() using rq->next_class and
>       implement wakeup_preempt_scx() for complete higher-class preemption
>       coverage (#3).
>     - Track IMMED persistently in p->scx.flags and reenqueue
>       preempted-while-running tasks through ops.enqueue() (#3).
>     - Drop the "disallow setting slice to zero" patch - no longer needed
>       with the rq_is_open() approach.
>     - Plumb enq_flags through consume and move_to_local paths (#4, #5).
>     - Cover scx_bpf_dsq_move_to_local() in OPS_ALWAYS_IMMED (#6).
>     - Remove obsolete sched_switch tracepoint and cpu_release handlers
>       from scx_qmap, add IMMED stress test (#6) (Andrea Righi).
>
> v1: https://lore.kernel.org/r/20260307002817.1298341-1-tj@xxxxxxxxxx
I only found a small typo in patch 3; everything else looks good to me.
Reviewed-by: Andrea Righi <arighi@xxxxxxxxxx>
Thanks,
-Andrea