[PATCHSET v2 sched_ext/for-7.1] sched_ext: Implement SCX_ENQ_IMMED

From: Tejun Heo

Date: Fri Mar 13 2026 - 07:31:20 EST


Hello,

Currently, BPF schedulers that want to ensure tasks don't linger on local
DSQs behind other tasks or on CPUs taken by higher-priority scheduling
classes must resort to hooking the sched_switch tracepoint or implementing
the now-deprecated ops.cpu_acquire/release(). Both approaches are cumbersome
and partial - sched_switch doesn't handle cases where a local DSQ ends up
with multiple tasks queued, which is difficult to prevent reliably.
cpu_release() is even more limited, missing cases such as a higher-priority
task waking up while an idle CPU is being woken up to run an SCX task.
Neither can
atomically determine whether a CPU is truly available at the moment of
dispatch.

SCX_ENQ_IMMED replaces these with a single dispatch flag that provides a
kernel-enforced guarantee: a task dispatched with IMMED either gets on the
CPU immediately, or gets reenqueued to the BPF scheduler. It will never
linger on a local DSQ behind other tasks or be silently put back after
preemption. This gives BPF schedulers comprehensive latency control directly
in the dispatch path.
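
For illustration, here's roughly what using the flag looks like from a BPF
scheduler. This is a sketch rather than code from the series -
is_latency_critical() and SHARED_DSQ are made-up placeholders; the rest is
the existing scx_bpf_dsq_insert() kfunc plus the new flag:

void BPF_STRUCT_OPS(myscx_enqueue, struct task_struct *p, u64 enq_flags)
{
        /*
         * A latency-critical task either starts running on the local
         * CPU right away or comes back through this callback. It never
         * sits behind other tasks on the local DSQ and is never
         * silently put back after preemption.
         */
        if (is_latency_critical(p)) {
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
                                   enq_flags | SCX_ENQ_IMMED);
                return;
        }

        /* everything else takes the usual queued path */
        scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
}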

The protection is persistent - it survives SAVE/RESTORE cycles, slice
extensions, and higher-priority class preemptions. If an IMMED task is
preempted while running, it gets reenqueued through ops.enqueue() with
SCX_TASK_REENQ_PREEMPTED instead of being silently placed back on the
local DSQ.
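
Extending the enqueue sketch above, a scheduler could react to such a
reenqueue by moving the task to an idle CPU. Whether the flag is reported
through enq_flags or through p->scx.flags isn't spelled out here - the
snippet assumes enq_flags, and SHARED_DSQ is again a placeholder:

void BPF_STRUCT_OPS(myscx_enqueue, struct task_struct *p, u64 enq_flags)
{
        if (enq_flags & SCX_TASK_REENQ_PREEMPTED) {
                /*
                 * p was running with IMMED protection and lost the CPU
                 * to a higher-priority class. Try to restart it on an
                 * idle CPU instead of letting it queue up behind
                 * whatever took this one.
                 */
                s32 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);

                if (cpu >= 0) {
                        scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | cpu,
                                           SCX_SLICE_DFL,
                                           enq_flags | SCX_ENQ_IMMED);
                        scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
                        return;
                }
        }

        scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
}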

This also enables opportunistic CPU sharing across sub-schedulers. Without
IMMED, a sub-scheduler can stuff the local DSQ of a shared CPU, making it
difficult for other sub-schedulers to use that CPU. With IMMED, tasks only
stay on a CPU when they can actually run, keeping shared CPUs available to
everyone.

Patches 1-2 are prep refactoring. Patch 3 implements SCX_ENQ_IMMED. Patches
4-5 plumb enq_flags through the consume and move_to_local paths so IMMED
works on those paths too. Patch 6 adds SCX_OPS_ALWAYS_ENQ_IMMED.
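
To illustrate the new surfaces from patches 4-6: with the extra argument
from #5, a dispatch path can keep the same guarantee while consuming from a
shared DSQ, and a scheduler that wants this behavior everywhere can set the
ops flag from #6 instead of passing SCX_ENQ_IMMED on each dispatch. Again a
sketch with placeholder myscx_* names:

void BPF_STRUCT_OPS(myscx_dispatch, s32 cpu, struct task_struct *prev)
{
        /*
         * Pull the next queued task onto this CPU. With SCX_ENQ_IMMED,
         * the moved task runs now or comes back via ops.enqueue().
         */
        scx_bpf_dsq_move_to_local(SHARED_DSQ, SCX_ENQ_IMMED);
}

SCX_OPS_DEFINE(myscx_ops,
               .enqueue         = (void *)myscx_enqueue,
               .dispatch        = (void *)myscx_dispatch,
               /* treat every dispatch as if SCX_ENQ_IMMED were set */
               .flags           = SCX_OPS_ALWAYS_ENQ_IMMED,
               .name            = "myscx");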

v2: - Split prep patches out of the main IMMED patch (#1, #2).
    - Rewrite is_curr_done() as rq_is_open() using rq->next_class and
      implement wakeup_preempt_scx() for complete higher-class preemption
      coverage (#3).
    - Track IMMED persistently in p->scx.flags and reenqueue
      preempted-while-running tasks through ops.enqueue() (#3).
    - Drop the "disallow setting slice to zero" patch - no longer needed
      with the rq_is_open() approach.
    - Plumb enq_flags through the consume and move_to_local paths (#4, #5).
    - Cover scx_bpf_dsq_move_to_local() in SCX_OPS_ALWAYS_ENQ_IMMED (#6).
    - Remove the obsolete sched_switch tracepoint and cpu_release handlers
      from scx_qmap and add an IMMED stress test (#6) (Andrea Righi).

v1: https://lore.kernel.org/r/20260307002817.1298341-1-tj@xxxxxxxxxx

Based on sched_ext/for-7.1 (bd377af09701).

0001-sched_ext-Split-task_should_reenq-into-local-and-use.patch
0002-sched_ext-Add-scx_vet_enq_flags-and-plumb-dsq_id-int.patch
0003-sched_ext-Implement-SCX_ENQ_IMMED.patch
0004-sched_ext-Plumb-enq_flags-through-the-consume-path.patch
0005-sched_ext-Add-enq_flags-to-scx_bpf_dsq_move_to_local.patch
0006-sched_ext-Add-SCX_OPS_ALWAYS_ENQ_IMMED-ops-flag.patch

Git tree:

git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git scx-enq-immed-v2

 include/linux/sched/ext.h                |   5 +
 kernel/sched/ext.c                       | 350 +++++++++++++++++++++++++++----
 kernel/sched/ext_internal.h              |  56 ++++-
 kernel/sched/sched.h                     |   2 +
 tools/sched_ext/include/scx/compat.bpf.h |  20 +-
 tools/sched_ext/include/scx/compat.h     |   1 +
 tools/sched_ext/scx_central.bpf.c        |   4 +-
 tools/sched_ext/scx_cpu0.bpf.c           |   2 +-
 tools/sched_ext/scx_flatcg.bpf.c         |   6 +-
 tools/sched_ext/scx_qmap.bpf.c           |  70 +++----
 tools/sched_ext/scx_qmap.c               |  13 +-
 tools/sched_ext/scx_sdt.bpf.c            |   2 +-
 tools/sched_ext/scx_simple.bpf.c         |   2 +-
 13 files changed, 435 insertions(+), 98 deletions(-)

--
tejun