Re: [PATCH sched_ext/for-7.0-fixes] sched_ext: Disable preemption between scx_claim_exit() and kicking helper work

From: Andrea Righi

Date: Wed Feb 25 2026 - 01:44:03 EST


On Tue, Feb 24, 2026 at 07:00:55PM -1000, Tejun Heo wrote:
> scx_claim_exit() atomically sets exit_kind, which prevents scx_error() from
> triggering further error handling. After claiming exit, the caller must kick
> the helper kthread work which initiates bypass mode and teardown.
>
> If the calling task gets preempted between claiming exit and kicking the
> helper work, and the BPF scheduler fails to schedule it back (since error
> handling is now disabled), the helper work is never queued, bypass mode
> never activates, tasks stop being dispatched, and the system wedges.
>
> Disable preemption across scx_claim_exit() and the subsequent work kicking
> in all callers - scx_disable() and scx_vexit(). Add
> lockdep_assert_preemption_disabled() to scx_claim_exit() to enforce the
> requirement.
>
> Fixes: a69040ed57f5 ("sched_ext: Simplify breather mechanism with scx_aborting flag")

I think the same race window existed even before this commit; we were
just calling atomic_try_cmpxchg() directly instead of going through the
scx_claim_exit() helper.

So the right target is probably f0e1a0643a59b ("sched_ext: Implement
BPF extensible scheduler class").

Apart from that, the fix looks good to me.

Reviewed-by: Andrea Righi <arighi@xxxxxxxxxx>

Thanks,
-Andrea

> Cc: stable@xxxxxxxxxxxxxxx # v6.19+
> Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
> ---
> kernel/sched/ext.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index c18e81e8ef51..9280381f8923 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -4423,10 +4423,19 @@ static void scx_disable_workfn(struct kthread_work *work)
> scx_bypass(false);
> }
>
> +/*
> + * Claim the exit on @sch. The caller must ensure that the helper kthread work
> + * is kicked before the current task can be preempted. Once exit_kind is
> + * claimed, scx_error() can no longer trigger, so if the current task gets
> + * preempted and the BPF scheduler fails to schedule it back, the helper work
> + * will never be kicked and the whole system can wedge.
> + */
> static bool scx_claim_exit(struct scx_sched *sch, enum scx_exit_kind kind)
> {
> int none = SCX_EXIT_NONE;
>
> + lockdep_assert_preemption_disabled();
> +
> if (!atomic_try_cmpxchg(&sch->exit_kind, &none, kind))
> return false;
>
> @@ -4449,6 +4458,7 @@ static void scx_disable(enum scx_exit_kind kind)
> rcu_read_lock();
> sch = rcu_dereference(scx_root);
> if (sch) {
> + guard(preempt)();
> scx_claim_exit(sch, kind);
> kthread_queue_work(sch->helper, &sch->disable_work);
> }
> @@ -4771,6 +4781,8 @@ static bool scx_vexit(struct scx_sched *sch,
> {
> struct scx_exit_info *ei = sch->exit_info;
>
> + guard(preempt)();
> +
> if (!scx_claim_exit(sch, kind))
> return false;
>
> --
> 2.53.0
>