[tip: sched/core] sched: Add assert_balance_callbacks_empty helper

From: tip-bot2 for John Stultz

Date: Fri Apr 03 2026 - 08:32:57 EST


The following commit has been merged into the sched/core branch of tip:

Commit-ID: f9530b3183358bbf945f7c20d4a6e2048061ec50
Gitweb: https://git.kernel.org/tip/f9530b3183358bbf945f7c20d4a6e2048061ec50
Author: John Stultz <jstultz@xxxxxxxxxx>
AuthorDate: Tue, 24 Mar 2026 19:13:22
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Fri, 03 Apr 2026 14:23:40 +02:00

sched: Add assert_balance_callbacks_empty helper

With proxy-exec utilizing pick-again logic, we can end up having
balance callbacks set by the previous pick_next_task() call left
on the list.

So pull the warning out into a helper function, and make sure we
check it when we pick again.

Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: John Stultz <jstultz@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
Link: https://patch.msgid.link/20260324191337.1841376-8-jstultz@xxxxxxxxxx
---
kernel/sched/core.c | 1 +
kernel/sched/sched.h | 9 ++++++++-
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c997d51..acb5894 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6853,6 +6853,7 @@ static void __sched notrace __schedule(int sched_mode)
 	}
 
 pick_again:
+	assert_balance_callbacks_empty(rq);
 	next = pick_next_task(rq, rq->donor, &rf);
 	rq->next_class = next->sched_class;
 	if (sched_proxy_exec()) {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b863bbd..a2629d0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1857,6 +1857,13 @@ static inline void scx_rq_clock_update(struct rq *rq, u64 clock) {}
 static inline void scx_rq_clock_invalidate(struct rq *rq) {}
 #endif /* !CONFIG_SCHED_CLASS_EXT */
 
+static inline void assert_balance_callbacks_empty(struct rq *rq)
+{
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_LOCKING) &&
+		     rq->balance_callback &&
+		     rq->balance_callback != &balance_push_callback);
+}
+
 /*
  * Lockdep annotation that avoids accidental unlocks; it's like a
  * sticky/continuous lockdep_assert_held().
@@ -1873,7 +1880,7 @@ static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 
 	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 	rf->clock_update_flags = 0;
-	WARN_ON_ONCE(rq->balance_callback && rq->balance_callback != &balance_push_callback);
+	assert_balance_callbacks_empty(rq);
 }
 
 static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)