[PATCH] sched_ext: Use READ_ONCE() for plain reads of scx_watchdog_timeout

From: zhidao su

Date: Tue Mar 03 2026 - 01:05:34 EST


scx_watchdog_timeout is written with WRITE_ONCE() in scx_enable():

WRITE_ONCE(scx_watchdog_timeout, timeout);

However, three read-side accesses use plain reads without the matching
READ_ONCE():

/* check_rq_for_timeouts() - L2824 */
last_runnable + scx_watchdog_timeout

/* scx_watchdog_workfn() - L2852 */
scx_watchdog_timeout / 2

/* scx_enable() - L5179 */
scx_watchdog_timeout / 2

The KCSAN documentation requires that when one side of a lock-free
access is annotated (here, the WRITE_ONCE() store), all concurrent
accesses to the same variable must be marked as well, i.e. READ_ONCE()
on the read side. A plain read paired with a marked write leaves the
annotation incomplete and can trigger KCSAN data-race reports.

Note that scx_tick() already uses the correct READ_ONCE() annotation:

last_check + READ_ONCE(scx_watchdog_timeout)

Fix the three remaining plain reads to match, making all accesses to
scx_watchdog_timeout consistently annotated and KCSAN-clean.

Signed-off-by: zhidao su <suzhidao@xxxxxxxxxx>
---
kernel/sched/ext.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 147b31a7b3cf..b9247c9f0430 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2821,7 +2821,7 @@ static bool check_rq_for_timeouts(struct rq *rq)
unsigned long last_runnable = p->scx.runnable_at;

if (unlikely(time_after(jiffies,
- last_runnable + scx_watchdog_timeout))) {
+ last_runnable + READ_ONCE(scx_watchdog_timeout)))) {
u32 dur_ms = jiffies_to_msecs(jiffies - last_runnable);

scx_exit(sch, SCX_EXIT_ERROR_STALL, 0,
@@ -2849,7 +2849,7 @@ static void scx_watchdog_workfn(struct work_struct *work)
cond_resched();
}
queue_delayed_work(system_unbound_wq, to_delayed_work(work),
- scx_watchdog_timeout / 2);
+ READ_ONCE(scx_watchdog_timeout) / 2);
}

void scx_tick(struct rq *rq)
@@ -5176,7 +5176,7 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link)
WRITE_ONCE(scx_watchdog_timeout, timeout);
WRITE_ONCE(scx_watchdog_timestamp, jiffies);
queue_delayed_work(system_unbound_wq, &scx_watchdog_work,
- scx_watchdog_timeout / 2);
+ READ_ONCE(scx_watchdog_timeout) / 2);

/*
* Once __scx_enabled is set, %current can be switched to SCX anytime.
--
2.43.0