[tip: sched/core] sched: Replace use of system_unbound_wq with system_dfl_wq

From: tip-bot2 for Marco Crivellari

Date: Tue Feb 24 2026 - 04:13:52 EST


The following commit has been merged into the sched/core branch of tip:

Commit-ID: c2a57380df9dd5df6fae11c6ba9f624b9cad3e6a
Gitweb: https://git.kernel.org/tip/c2a57380df9dd5df6fae11c6ba9f624b9cad3e6a
Author: Marco Crivellari <marco.crivellari@xxxxxxxx>
AuthorDate: Fri, 07 Nov 2025 10:24:52 +01:00
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Mon, 23 Feb 2026 18:04:11 +01:00

sched: Replace use of system_unbound_wq with system_dfl_wq

Currently, if a user enqueues a work item with schedule_delayed_work(), the
workqueue used is system_wq (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again uses
WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.
For more details see the Link tag below.

This continues the effort to refactor workqueue APIs, which began with
the introduction of new workqueues and a new alloc_workqueue flag in:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

Switch to using system_dfl_wq because system_unbound_wq is going away as part of
a workqueue restructuring.

Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
Signed-off-by: Marco Crivellari <marco.crivellari@xxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@xxxxxxxxxxxxx/
Link: https://patch.msgid.link/20251107092452.43399-1-marco.crivellari@xxxxxxxx
---
kernel/sched/core.c | 4 ++--
kernel/sched/ext.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c1..bfd280e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5678,7 +5678,7 @@ static void sched_tick_remote(struct work_struct *work)
 	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
 	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
 	if (os == TICK_SCHED_REMOTE_RUNNING)
-		queue_delayed_work(system_unbound_wq, dwork, HZ);
+		queue_delayed_work(system_dfl_wq, dwork, HZ);
 }

static void sched_tick_start(int cpu)
@@ -5697,7 +5697,7 @@ static void sched_tick_start(int cpu)
 	if (os == TICK_SCHED_REMOTE_OFFLINE) {
 		twork->cpu = cpu;
 		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
+		queue_delayed_work(system_dfl_wq, &twork->work, HZ);
 	}
 }

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 06cc0a4..a448a84 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2762,7 +2762,7 @@ static void scx_watchdog_workfn(struct work_struct *work)

 		cond_resched();
 	}
-	queue_delayed_work(system_unbound_wq, to_delayed_work(work),
+	queue_delayed_work(system_dfl_wq, to_delayed_work(work),
 			   scx_watchdog_timeout / 2);
 }

@@ -5059,7 +5059,7 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link)

 	WRITE_ONCE(scx_watchdog_timeout, timeout);
 	WRITE_ONCE(scx_watchdog_timestamp, jiffies);
-	queue_delayed_work(system_unbound_wq, &scx_watchdog_work,
+	queue_delayed_work(system_dfl_wq, &scx_watchdog_work,
 			   scx_watchdog_timeout / 2);

/*