[RFC PATCH v5 0/9] steal tasks to improve CPU utilization

From: Chen Jinghuang

Date: Fri Mar 20 2026 - 02:20:36 EST


When a CPU has no more CFS tasks to run, and idle_balance() fails to
find a task, then attempt to steal a task from an overloaded CPU in the
same LLC. Maintain and use a bitmap of overloaded CPUs to efficiently
identify candidates. To minimize search time, steal the first migratable
task that is found when the bitmap is traversed. For fairness, search
for migratable tasks on an overloaded CPU in order of next to run.
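As a rough userspace sketch of that steal path (the `rq` layout, `update_overload`, and `try_steal` below are toy names for illustration, not the patch's actual interfaces): scan the overloaded-CPU bitmap, and on the first overloaded CPU pull the first task, in next-to-run order, that is allowed to run on the idle CPU.

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS   8
#define MAX_TASKS 4

struct task {
    int id;
    uint64_t cpus_allowed;      /* bit per CPU the task may run on */
};

struct rq {
    int nr;                     /* runnable tasks, in next-to-run order */
    struct task task[MAX_TASKS];
};

static struct rq rqs[NR_CPUS];
static uint64_t overload_mask;  /* CPUs with more than one runnable task */

static void update_overload(int cpu)
{
    if (rqs[cpu].nr > 1)
        overload_mask |= 1ULL << cpu;
    else
        overload_mask &= ~(1ULL << cpu);
}

/* Steal the first migratable task found on any overloaded CPU.
 * Returns the stolen task id, or -1 if nothing could be stolen. */
static int try_steal(int dst_cpu)
{
    for (int src = 0; src < NR_CPUS; src++) {
        if (src == dst_cpu || !(overload_mask & (1ULL << src)))
            continue;
        struct rq *src_rq = &rqs[src];
        /* Scan in order of next to run, for fairness. */
        for (int i = 0; i < src_rq->nr; i++) {
            if (!(src_rq->task[i].cpus_allowed & (1ULL << dst_cpu)))
                continue;
            struct task t = src_rq->task[i];
            for (int j = i; j < src_rq->nr - 1; j++)
                src_rq->task[j] = src_rq->task[j + 1];
            src_rq->nr--;
            update_overload(src);
            rqs[dst_cpu].task[rqs[dst_cpu].nr++] = t;
            update_overload(dst_cpu);
            return t.id;
        }
    }
    return -1;
}
```

The real code additionally restricts the scan to the stealer's LLC and takes the usual runqueue locks; both are omitted here for brevity.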

This simple stealing yields a higher CPU utilization than idle_balance()
alone, because the search is cheap, so it may be called every time the CPU
is about to go idle. idle_balance() does more work because it searches
widely for the busiest queue, so to limit its CPU consumption, it declines
to search if the system is too busy. Simple stealing does not offload the
globally busiest queue, but it is much better than running nothing at all.

The bitmap of overloaded CPUs is a new type of sparse bitmap, designed to
reduce cache contention vs the usual bitmap when many threads concurrently
set, clear, and visit elements.
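To illustrate the contention-avoidance idea (this is a minimal userspace sketch, not the patch's actual sparsemask layout or API), the mask below pads each 64-bit word out to its own cache line, so writers updating bits in different 64-CPU chunks never bounce the same line:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define CACHELINE     64
#define BITS_PER_WORD 64
#define NR_CPUS       160
#define NR_WORDS      ((NR_CPUS + BITS_PER_WORD - 1) / BITS_PER_WORD)

/* One mask word per cache line: a dense bitmap would pack all 160 bits
 * into three adjacent words on one or two lines, so every set/clear
 * would contend on the same cache lines. */
struct sparsemask_word {
    _Atomic uint64_t bits;
    char pad[CACHELINE - sizeof(_Atomic uint64_t)];
};

struct sparsemask {
    struct sparsemask_word word[NR_WORDS];
};

static void sparsemask_set(struct sparsemask *m, int elem)
{
    atomic_fetch_or(&m->word[elem / BITS_PER_WORD].bits,
                    1ULL << (elem % BITS_PER_WORD));
}

static void sparsemask_clear(struct sparsemask *m, int elem)
{
    atomic_fetch_and(&m->word[elem / BITS_PER_WORD].bits,
                     ~(1ULL << (elem % BITS_PER_WORD)));
}

/* Return the first set element, or -1 if the mask is empty. */
static int sparsemask_first(struct sparsemask *m)
{
    for (int w = 0; w < NR_WORDS; w++) {
        uint64_t bits = atomic_load(&m->word[w].bits);
        if (bits)
            return w * BITS_PER_WORD + __builtin_ctzll(bits);
    }
    return -1;
}
```

The trade-off is footprint for contention: the padded mask is CACHELINE/8 times larger than a dense bitmap, which is acceptable for one mask per LLC.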

Patch 1 defines the sparsemask type and its operations.

Patches 2, 3, and 4 implement the bitmap of overloaded CPUs.

Patches 5 and 6 refactor existing code for a cleaner merge of later
patches.

Patches 7 and 8 implement task stealing using the overloaded CPUs bitmap.

Patch 9 adds schedstats for comparing the new behavior to the old; it is
provided as a convenience for developers only, not for integration.

The patch series is based on kernel 7.0.0-rc1. It compiles, boots, and
runs with/without each of CONFIG_SCHED_SMT, CONFIG_SMP, CONFIG_SCHED_DEBUG,
and CONFIG_PREEMPT.

Stealing is controlled by the sched feature SCHED_STEAL, which is enabled
by default.

Stealing improves utilization with only a modest CPU overhead in scheduler
code. In the following experiment, hackbench is run with varying numbers
of groups (40 tasks per group), and the delta in /proc/schedstat is shown
for each run, averaged per CPU, augmented with these non-standard stats:

steal - number of times a task is stolen from another CPU.

X6-2: 2 socket * 40 cores * 2 hyperthreads = 160 CPUs
Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
hackbench <grps> process 100000

baseline
grps   time  %busy   sched    idle    wake   steal
   1  2.182  20.00   35876   17905   17958       0
   2  2.391  39.00   67753   33808   33921       0
   3  2.871  47.00  100944   48966   51538       0
   4  2.928  62.00  114489   55171   59059       0
   8  4.852  83.00  219907   92961  121703       0

new
grps   time  %busy   sched    idle    wake   steal  %speedup
   1  2.229  18.00   45450   22691   22751      52      -2.1
   2  2.123  40.00   49975   24977   24990       6      12.6
   3  2.690  61.00   56118   22641   32780    9073       6.7
   4  2.828  80.00   37927   12828   24165    8442       3.5
   8  4.120  95.00   85929    8613   57858   11098      17.8

Elapsed time improves by up to 17.8%, and CPU busy utilization rises by
1 to 18 points, reaching 95% at peak load.

Note that all test results presented below are based on the
NO_DELAY_DEQUEUE implementation. Although I have implemented the necessary
adaptations to support DELAY_DEQUEUE, I observed a noticeable performance
regression in hackbench when both DELAY_DEQUEUE and SCHED_STEAL are enabled
simultaneously, specifically in heavily overloaded scenarios where the
number of tasks far exceeds the number of CPUs. Any suggestions on how to
address this would be appreciated.

---

Changes from v1 to v2:
- Remove stray find_time hunk from patch 5
- Fix "warning: label out defined but not used" for !CONFIG_SCHED_SMT
- Set SCHED_STEAL_NODE_LIMIT_DEFAULT to 2
- Steal iff avg_idle exceeds the cost of stealing

Changes from v2 to v3:
- Update series for kernel 4.20. Context changes only.

Changes from v3 to v4:
- Avoid 64-bit division on 32-bit processors in compute_skid()
- Replace IF_SMP with inline functions to set idle_stamp
- Push ZALLOC_MASK body into calling function
- Set rq->cfs_overload_cpus in update_top_cache_domain instead of
cpu_attach_domain
- Rewrite sparsemask iterator for complete inlining
- Cull and clean up sparsemask functions and move all into
sched/sparsemask.h

Changes from v4 to v5:
- Adapt overload_set and overload_clear to support the DELAY_DEQUEUE
feature.
- Rebase onto kernel v7.0.0-rc1 and refresh all benchmark results.
- Remove the previous Patch 9 ("sched/fair: disable stealing if too many
NUMA nodes").

Steve Sistare (9):
sched: Provide sparsemask, a reduced contention bitmap
sched/topology: Provide hooks to allocate data shared per LLC
sched/topology: Provide cfs_overload_cpus bitmap
sched/fair: Dynamically update cfs_overload_cpus
sched/fair: Hoist idle_stamp up from idle_balance
sched/fair: Generalize the detach_task interface
sched/fair: Provide can_migrate_task_llc
sched/fair: Steal work from an overloaded CPU when CPU goes idle
sched/fair: Provide idle search schedstats

include/linux/sched/topology.h | 1 +
kernel/sched/core.c | 31 ++-
kernel/sched/fair.c | 340 +++++++++++++++++++++++++++++++--
kernel/sched/features.h | 6 +
kernel/sched/sched.h | 11 ++
kernel/sched/sparsemask.h | 210 ++++++++++++++++++++
kernel/sched/stats.c | 9 +
kernel/sched/stats.h | 13 ++
kernel/sched/topology.c | 96 +++++++++-
9 files changed, 694 insertions(+), 23 deletions(-)
create mode 100644 kernel/sched/sparsemask.h

--
2.34.1