[PATCH 0/4] sched/rt: mitigate root_domain cache line contention

From: Pan Deng
Date: Sun Jul 06 2025 - 22:31:01 EST


From: Deng Pan <pan.deng@xxxxxxxxx>

When running a multi-instance FFmpeg workload in a cloud environment,
cache line contention on the root_domain data structures is severe and
significantly degrades performance.

The SUT is a 2-socket machine with 240 physical cores and 480 logical
CPUs. 60 FFmpeg instances are launched, each pinned to 4 physical cores
(8 logical CPUs) for transcoding tasks. Sub-threads run with FIFO
scheduling at RT priority 99. FPS is used as the score.

Profiling shows the kernel consuming ~20% of CPU cycles, which is
excessive in this scenario. The overhead comes primarily from RT
scheduling functions such as `cpupri_set`, `cpupri_find_fitness`,
`dequeue_pushable_task`, `enqueue_pushable_task`, `pull_rt_task`,
`__find_first_and_bit`, and `__bitmap_and`, and is caused by read/write
contention on root_domain cache lines.
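
For reference, the structures involved look roughly like this upstream
(simplified from kernel/sched/cpupri.h and kernel/sched/sched.h; fields
abridged, and exact cache line placement depends on config/alignment):

    struct cpupri_vec {
            atomic_t          count;        /* nr of CPUs at this priority */
            cpumask_var_t     mask;         /* which CPUs they are */
    };

    struct cpupri {
            struct cpupri_vec pri_to_cpu[CPUPRI_NR_PRIORITIES];
            int              *cpu_to_pri;
    };

    struct root_domain {
            atomic_t          refcount;
            atomic_t          rto_count;     /* "cache line 1", near...   */
            ...
            bool              overloaded;    /* ...this read-mostly flag  */
            ...
            struct irq_work   rto_push_work; /* "cache line 3" holds the  */
            raw_spinlock_t    rto_lock;      /* push-IPI state, rto_mask, */
            int               rto_loop;      /* and, via the embedded     */
            int               rto_cpu;       /* cpupri, pri_to_cpu[0]     */
            atomic_t          rto_loop_next;
            atomic_t          rto_loop_start;
            cpumask_var_t     rto_mask;
            struct cpupri     cpupri;
            ...
    };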

The `perf c2c` report, sorted by contention severity, reveals:

root_domain cache line 3:
- `cpupri->pri_to_cpu[0].count` is heavily loaded/stored: counts[0] is
  updated more often than the other elements, since it changes whenever
  an RT task enqueues onto a runqueue with no queued RT tasks or
  dequeues from a non-overloaded runqueue
- `rto_mask` is heavily loaded
- `rto_loop_next` and `rto_loop_start` are frequently stored
- `rto_push_work` and `rto_lock` are lightly accessed
- cycles per load: ~10K to 59K
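
For context, the `rto_loop_next`/`rto_loop_start` stores come from the
RT push-IPI path invoked by `pull_rt_task`; condensed from
tell_cpu_to_push() in kernel/sched/rt.c (refcounting and details
elided):

    static void tell_cpu_to_push(struct rq *rq)
    {
            int cpu = -1;

            /* Bump the generation: an already-active IPI loop re-runs */
            atomic_inc(&rq->rd->rto_loop_next);

            /* Only one CPU at a time may start the loop */
            if (!rto_start_trylock(&rq->rd->rto_loop_start))
                    return;

            raw_spin_lock(&rq->rd->rto_lock);
            if (rq->rd->rto_cpu < 0)
                    cpu = rto_next_cpu(rq->rd);  /* scans rd->rto_mask */
            raw_spin_unlock(&rq->rd->rto_lock);

            rto_start_unlock(&rq->rd->rto_loop_start);

            if (cpu >= 0)
                    irq_work_queue_on(&rq->rd->rto_push_work, cpu);
    }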

root_domain cache line 1:
- `rto_count` is frequently loaded/stored
- `overloaded` is heavily loaded
- cycles per load: ~2.8K to 44K

cpumask (bitmap) cache line of cpupri_vec->mask:
- bits are loaded during cpupri_find
- bits are stored during cpupri_set
- cycles per load: ~2.2K to 8.7K
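
Both sides of that traffic are visible in kernel/sched/cpupri.c;
roughly (condensed, memory barriers elided, vector locals renamed for
clarity):

    /* Reader side, __cpupri_find(): runs once per vector scanned */
    if (!atomic_read(&vec->count))                      /* load count */
            return 0;
    cpumask_and(lowest_mask, &p->cpus_mask, vec->mask); /* load bits  */

    /* Writer side, cpupri_set(): a CPU moves between two vectors */
    cpumask_set_cpu(cpu, new_vec->mask);                /* store bits */
    atomic_inc(&new_vec->count);
    atomic_dec(&old_vec->count);
    cpumask_clear_cpu(cpu, old_vec->mask);              /* store bits */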

The last cache line of cpupri:
- `cpupri_vec->count` and `mask` contend with each other. The
  transcoding threads run at FIFO priority 99, which maps to the
  vectors at the end of pri_to_cpu[], so the contention lands on
  cpupri's last cache line
- cycles per load: ~1.5K to 10.5K
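
That placement follows from the priority-to-vector mapping: a CPU with
no queued RT task reports prio MAX_RT_PRIO-1 and sits in vector 0 (the
hot counts[0] above), while a FIFO-99 task has internal prio 0 and maps
to vector 99, at the end of pri_to_cpu[]. Condensed from
convert_prio() in kernel/sched/cpupri.c:

    static int convert_prio(int prio)
    {
            switch (prio) {
            case CPUPRI_INVALID:
                    return CPUPRI_INVALID;          /* -1 */
            case 0 ... 98:
                    return MAX_RT_PRIO - 1 - prio;  /* RT: 99 ... 1  */
            case MAX_RT_PRIO - 1:
                    return CPUPRI_NORMAL;           /* 0: no RT task */
            default:
                    return CPUPRI_HIGHER;           /* 100 */
            }
    }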

Based on the above, we propose 4 patches to mitigate the contention
(rough sketches of the ideas behind patches 1 and 3 follow the list):
Patch 1: Reorganize `cpupri_vec`, separating the `count` and `mask`
         fields to reduce contention on root_domain cache line 3 and
         cpupri's last cache line.
Patch 2: Restructure `root_domain`, reordering fields to minimize
         contention on root_domain cache lines 1 and 3.
Patch 3: Split `root_domain->rto_count` into per-NUMA-node counters,
         reducing contention on root_domain cache line 1.
Patch 4: Split `cpupri_vec->cpumask` into per-NUMA-node bitmaps,
         reducing load/store contention on the cpumask bitmap cache
         line.
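
To illustrate the direction, a minimal sketch of the ideas behind
patches 1 and 3 (not the actual patches; `rto_counts`, `rto_count_inc`
and `rto_count_sum` are illustrative names):

    /*
     * Patch 1 direction: stop the write-hot count and the read-mostly
     * mask from sharing a cache line that then ping-pongs between
     * writers and readers.
     */
    struct cpupri_vec {
            atomic_t        count ____cacheline_aligned;
            cpumask_var_t   mask  ____cacheline_aligned;
    };

    /*
     * Patch 3 direction: replace the single bouncing rd->rto_count
     * with per-NUMA-node counters (an atomic_t array with one slot
     * per node): writers touch a node-local line, readers sum nodes.
     */
    static inline void rto_count_inc(struct root_domain *rd)
    {
            atomic_inc(&rd->rto_counts[numa_node_id()]);
    }

    static inline int rto_count_sum(struct root_domain *rd)
    {
            int node, sum = 0;

            for_each_node(node)
                    sum += atomic_read(&rd->rto_counts[node]);
            return sum;
    }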

Evaluation:

Performance improvements (FPS, relative to baseline):
- Patch 1: +11.0%
- Patch 2: +5.0%
- Patch 3: +4.0%
- Patch 4: +3.8%

Kernel CPU cycle usage reduction:
- Patch 1: 20.0% -> 11.0%
- Patch 2: 20.0% -> 17.7%
- Patch 3: 20.0% -> 18.6%
- Patch 4: 20.0% -> 18.7%

Cycles per load reduction (per the perf c2c report):
- Patch 1:
  - `root_domain` cache line 3: 10K–59K -> 0.5K–8K
  - `cpupri` last cache line: 1.5K–10.5K -> eliminated
- Patch 2:
  - `root_domain` cache line 1: 2.8K–44K -> 2.1K–2.7K
  - `root_domain` cache line 3: 10K–59K -> eliminated
- Patch 3:
  - `root_domain` cache line 1: 2.8K–44K -> eliminated
- Patch 4:
  - `cpupri_vec->mask` cache line: 2.2K–8.7K -> 0.5K–2.2K

Comments are appreciated.

Pan Deng (4):
sched/rt: Optimize cpupri_vec layout to mitigate cache line contention
sched/rt: Restructure root_domain to reduce cacheline contention
sched/rt: Split root_domain->rto_count to per-NUMA-node counters
sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce
contention

 kernel/sched/cpupri.c   | 200 ++++++++++++++++++++++++++++++++++++----
 kernel/sched/cpupri.h   |   6 +-
 kernel/sched/rt.c       |  65 ++++++++++++-
 kernel/sched/sched.h    |  61 ++++++------
 kernel/sched/topology.c |   7 ++
 5 files changed, 291 insertions(+), 48 deletions(-)

--
2.43.5