Re: [PATCH v2 6/7] sched/fair: Revert 6d71a9c61604 ("sched/fair: Fix EEVDF entity placement bug causing scheduling lag")

From: William Montaz

Date: Tue Mar 24 2026 - 06:15:22 EST


Hi,

> Zicheng Qu reported that, because avg_vruntime() always includes
> cfs_rq->curr, when ->on_rq, place_entity() doesn't work right.

> Specifically, the lag scaling in place_entity() relies on
> avg_vruntime() being the state *before* placement of the new entity.
> However in this case avg_vruntime() will actually already include the
> entity, which breaks things.

This has proven to be harmful on our production cluster, which runs kernel version 6.18.19.

We witness a parent cgroup entity (/kubepods.slice in our case) whose load_avg figures change very frequently,
which leads to entity_pick->update_cfs_group->reweight_entity being called very often (on practically every entity_tick call).

If a CPU-hogging task is a member of this cgroup and bound to a CPU,
we observe starvation of processes that are bound to the same CPU but are not members of this cgroup
(kworkers for Ceph RBD in our production case).

Looking at /sys/kernel/debug/sched/debug, we can indeed see that cfs_rq[0]:/ .avg_vruntime and .zero_vruntime
continuously move back in time while .left_deadline and .left_vruntime are stuck.

This is likely due to an incorrect lag calculation for the cgroup entity within the root cgroup.

We can reproduce that in a sandboxed manner by doing the following:
* create a cgroup 'CG'
* run a CPU-intensive task 'offender', bound to a CPU
* move the task to cgroup 'CG'
* run a CPU-intensive task 'victim', bound to the same CPU
* to trigger the frequent calls to reweight_entity, rapidly cycle CG/cpu.weight through 99, 100, and 101 in a loop
* 'victim' stops running

I use the following script to reproduce:

---
#!/bin/bash
TARGET_CPU=0
CG_PATH="/sys/fs/cgroup/test_reweight"

cat << 'EOF' > heartbeat.c
#include <stdio.h>
#include <time.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	struct timespec last, now;
	uint64_t count = 0;

	clock_gettime(CLOCK_MONOTONIC, &last);
	while (1) {
		count++;
		clock_gettime(CLOCK_MONOTONIC, &now);
		long delta_ms = (now.tv_sec - last.tv_sec) * 1000 +
				(now.tv_nsec - last.tv_nsec) / 1000000;
		if (delta_ms >= 500) {
			printf("Tick: %" PRIu64 " iterations (delta %ld ms)\n",
			       count, delta_ms);
			fflush(stdout);
			count = 0;
			last = now;
		}
	}
	return 0;
}
EOF

gcc -O2 heartbeat.c -o heartbeat

mkdir -p "$CG_PATH"
echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control

taskset -c $TARGET_CPU yes > /dev/null &
PID_YES=$!
echo $PID_YES > "$CG_PATH/cgroup.procs"

taskset -c $TARGET_CPU ./heartbeat &
PID_HEARTBEAT=$!

echo "5 seconds observation..."
sleep 5

echo "Jittering on $CG_PATH/cpu.weight..."
trap "kill $PID_YES $PID_HEARTBEAT; rmdir $CG_PATH; rm heartbeat.c; rm heartbeat; exit" SIGINT SIGTERM
while true; do
echo 99 > "$CG_PATH/cpu.weight"
echo 100 > "$CG_PATH/cpu.weight"
echo 101 > "$CG_PATH/cpu.weight"
done
---

I tested the following versions:
* LTS 5.10.252, 5.15.202, 6.1.166, 6.6.129, 6.12.77 --> no issue
* LTS 6.18.19 has the issue
* Stable 6.19.9 has the issue
* Mainline 7.0-rc5 has the issue
* Tip 7.0.0-rc5+ no issue

Finally, I applied the patch to 6.18.19 LTS, which solves the issue. However, we do not benefit from the earlier patches
in the series, such as [PATCH v2 5/7] sched/fair: Increase weight bits for avg_vruntime.

Thus I would prefer to let you decide how you want to address the backport to 6.18.

If you want, I can share my patch file; let me know.

Best regards