Re: [PATCH] sched/fair: Fix inaccurate h_nr_runnable accounting with delayed dequeue

From: K Prateek Nayak
Date: Tue Jan 21 2025 - 03:09:58 EST


Hello Vincent,

On 1/18/2025 2:00 PM, Vincent Guittot wrote:
On Fri, 17 Jan 2025 at 16:59, K Prateek Nayak <kprateek.nayak@xxxxxxx> wrote:

Hello Vincent,

On 1/17/2025 6:55 PM, Vincent Guittot wrote:
Hi Prateek,

On Fri, 17 Jan 2025 at 11:59, K Prateek Nayak <kprateek.nayak@xxxxxxx> wrote:

set_delayed() adjusts cfs_rq->h_nr_runnable for the hierarchy when an
entity is delayed irrespective of whether the entity corresponds to a
task or a cfs_rq.

Consider the following scenario:

        root
       /    \
      A      B (*) delayed since B is no longer eligible on root
      |      |
    Task0  Task1 <--- dequeue_task_fair() - task blocks

When Task1 blocks (dequeue_entity() for the task's se returns true),
dequeue_entities() will continue adjusting cfs_rq->h_nr_* for the
hierarchy of Task1. However, when the sched_entity corresponding to
cfs_rq B is delayed, set_delayed() will adjust h_nr_runnable for the
hierarchy too, leading to both dequeue_entity() and set_delayed()
decrementing h_nr_runnable for the dequeue of the same task.
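
For reference, set_delayed() on tip:sched/core currently walks the
whole hierarchy unconditionally; a simplified sketch:

	static void set_delayed(struct sched_entity *se)
	{
		se->sched_delayed = 1;

		/*
		 * Decrements h_nr_runnable all the way up, even when se
		 * is a cfs_rq's entity and the dequeue_entities() loop
		 * will also account for the blocking task.
		 */
		for_each_sched_entity(se) {
			struct cfs_rq *cfs_rq = cfs_rq_of(se);

			cfs_rq->h_nr_runnable--;
			if (cfs_rq_is_idle(cfs_rq))
				break;
		}
	}

So when B's entity is delayed, every cfs_rq from B up to root loses one
more h_nr_runnable on top of the decrement dequeue_entities() already
performs for Task1's dequeue.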

A SCHED_WARN_ON() to inspect h_nr_runnable after its update in
dequeue_entities(), like below:

	cfs_rq->h_nr_runnable -= h_nr_runnable;
	SCHED_WARN_ON(((int) cfs_rq->h_nr_runnable) < 0);

is consistently tripped when running wakeup-intensive workloads like
hackbench in a cgroup.

This error is self-correcting since cfs_rqs are per-CPU and cannot
migrate: the entity is either picked for a full dequeue or is requeued
when a task wakes up below it. Both of those paths call clear_delayed(),
which again increments h_nr_runnable of the hierarchy without
considering whether the entity corresponds to a task or not.

h_nr_runnable will eventually reflect the correct value; in the
interim, however, the incorrect values can still influence the PELT
calculation, which uses se->runnable_weight or cfs_rq->h_nr_runnable.
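
For context, the cfs_rq runnable signal is driven directly by
h_nr_runnable; from kernel/sched/pelt.c (roughly, quoted from memory):

	int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
	{
		if (___update_load_sum(now, &cfs_rq->avg,
				scale_load_down(cfs_rq->load.weight),
				cfs_rq->h_nr_runnable,
				cfs_rq->curr != NULL)) {

			___update_load_avg(&cfs_rq->avg, 1);
			return 1;
		}

		return 0;
	}

so a transiently underflowed h_nr_runnable (it is unsigned) inflates
runnable_avg until the count self-corrects.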

Since only delayed tasks take the early-return path in
dequeue_entities() and enqueue_task_fair(), and that path skips the
h_nr_* update loops, adjust h_nr_runnable in {set,clear}_delayed() only
when the entity being delayed corresponds to a task.

For entities corresponding to a cfs_rq, the h_nr_* update loop in the
caller will do the right thing.
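
To make the intent concrete, a sketch of the change for set_delayed()
(clear_delayed() gets the symmetric early return):

	static void set_delayed(struct sched_entity *se)
	{
		se->sched_delayed = 1;

		/*
		 * Delayed se of cfs_rq have no tasks queued on them;
		 * dequeue_entities() accounts the blocked task itself,
		 * so only adjust h_nr_runnable when se is a task.
		 */
		if (!entity_is_task(se))
			return;

		for_each_sched_entity(se) {
			...
		}
	}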

Fixes: 76f2f783294d ("sched/eevdf: More PELT vs DELAYED_DEQUEUE")

You probably mean c2a295bffeaf ("sched/fair: Add new cfs_rq.h_nr_runnable")

You are right! I had done a git blame on set_delayed() and landed at
commit 76f2f783294d, but it should indeed be c2a295bffeaf
("sched/fair: Add new cfs_rq.h_nr_runnable"), where the accounting was
inverted to track runnable tasks. Thank you for pointing that out.

Before, we were tracking the opposite, h_nr_delayed. Did you see the
problem only on tip/sched/core, or also before the rework which added
h_nr_runnable and removed h_nr_delayed?

The problem is on tip:sched/core. I did not encounter any anomalies on
76f2f783294d ("sched/eevdf: More PELT vs DELAYED_DEQUEUE").

"h_nr_delayed" was only adjusted in dequeue_entities() for "!seep &&
!delayed" which would imply migration or a save + restore type operation
and the whole "h_nr_delayed" adjusting was contained in
{set,clear}_delayed() for delayed dequeue, finish delayed dequeue, and
requeue.
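
For reference, the relevant bit in dequeue_entities() at that commit
(quoted from memory):

	if (entity_is_task(se)) {
		p = task_of(se);
		h_nr_running = 1;
		idle_h_nr_running = task_has_idle_policy(p);
		if (!task_sleep && !task_delayed)
			h_nr_delayed = !!se->sched_delayed;
	}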

So I was looking at it wrong when investigating on commit 76f2f783294d
("sched/eevdf: More PELT vs DELAYED_DEQUEUE"): h_nr_delayed can never be
larger than h_nr_running (h_nr_queued upstream), since the number of
delayed tasks can never exceed the number of tasks queued below the
given cfs_rq. But with the following:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 97ee48c8bf5e..8e713f241483 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7145,6 +7145,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		cfs_rq->h_nr_running -= h_nr_running;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
+		SCHED_WARN_ON(cfs_rq->h_nr_delayed > cfs_rq->h_nr_running);
 
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = h_nr_running;
@@ -7185,6 +7186,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
+		SCHED_WARN_ON(cfs_rq->h_nr_delayed > cfs_rq->h_nr_running);
+
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = h_nr_running;
--

I can again consistently hit the warning without the fix on 76f2f783294d
("sched/eevdf: More PELT vs DELAYED_DEQUEUE").

I think that the original "Fixes:" tag is indeed right.



I'm going to have a closer look

Your fix looks good to me. I also ran some tests after re-adding
h_nr_delayed and checking that h_nr_queued == h_nr_runnable +
h_nr_delayed after each update, and I didn't get any warning with your
patch, whereas I got one during boot without it (but none after that
during my tests).

Could it be the case that h_nr_delayed counts a tiny bit higher than
the actual number and h_nr_runnable a tiny bit lower by the same
amount, and the two errors cancel out to give the correct
h_nr_queued?
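
(I imagine the check was something along the lines of the below, with
h_nr_delayed re-added locally:

	SCHED_WARN_ON(cfs_rq->h_nr_queued !=
		      cfs_rq->h_nr_runnable + cfs_rq->h_nr_delayed);

placed after each spot that updates the counts.)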


Thanks for catching this

Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

Thank you for reviewing the patch!



Thank you!



Reviewed-by: Gautham R. Shenoy <gautham.shenoy@xxxxxxx>
Tested-by: Swapnil Sapkal <swapnil.sapkal@xxxxxxx>
Signed-off-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
---

[..snip..]



--
Thanks and Regards,
Prateek