Re: [PATCH] sched/fair: vruntime should normalize when switching from fair

From: Dietmar Eggemann
Date: Thu Aug 23 2018 - 12:52:15 EST


Hi,

On 08/21/2018 01:54 AM, Miguel de Dios wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
From: John Dias <joaodias@xxxxxxxxxx>

When rt_mutex_setprio changes a task's scheduling class to RT,
we're seeing cases where the task's vruntime is not updated
correctly upon return to the fair class.
Specifically, the following is being observed:
- task is deactivated while still in the fair class
- task is boosted to RT via rt_mutex_setprio, which changes
  the task to RT and calls check_class_changed.
- check_class_changed leads to detach_task_cfs_rq, at which point
  the vruntime_normalized check sees that the task's state is TASK_WAKING,
  which results in skipping the subtraction of the rq's min_vruntime
  from the task's vruntime
- later, when the prio is deboosted and the task is moved back
  to the fair class, the fair rq's min_vruntime is added to
  the task's vruntime, even though it wasn't subtracted earlier.
The immediate result is inflation of the task's vruntime, giving
it lower priority (starving it if there's enough available work).
The longer-term effect is inflation of all vruntimes because the
task's vruntime becomes the rq's min_vruntime when the higher
priority tasks go idle. That leads to a vicious cycle, where
the vruntime inflation repeatedly doubles.

The change here is to detect when vruntime_normalized is being
called when the task is waking but is waking in another class,
and to conclude that this is a case where vruntime has not
been normalized.

Signed-off-by: John Dias <joaodias@xxxxxxxxxx>
Signed-off-by: Steve Muckle <smuckle@xxxxxxxxxx>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b39fb596f6c1..14011d7929d8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9638,7 +9638,8 @@ static inline bool vruntime_normalized(struct task_struct *p)
      * - A task which has been woken up by try_to_wake_up() and
      *   waiting for actually being woken up by sched_ttwu_pending().
      */
-    if (!se->sum_exec_runtime || p->state == TASK_WAKING)
+    if (!se->sum_exec_runtime ||
+        (p->state == TASK_WAKING && p->sched_class == &fair_sched_class))
         return true;
     return false;
The normalization of vruntime used to exist in task_waking, but it was removed and the normalization was moved into migrate_task_rq_fair. The reasoning was that task_waking_fair was only hit when a task was queued onto a different core, and migrate_task_rq_fair should do the same work.
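For reference, a simplified sketch of where that normalization lives now (not the exact kernel code; the 32-bit min_vruntime copy and the PELT handling are elided):

static void migrate_task_rq_fair(struct task_struct *p)
{
        /*
         * Blocked tasks keep an absolute vruntime, so migration subtracts
         * the old rq's min_vruntime here; enqueue_entity() on the new rq
         * adds the new one back. Only mid-wakeup tasks are handled.
         */
        if (p->state == TASK_WAKING) {
                struct sched_entity *se = &p->se;
                struct cfs_rq *cfs_rq = cfs_rq_of(se);

                se->vruntime -= cfs_rq->min_vruntime;
        }

        /* ... removal from the old CPU's load tracking elided ... */
}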

However, we're finding that there's one case which migrate_task_rq_fair doesn't hit: the case where rt_mutex_setprio changes a task's scheduling class to RT while it's scheduled out. The task never hits migrate_task_rq_fair because it is switched to RT and migrates as an RT task. Because of this we're getting an unbounded addition of min_vruntime when the task is re-attached to the CFS runqueue after it loses the inherited priority. The patch above works because the kernel now specifically checks for this case and normalizes accordingly.
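To spell out the asymmetry (simplified from detach_task_cfs_rq()/attach_task_cfs_rq(); the PELT attach/detach is elided): the subtraction and the addition are guarded by the same vruntime_normalized() check, so a spurious "normalized" answer at detach time leaves an unpaired addition at attach time.

static void detach_task_cfs_rq(struct task_struct *p)
{
        struct sched_entity *se = &p->se;
        struct cfs_rq *cfs_rq = cfs_rq_of(se);

        if (!vruntime_normalized(p)) {
                /* skipped in the buggy case: p->state == TASK_WAKING */
                place_entity(cfs_rq, se, 0);
                se->vruntime -= cfs_rq->min_vruntime;
        }
        /* detach_entity_cfs_rq(se); */
}

static void attach_task_cfs_rq(struct task_struct *p)
{
        struct sched_entity *se = &p->se;
        struct cfs_rq *cfs_rq = cfs_rq_of(se);

        /* attach_entity_cfs_rq(se); */

        if (!vruntime_normalized(p))
                /* still executed at deboost time -> unpaired addition */
                se->vruntime += cfs_rq->min_vruntime;
}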

Here's the patch I was talking about: https://lore.kernel.org/patchwork/patch/677689/. In our testing we were seeing vruntimes nearly double every time after rt_mutex_setprio boosts the task to RT.
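To make the "nearly double" concrete with purely illustrative numbers: if min_vruntime is around 100ms when the unpaired addition happens, the victim ends up near 200ms; once the other tasks go idle, that 200ms becomes the rq's min_vruntime, so the next boost/deboost cycle pushes the task to roughly 400ms, and so on. That's why the growth looks geometric rather than linear.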

Signed-off-by: Miguel de Dios <migueldedios@xxxxxxxxxx>
Tested-by: Miguel de Dios <migueldedios@xxxxxxxxxx>

I tried to catch this issue on my Arm64 Juno board using pi_stress (and a slightly adapted pip_stress with usleep_val = 1500 and the low task kept as cfs) from rt-tests, but wasn't able to do so.

# pi_stress --inversions=1 --duration=1 --groups=1 --sched id=low,policy=cfs

Starting PI Stress Test
Number of thread groups: 1
Duration of test run: 1 seconds
Number of inversions per group: 1
Admin thread SCHED_FIFO priority 4
1 groups of 3 threads will be created
High thread SCHED_FIFO priority 3
Med thread SCHED_FIFO priority 2
Low thread SCHED_OTHER nice 0

# ./pip_stress

In both cases, the cfs task entering rt_mutex_setprio() is queued, so dequeue_task_fair()->dequeue_entity(), which subtracts cfs_rq->min_vruntime from se->vruntime, is called on it before it gets the rt prio.
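I.e. (simplified from dequeue_entity(); update_curr() and the accounting are elided):

static void dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
                           int flags)
{
        /* ... update_curr(cfs_rq), accounting, __dequeue_entity() ... */

        /*
         * A task that is merely dequeued, not going to sleep, gets its
         * vruntime normalized here, so the later attach is paired.
         */
        if (!(flags & DEQUEUE_SLEEP))
                se->vruntime -= cfs_rq->min_vruntime;

        /* ... update_min_vruntime(cfs_rq) ... */
}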

Maybe it requires a very specific use of the pthread library to provoke this issue, by making sure that the cfs task really blocks/sleeps?
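One possible shape of such a test (purely a sketch, untested; the thread/function names are made up, and whether the boost actually lands in the TASK_WAKING window is timing-dependent): a SCHED_OTHER thread takes a PTHREAD_PRIO_INHERIT mutex and then sleeps while holding it, while a SCHED_FIFO thread contends on the same mutex so the rt_mutex code boosts the sleeping cfs owner. Needs to run as root so the SCHED_FIFO thread can be created.

#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static pthread_mutex_t pi_lock;

/* low-priority SCHED_OTHER (cfs) thread: sleep while holding the PI lock */
static void *low_fn(void *arg)
{
        pthread_mutex_lock(&pi_lock);
        usleep(1500);                   /* block as cfs while holding the lock */
        pthread_mutex_unlock(&pi_lock);
        return NULL;
}

/* high-priority SCHED_FIFO thread: contend on the lock to trigger PI boosting */
static void *high_fn(void *arg)
{
        usleep(500);                    /* let the low thread grab the lock first */
        pthread_mutex_lock(&pi_lock);   /* rt_mutex boosts the (sleeping) owner */
        pthread_mutex_unlock(&pi_lock);
        return NULL;
}

int main(void)
{
        pthread_mutexattr_t ma;
        pthread_attr_t high_attr;
        struct sched_param sp = { .sched_priority = 3 };
        pthread_t low, high;

        pthread_mutexattr_init(&ma);
        pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&pi_lock, &ma);

        pthread_attr_init(&high_attr);
        pthread_attr_setinheritsched(&high_attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&high_attr, SCHED_FIFO);
        pthread_attr_setschedparam(&high_attr, &sp);

        pthread_create(&low, NULL, low_fn, NULL);
        pthread_create(&high, &high_attr, high_fn, NULL);

        pthread_join(low, NULL);
        pthread_join(high, NULL);
        return 0;
}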