[PATCH] sched/eevdf: Dequeue the delayed task when changing its schedule policy

From: Chen Yu
Date: Mon Aug 26 2024 - 10:16:38 EST


[Problem Statement]
The following warning was reported:

do not call blocking ops when !TASK_RUNNING; state=1 set at kthread_worker_fn (kernel/kthread.c:?)
WARNING: CPU: 1 PID: 674 at kernel/sched/core.c:8469 __might_sleep

handle_bug
exc_invalid_op
asm_exc_invalid_op
__might_sleep
__might_sleep
kthread_worker_fn
kthread_worker_fn
kthread
__cfi_kthread_worker_fn
ret_from_fork
__cfi_kthread
ret_from_fork_asm

[Symptom]
kthread_worker_fn()
    ...
repeat:
    set_current_state(TASK_INTERRUPTIBLE);
    ...
    if (work) {                            // false
        __set_current_state(TASK_RUNNING);
        ...
    } else if (!freezing(current)) {
        schedule();
        // after schedule(), the state is still *TASK_INTERRUPTIBLE*
    }

    try_to_freeze()
        might_sleep()  <--- triggers the warning
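The invariant being violated can be sketched with a tiny user-space model (hypothetical names, not kernel code): might_sleep() warns when the caller's task state is not TASK_RUNNING, and schedule() is normally only re-entered after a wakeup has restored TASK_RUNNING, while the buggy path returns with the sleeping state intact.

```c
#include <assert.h>

/* Toy user-space model -- hypothetical names, not kernel code. */
#define TASK_RUNNING        0
#define TASK_INTERRUPTIBLE  1

/* might_sleep() complains when called while not TASK_RUNNING. */
static int might_sleep_warns(int task_state)
{
    return task_state != TASK_RUNNING;
}

/* Normal path: schedule() only returns after a wakeup has set the
 * task back to TASK_RUNNING. */
static int schedule_normal(int task_state)
{
    (void)task_state;
    return TASK_RUNNING;
}

/* Buggy path: the task was picked again while still sleeping, so
 * schedule() returns with the old state unchanged. */
static int schedule_buggy(int task_state)
{
    return task_state;
}
```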

[Analysis]
The question is why, after schedule(), the state remains TASK_INTERRUPTIBLE
rather than TASK_RUNNING. The short answer is that the scheduler incorrectly
picked the TASK_INTERRUPTIBLE task from the tree. The scenario is described
below, and all steps happen on one CPU:

time
|
|
|
v

kthread_worker_fn()                     <--- t1
    set_current_state(TASK_INTERRUPTIBLE)
    schedule()
        block_task(t1)
            dequeue_entity(t1)
            t1->sched_delayed = 1

        t2 = pick_next_task()
        put_prev_task(t1)
            enqueue_entity(t1)          <--- TASK_INTERRUPTIBLE in the tree

t1 switches to t2

erofs_init_percpu_worker()              <--- t2
    sched_set_fifo_low(t1)
        sched_setscheduler_nocheck(t1)
            __sched_setscheduler(t1)
                t1->sched_class = &rt_sched_class

                check_class_changed(t1)
                    switched_from_fair(t1)
                        t1->sched_delayed = 0  <--- gotcha

** from now on, t1 in the tree is TASK_INTERRUPTIBLE **
** and sched_delayed = 0 **

    preempt_enable()
        preempt_schedule()
            t1 = pick_next_task()       <--- because sched_delayed = 0, eligible

t2 switches back to t1; now t1 is TASK_INTERRUPTIBLE.

The root cause is that switched_from_fair() incorrectly clears the
sched_delayed flag without dequeuing the task, which misleads
pick_next_task() into treating a delayed task as an eligible one.
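The broken interaction can be modeled in a small user-space sketch (all names hypothetical; this is not the kernel implementation): a delayed task remains in the tree but must never be picked, and clearing its delayed flag without dequeuing it suddenly makes a sleeping task pickable.

```c
#include <assert.h>
#include <stddef.h>

/* Toy user-space model -- hypothetical names, not kernel code. */
#define TASK_RUNNING        0
#define TASK_INTERRUPTIBLE  1

struct toy_task {
    int state;
    int on_rq;          /* still enqueued in the tree */
    int sched_delayed;  /* waiting for its lag to decay before dequeue */
};

/* A delayed task sits in the tree but is not eligible to be picked. */
static struct toy_task *pick_next(struct toy_task *t)
{
    if (t->on_rq && !t->sched_delayed)
        return t;
    return NULL;
}

/* Models the switched_from_fair() bug: the delayed flag is cleared
 * while the task stays on the run queue. */
static void switch_class_buggy(struct toy_task *t)
{
    t->sched_delayed = 0;   /* <--- gotcha: t is still on_rq */
}
```

After switch_class_buggy(), pick_next() happily returns a task whose state is still TASK_INTERRUPTIBLE, which is exactly what the call trace above shows.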

[Proposal]
In __sched_setscheduler(), when changing the policy of a delayed task,
do not re-enqueue it, so that it cannot be picked again. The side effect
is that the delayed task is dequeued before its vlag reaches 0, but that
impact should be negligible.
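The effect of the one-line guard can be sketched in the same toy model (hypothetical names, not the real __sched_setscheduler()): the policy change dequeues a queued task and, with the patch, only re-enqueues it when it is not delayed, so a delayed task ends up off the queue and can never be picked.

```c
#include <assert.h>

/* Toy user-space model of the proposed fix -- hypothetical names,
 * not kernel code. */
struct toy_task {
    int on_rq;          /* enqueued in the tree */
    int sched_delayed;  /* delayed-dequeue flag */
};

/* Models the dequeue/enqueue pair in __sched_setscheduler() with the
 * patch applied: skip the re-enqueue for a delayed task. */
static void sched_setscheduler_toy(struct toy_task *t)
{
    if (t->on_rq) {
        t->on_rq = 0;            /* dequeue_task() */
        if (!t->sched_delayed)   /* the patch's new check */
            t->on_rq = 1;        /* enqueue_task() */
    }
}
```

A delayed task leaves the function with on_rq == 0, so even when its sched_delayed flag is later cleared by the class switch, there is nothing left in the tree to pick.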

Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-lkp/202408161619.9ed8b83e-lkp@xxxxxxxxx
Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
---
kernel/sched/syscalls.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index 4fae3cf25a3a..10859536e509 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -818,7 +818,8 @@ int __sched_setscheduler(struct task_struct *p,
if (oldprio < p->prio)
queue_flags |= ENQUEUE_HEAD;

- enqueue_task(rq, p, queue_flags);
+ if (!p->se.sched_delayed)
+ enqueue_task(rq, p, queue_flags);
}
if (running)
set_next_task(rq, p);
--
2.25.1