Re: [PATCH 12/24] sched/fair: Prepare exit/cleanup paths for delayed_dequeue

From: Chen Yu
Date: Tue Aug 27 2024 - 23:07:00 EST


On 2024-08-27 at 17:17:20 +0800, Chen Yu wrote:
> On 2024-07-27 at 12:27:44 +0200, Peter Zijlstra wrote:
> > When dequeue_task() is delayed it becomes possible to exit a task (or
> > cgroup) that is still enqueued. Ensure things are dequeued before
> > freeing.
> >
> > NOTE: switched_from_fair() causes spurious wakeups due to clearing
> > sched_delayed after enqueueing a task in another class that should've
> > been dequeued. This *should* be harmless.
> >
>
> It might bring some unexpected behavior in a corner case reported here:
> https://lore.kernel.org/lkml/202408161619.9ed8b83e-lkp@xxxxxxxxx/
> As the blocked task might return from schedule() with its state still set to TASK_INTERRUPTIBLE.
>
> We cooked a patch to work around it (as below).
>
> thanks,
> Chenyu
>
> From 9251b25073d43aeac04a6ee69b590fbfa1b8e1a5 Mon Sep 17 00:00:00 2001
> From: Chen Yu <yu.c.chen@xxxxxxxxx>
> Date: Mon, 26 Aug 2024 22:16:38 +0800
> Subject: [PATCH] sched/eevdf: Dequeue the delayed task when changing its
> scheduling policy
>
> [Problem Statement]
> The following warning was reported:
>
> do not call blocking ops when !TASK_RUNNING; state=1 set at kthread_worker_fn (kernel/kthread.c:?)
> WARNING: CPU: 1 PID: 674 at kernel/sched/core.c:8469 __might_sleep
>
> handle_bug
> exc_invalid_op
> asm_exc_invalid_op
> __might_sleep
> __might_sleep
> kthread_worker_fn
> kthread_worker_fn
> kthread
> __cfi_kthread_worker_fn
> ret_from_fork
> __cfi_kthread
> ret_from_fork_asm
>
> [Symptom]
> kthread_worker_fn()
>     ...
> repeat:
>     set_current_state(TASK_INTERRUPTIBLE);
>     ...
>     if (work) {                    // false
>         __set_current_state(TASK_RUNNING);
>         ...
>     } else if (!freezing(current)) {
>         schedule();
>         // after schedule(), the state is still *TASK_INTERRUPTIBLE*
>     }
>
>     try_to_freeze()
>         might_sleep()              <--- triggers the warning
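>
> (For context, the check that fires is essentially the following,
> simplified from __might_sleep() in kernel/sched/core.c; the exact
> code differs across kernel versions:)
>
> void __might_sleep(const char *file, int line)
> {
> 	unsigned int state = get_current_state();
>
> 	/*
> 	 * Blocking primitives set (and therefore destroy)
> 	 * current->state; schedule() is expected to return with
> 	 * TASK_RUNNING, so leftover sleep state here is a bug.
> 	 */
> 	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
> 		  "do not call blocking ops when !TASK_RUNNING; ...");
>
> 	__might_resched(file, line, 0);
> }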
>
> [Analysis]
> The question is why, after schedule(), the state remains
> TASK_INTERRUPTIBLE rather than TASK_RUNNING. The short answer is that
> someone has incorrectly picked the TASK_INTERRUPTIBLE task from the
> tree. The scenario is described below, and all steps happen on one CPU:
>
> time
>  |
>  |
>  |
>  v
>
> kthread_worker_fn()                              <--- t1
>     set_current_state(TASK_INTERRUPTIBLE)
>     schedule()
>         block_task(t1)
>             dequeue_entity(t1)
>                 t1->sched_delayed = 1
>
>         t2 = pick_next_task()
>         put_prev_task(t1)
>             enqueue_entity(t1)                   <--- TASK_INTERRUPTIBLE in the tree
>
> t1 switches to t2
>
> erofs_init_percpu_worker()                       <--- t2
>     sched_set_fifo_low(t1)
>         sched_setscheduler_nocheck(t1)
>             __sched_setscheduler(t1)
>                 t1->sched_class = &rt_sched_class
>
>                 check_class_changed(t1)
>                     switched_from_fair(t1)
>                         t1->sched_delayed = 0    <--- gotcha
>
> ** from now on, t1 in the tree is TASK_INTERRUPTIBLE **
> ** and sched_delayed = 0 **
>
>     preempt_enable()
>         preempt_schedule()
>             t1 = pick_next_task()                <--- because sched_delayed = 0, eligible
>
> t2 switches back to t1; now t1 is TASK_INTERRUPTIBLE.
>
> The cause is that switched_from_fair() incorrectly clears the
> sched_delayed flag, which confuses pick_next_task() into treating a
> delayed task as an eligible one (without dequeueing it).
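>
> For reference, the problematic pattern in switched_from_fair() looks
> roughly like the sketch below (simplified, not the exact hunk from
> the series):
>
> static void switched_from_fair(struct rq *rq, struct task_struct *p)
> {
> 	/*
> 	 * Clearing the flag here, after __sched_setscheduler() has
> 	 * already re-enqueued the task into its new class, leaves a
> 	 * TASK_INTERRUPTIBLE task that pick_next_task() treats as
> 	 * an ordinary runnable task.
> 	 */
> 	p->se.sched_delayed = 0;
>
> 	detach_task_cfs_rq(p);
> }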
>

Valentin pointed out that after the requeue, t1 is in the new RT
priolist, so the value of sched_delayed does not matter much. The
problem is that the RT priolist contains a TASK_INTERRUPTIBLE task that
will be picked by the next schedule(). There is a fix from Peter to
dequeue this task in switched_from_fair(), which can fix this problem.
But I think the current proposal can save one extra enqueue/dequeue
operation, no?
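
For completeness, that alternative would put the dequeue in
switched_from_fair() itself; a minimal sketch (not Peter's actual
patch; the sched_delayed bookkeeping on the fair side is omitted and
the exact flags may differ):

static void switched_from_fair(struct rq *rq, struct task_struct *p)
{
	/*
	 * A delayed task may still be on the runqueue here; take it
	 * off properly instead of only clearing the flag, so that a
	 * !TASK_RUNNING task cannot be picked from its new class.
	 */
	if (p->se.sched_delayed)
		dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);

	detach_task_cfs_rq(p);
}

This is the extra dequeue (after __sched_setscheduler() has already
re-enqueued the task) that the [Proposal] below tries to avoid.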

thanks,
Chenyu

> [Proposal]
> In __sched_setscheduler(), when changing the policy of a delayed task,
> do not re-enqueue the delayed task, so that it cannot be picked again.
> The side effect is that the delayed task can no longer wait for its
> 0-vlag time to be dequeued, but this effect should be negligible.
>
> Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> Closes: https://lore.kernel.org/oe-lkp/202408161619.9ed8b83e-lkp@xxxxxxxxx
> Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
> ---
> kernel/sched/syscalls.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
> index 4fae3cf25a3a..10859536e509 100644
> --- a/kernel/sched/syscalls.c
> +++ b/kernel/sched/syscalls.c
> @@ -818,7 +818,8 @@ int __sched_setscheduler(struct task_struct *p,
>  		if (oldprio < p->prio)
>  			queue_flags |= ENQUEUE_HEAD;
>
> -		enqueue_task(rq, p, queue_flags);
> +		if (!p->se.sched_delayed)
> +			enqueue_task(rq, p, queue_flags);
>  	}
>  	if (running)
>  		set_next_task(rq, p);
> --
> 2.25.1
>