Re: [PATCH 0/5] sched: Lazy preemption muck

From: Tianchen Ding
Date: Wed Oct 09 2024 - 23:12:27 EST


On 2024/10/10 04:43, Steven Rostedt wrote:

> [...]
>
> Hmm, but then again...
>
> Perhaps these cond_resched() calls are proper? That is, the
> need_resched()/cond_resched() pattern is not something done only for
> PREEMPT_NONE, but for preempt/voluntary kernels too. Maybe these
> cond_resched() calls should stay? If we spin in the loop for one
> more tick, that actually changes the behavior of PREEMPT_NONE and
> PREEMPT_VOLUNTARY, as the need_resched()/cond_resched() pair helps
> with latency. If we just wait for the next tick, these loops (and
> there are a lot of them) will all now run for one tick longer than
> they do today with PREEMPT_NONE or PREEMPT_VOLUNTARY set.


Agreed.
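
For reference, the pattern in question looks roughly like this (a
minimal sketch of the common in-kernel idiom; process_one() and the
item type are made up for illustration):

static void process_many_items(struct item *items, unsigned long nr)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		process_one(&items[i]);	/* potentially long-running work */

		/*
		 * On PREEMPT_NONE/PREEMPT_VOLUNTARY this is the only
		 * point where the loop can yield the CPU. If it
		 * becomes a nop, the loop spins until something else
		 * forces a reschedule.
		 */
		cond_resched();
	}
}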

And with PREEMPT_LAZIEST this becomes worse: fair-class tasks can be
delayed for more than one tick. They may be starved until a
non-fair-class task comes along to "save" them.

cond_resched() is designed for PREEMPT_NONE/PREEMPT_VOLUNTARY to avoid
spinning in the kernel and to prevent soft lockups. However, it is a
nop under PREEMPT_LAZIEST, so things may break...
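
To illustrate, the effective build-time behavior is roughly the
following (a simplified sketch, assuming the lazy models build on
CONFIG_PREEMPTION as the nop behavior implies; the real definitions
in include/linux/sched.h go through PREEMPT_DYNAMIC static calls):

#ifdef CONFIG_PREEMPTION	/* full preempt; lazy variants land here too */
/* Preemption points are implicit, so this compiles away. */
# define cond_resched()		(0)
#else				/* PREEMPT_NONE / PREEMPT_VOLUNTARY */
/* May call schedule() if TIF_NEED_RESCHED is set. */
# define cond_resched()		_cond_resched()
#endif

So under PREEMPT_LAZIEST the nop branch applies, and loops like the
one above lose their only yield point.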