Re: [PATCH 0/5] sched: Lazy preemption muck
From: Thomas Gleixner
Date: Thu Oct 10 2024 - 04:31:18 EST
On Thu, Oct 10 2024 at 11:12, Tianchen Ding wrote:
> On 2024/10/10 04:43, Steven Rostedt wrote:
>> Perhaps these cond_resched() calls are proper? That is, the need_resched() /
>> cond_resched() pattern is not something done only for PREEMPT_NONE, but
>> for preempt/voluntary kernels too. Maybe these cond_resched() calls should
>> stay? If we spin in the loop for one more tick, that actually changes the
>> behavior of PREEMPT_NONE and PREEMPT_VOLUNTARY, as the need_resched()/cond_resched()
>> pair helps with latency. If we just wait for the next tick, these loops (and
>> there are a lot of them) will all now run for one tick longer than they do
>> when PREEMPT_NONE or PREEMPT_VOLUNTARY is set today.
>>
>
> Agree.
>
> And for PREEMPT_LAZIEST, this becomes worse. The fair_class tasks will be
> delayed for more than one tick. They may be starved until a non-fair-class
> task comes to "save" them.
Everybody agreed already that PREEMPT_LAZIEST is silly and not going to
happen. Nothing to see here.
> cond_resched() is designed for NONE/VOLUNTARY to avoid spinning in the kernel
> and to prevent softlockups. However, it is a nop in PREEMPT_LAZIEST, and
> things may be broken...
cond_resched() is not designed. It's an ill-defined bandaid and the
purpose of LAZY is to remove it completely along with the preemption
models which depend on it.
Thanks,
tglx