Re: [PATCH 14/17] sched/eevdf: Better handle mixed slice length

From: Peter Zijlstra
Date: Wed Apr 05 2023 - 04:36:31 EST


On Tue, Apr 04, 2023 at 01:50:50PM +0000, Joel Fernandes wrote:
> On Tue, Apr 04, 2023 at 11:29:36AM +0200, Peter Zijlstra wrote:

> > Heh, this is actually the correct behaviour. If you have a u=1 and a
> > u=.5 task, you should distribute time on a 2:1 basis, e.g. 67% vs 33%.
>
> Splitting like that sounds like starvation of the sleeper to me. If something
> sleeps a lot, it will get even less CPU time on average than it would if
> there were no contention from the u=1 task.

No, sleeping, by definition, means you're not contending for CPU. What
CFS does, giving sleepers a little boost, is strictly yuck and messes
with latency -- suddenly you have a task that said it wasn't competing
appear as if it were, even though you didn't run it (how could you, it
wasn't there to run), and yet it still needs to catch up.
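
To make the 2:1 arithmetic concrete, a toy userspace sketch (my own
notation, nothing like the actual kernel code): with pure
weight-proportional scheduling only the runnable tasks compete, each
getting w_i / sum(w_j) of the CPU, and a sleeping task simply drops out
of the sum:

#include <stdio.h>

/* Print each runnable task's share of the CPU: w_i / sum(w_j). */
static void print_shares(const double *w, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++)
		sum += w[i];
	for (int i = 0; i < n; i++)
		printf("task %d: %.1f%%\n", i, 100.0 * w[i] / sum);
}

int main(void)
{
	double w[] = { 1.0, 0.5 };	/* u=1 vs u=.5 -> 66.7% / 33.3% */

	print_shares(w, 2);
	return 0;
}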

The reason it does that is mostly that, at the time, we didn't want to
do the whole lag thing -- it's somewhat heavy on u64 mults, and 32-bit
computing was still a thing :/ So hacks happened.
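
For flavour, roughly the multiply that was the problem (a simplified
sketch with made-up names; the kernel does this sort of thing via
helpers like mul_u64_u32_shr()): scaling an execution delta by a
fixed-point inverse weight wants a wide multiply plus a shift, which is
cheap on 64-bit and a small pile of instructions on 32-bit:

#include <stdint.h>

#define WMULT_SHIFT	32

/*
 * delta_vruntime = delta_exec * inv_weight >> WMULT_SHIFT, with
 * inv_weight a 32-bit fixed-point reciprocal of the task's weight.
 * The wide intermediate is what made this "heavy" on 32-bit.
 * (Assumes a compiler/target with __int128, i.e. 64-bit.)
 */
static inline uint64_t scale_delta(uint64_t delta_exec, uint32_t inv_weight)
{
	return (uint64_t)(((unsigned __int128)delta_exec * inv_weight)
			  >> WMULT_SHIFT);
}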

That said, I'm starting to regret not pushing the EEVDF thing harder
back in 2010 when I first wrote it :/

> And also cgroups will be even weirder than they already are in such a world;
> two different containers will not get CPU time distributed properly -- say,
> if tasks in one container sleep a lot and tasks in another container are CPU
> bound.

Cgroups are an abomination anyway :-) /me runs like hell. But no, I
don't actually expect too much trouble there.

Or rather, as per the above, time distribution is now more proper than
it was :-)
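
As a toy two-level version of the same sum (my simplification; real
group scheduling is rather more involved): a group's share is its weight
among the runnable groups, and a group whose tasks all sleep drops out
of the sum exactly like a sleeping task does:

#include <stdio.h>

struct grp {
	double weight;		/* group weight among its siblings */
	double task_w[4];	/* weights of runnable tasks inside */
	int nr_running;
};

/* Share of CPU for task ti in group gi, over n groups. */
static double task_share(const struct grp *g, int n, int gi, int ti)
{
	double gsum = 0.0, tsum = 0.0;

	for (int i = 0; i < n; i++)
		if (g[i].nr_running)
			gsum += g[i].weight;
	for (int i = 0; i < g[gi].nr_running; i++)
		tsum += g[gi].task_w[i];

	return (g[gi].weight / gsum) * (g[gi].task_w[ti] / tsum);
}

int main(void)
{
	struct grp g[] = {
		{ .weight = 1024, .task_w = { 1024 }, .nr_running = 1 },
		{ .weight = 1024, .task_w = { 1024, 1024 }, .nr_running = 2 },
	};

	/* container 0's task gets 50%, container 1's tasks 25% each */
	printf("%.1f%% %.1f%%\n",
	       100.0 * task_share(g, 2, 0, 0),
	       100.0 * task_share(g, 2, 1, 0));
	return 0;
}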