Re: [PATCH 14/17] sched/eevdf: Better handle mixed slice length
From: Joel Fernandes
Date: Sun Apr 16 2023 - 01:20:26 EST
> On Apr 14, 2023, at 1:18 PM, Phil Auld <pauld@xxxxxxxxxx> wrote:
>
> On Wed, Apr 05, 2023 at 04:05:55PM -0400 Joel Fernandes wrote:
>>> On Wed, Apr 5, 2023 at 4:36 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>>
>>> On Tue, Apr 04, 2023 at 01:50:50PM +0000, Joel Fernandes wrote:
>>>>> On Tue, Apr 04, 2023 at 11:29:36AM +0200, Peter Zijlstra wrote:
>>>
>>>>> Heh, this is actually the correct behaviour. If you have a u=1 and a
>>>>> u=.5 task, you should distribute time on a 2:1 basis, e.g. 67% vs 33%.
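
To make Peter's arithmetic concrete, here is a toy sketch of that
proportional split (illustration only; the u values are from his
example above, nothing here is kernel code):

  #include <stdio.h>

  int main(void)
  {
      /* Demanded utilizations on one CPU: u=1.0 and u=0.5. */
      double u[] = { 1.0, 0.5 };
      double total = u[0] + u[1];  /* 1.5 demanded, 1 CPU available */

      /* Proportional distribution: share_i = u_i / sum(u). */
      for (int i = 0; i < 2; i++)
          printf("task %d: %.0f%% of the CPU\n",
                 i, 100.0 * u[i] / total);
      return 0;
  }

This prints 67% and 33%: both demands are scaled down by the same
factor when the CPU is oversubscribed.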
>>>>
>>>> Splitting like that sounds like starvation of the sleeper to me. If something
>>>> sleeps a lot, it will get even less CPU time on average than it would if
>>>> there were no contention from the u=1 task.
>>>
>>> No, sleeping, by definition, means you're not contending for CPU. What
>>> CFS does, giving them a little boost, is strictly yuck and messes with
>>> latency -- because suddenly you have a task that said it wasn't
>>> competing appear as if it were, but you didn't run it (how could you, it
>>> wasn't there to run) -- and it still needs to catch up.
>>>
>>> The reason it does that is mostly because at the time we didn't want to
>>> do the whole lag thing -- it's somewhat heavy on the u64 mults and 32-bit
>>> computing was still a thing :/ So hacks happened.
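
Writing down my rough understanding of the lag idea, as a sketch with
made-up names (not the actual patch code): lag measures how far a
task's received service is behind its weight-proportional share, via
the weighted average virtual time of the queue; the averaging is where
the wide multiplies come from:

  #include <stdint.h>
  #include <stdio.h>

  struct entity {
      uint64_t weight;    /* load weight */
      uint64_t vruntime;  /* per-entity virtual time */
  };

  /* Weighted average virtual time V = sum(w_i * v_i) / sum(w_i);
   * these are the u64 mults that hurt on 32-bit. */
  static uint64_t avg_vruntime(const struct entity *e, int n)
  {
      __uint128_t sum = 0;
      uint64_t wsum = 0;

      for (int i = 0; i < n; i++) {
          sum += (__uint128_t)e[i].weight * e[i].vruntime;
          wsum += e[i].weight;
      }
      return (uint64_t)(sum / wsum);
  }

  /* lag_i = w_i * (V - v_i): positive means the entity is owed
   * service, negative means it ran ahead of its fair share. */
  static int64_t lag(const struct entity *e, uint64_t V)
  {
      return (int64_t)(V - e->vruntime) * (int64_t)e->weight;
  }

  int main(void)
  {
      struct entity e[2] = { { 2, 100 }, { 1, 160 } };
      uint64_t V = avg_vruntime(e, 2);  /* (2*100 + 1*160)/3 = 120 */

      printf("V=%llu lag0=%lld lag1=%lld\n",
             (unsigned long long)V,
             (long long)lag(&e[0], V),   /* +40: owed service */
             (long long)lag(&e[1], V));  /* -40: ran ahead */
      return 0;
  }

If that is right, a task that sleeps simply stops accruing anything,
and when it wakes with ~zero lag it competes at exactly its fair
share, with no retroactive credit.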
>>
>> Also, with CFS you have the whole boost for tasks that sleep a lot,
>> right? Like a task handling user input sleeps a lot, but when it wakes
>> up, it gets higher dynamic priority since its vruntime did not advance.
>> I guess EEVDF also gets you the same thing, but still messes with the
>> CPU usage?
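
To be concrete about the CFS behavior I mean: roughly the wakeup path
of place_entity(), in a simplified paraphrase (not the exact kernel
code):

  #include <stdint.h>
  #include <stdio.h>

  /* A waking task is placed a bounded amount behind the queue's
   * min_vruntime, so it looks slightly "owed" CPU and tends to win
   * the next pick, but cannot hoard unbounded credit. */
  static uint64_t place_on_wakeup(uint64_t min_vruntime,
                                  uint64_t task_vruntime,
                                  uint64_t sched_latency)
  {
      uint64_t thresh = sched_latency / 2;  /* GENTLE_FAIR_SLEEPERS */
      uint64_t target = min_vruntime - thresh;

      /* Never move a task backwards in virtual time. */
      return task_vruntime > target ? task_vruntime : target;
  }

  int main(void)
  {
      /* Long sleeper: its credit is capped at thresh behind the pack. */
      printf("%llu\n", (unsigned long long)
             place_on_wakeup(1000, 100, 48));   /* -> 976 */
      /* Task already ahead: keeps its own vruntime, no boost. */
      printf("%llu\n", (unsigned long long)
             place_on_wakeup(1000, 2000, 48));  /* -> 2000 */
      return 0;
  }

That bounded credit is what gives the input-handling task its latency
win under CFS.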
>>
>>> That said, I'm starting to regret not pushing the EEVDF thing harder
>>> back in 2010 when I first wrote it :/
>>>
>>>> And also, cgroups will be even weirder than they already are in such a
>>>> world: 2 different containers will not get CPU time distributed properly,
>>>> say if tasks in one container sleep a lot and tasks in another container
>>>> are CPU-bound.
>>>
>>> Cgroups are an abomination anyway :-) /me runs like hell. But no, I
>>> don't actually expect too much trouble there.
>>
>> So, with 2 equally weighted containers, if one has a task that sleeps
>> 50% of the time, and the other has a 100% task, then the sleeper will
>> only run 33% of the time? I can see people running containers having a
>> problem with that (a customer running one container gets less CPU than
>> the other). Sorry if I missed something.
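
Spelling out where my 33% comes from (back-of-envelope: one CPU, equal
weights, no sleeper boost): the sleeper computes for r, then sleeps
for r. While runnable it splits the CPU 50/50 with the hog, so it
needs 2r of wall time to collect r of CPU, and the hog then runs alone
for the r the sleeper spends sleeping:

  #include <stdio.h>

  int main(void)
  {
      double r = 1.0;            /* sleeper's burst length */
      double cycle = 2 * r + r;  /* runnable phase + sleep phase */

      printf("sleeper: %.0f%%, hog: %.0f%%\n",
             100 * (r / cycle),          /* 1/3 */
             100 * ((r + r) / cycle));   /* 2/3 */
      return 0;
  }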
>>
>
> But the 50% sleeper is _asking_ for less CPU. Doing 50% for each would
> mean that when the sleeper task was awake it always ran, always won, to
> the exclusion of anyone else. (Assuming 1 CPU...)
>
It sounds like you are saying that if the task busy-looped instead of
sleeping, it would get more CPU during the periods when it is doing real
work. That sounds like it encourages abuse: burn CPU instead of sleeping
to get more performance.
But again, I have not looked too closely at EEVDF or Peter's patches. I
was just going by Vincent's test and was cautioning not to break users
who depend on CFS shares.
Cheers,
- Joel
> Cheers,
> Phil
>
>> But yeah, I do find the whole EEVDF idea interesting, but I admit I
>> have to research it more.
>>
>> - Joel
>>
>