Re: [PATCH v2] sched/pelt: sync util/runnable_sum with PELT window when propagating
From: Dietmar Eggemann
Date: Wed May 20 2020 - 06:30:08 EST
On 19/05/2020 17:41, Vincent Guittot wrote:
> On Tue, 19 May 2020 at 12:28, Dietmar Eggemann <dietmar.eggemann@xxxxxxx> wrote:
>>
>> On 06/05/2020 17:53, Vincent Guittot wrote:
[...]
>>> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
>>> index b647d04d9c8b..1feff80e7e45 100644
>>> --- a/kernel/sched/pelt.c
>>> +++ b/kernel/sched/pelt.c
>>> @@ -237,6 +237,30 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
>>> return 1;
>>> }
>>>
>>> +/*
>>> + * When syncing *_avg with *_sum, we must take into account the current
>>> + * position in the PELT segment; otherwise the remaining part of the
>>> + * segment will be considered idle time even though it has not yet
>>> + * elapsed, which generates unwanted oscillation in the range [1002..1024[.
>>> + *
>>> + * The max value of *_sum varies with the position in the time segment
>>> + * and is equal to:
>>> + *
>>> + * LOAD_AVG_MAX*y + sa->period_contrib
>>> + *
>>> + * which can be simplified to:
>>> + *
>>> + * LOAD_AVG_MAX - 1024 + sa->period_contrib
>>> + *
>>> + * because LOAD_AVG_MAX*y == LOAD_AVG_MAX-1024
>>
>> Isn't this '~' rather than '==', even with y^32 = 0.5?
>>
>> 47742 * 0.5^(1/32) ~ 47742 - 1024
>
> With integer arithmetic and the runnable_avg_yN_inv[] array, you get
> exactly 1024.
Ah, OK, I forgot about this, and that it relates to commit
625ed2bf049d ("sched/cfs: Make util/load_avg more stable").
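
Right, the equality is exact then. For the record, a quick userspace
sketch (constants copied from kernel/sched/pelt.c; decay_one_period()
is just a local stand-in for the kernel's decay_load() /
mul_u64_u32_shr()) shows it for a single period:

#include <stdio.h>
#include <stdint.h>

/*
 * LOAD_AVG_MAX is the PELT geometric series limit; the constant below
 * is runnable_avg_yN_inv[1], i.e. y = 0.5^(1/32) in 0.32 fixed point.
 */
#define LOAD_AVG_MAX	47742

static const uint32_t y_inv = 0xfa83b2da;

/* Same operation as the kernel's mul_u64_u32_shr(val, y_inv, 32) */
static uint64_t decay_one_period(uint64_t val)
{
	return (val * y_inv) >> 32;
}

int main(void)
{
	/* (47742 * 0xfa83b2da) >> 32 = 46718 = 47742 - 1024 */
	printf("%llu == %d\n",
	       (unsigned long long)decay_one_period(LOAD_AVG_MAX),
	       LOAD_AVG_MAX - 1024);
	return 0;
}

So the divider 'LOAD_AVG_MAX - 1024 + sa->period_contrib' from the
comment is exact, not just an approximation.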