Re: [PATCH] sched/fair: Fix util_est UTIL_AVG_UNCHANGED handling
From: Dietmar Eggemann
Date: Thu May 27 2021 - 18:38:23 EST
On 27/05/2021 07:41, Xuewen Yan wrote:
> Hi
>
> On Wed, May 26, 2021 at 10:59 PM Dietmar Eggemann
> <dietmar.eggemann@xxxxxxx> wrote:
>>
>> On 19/05/2021 18:06, Vincent Donnefort wrote:
>>> On Fri, May 14, 2021 at 12:37:48PM +0200, Dietmar Eggemann wrote:
[...]
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index c7e7d50e2fdc..0a0bca694536 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -357,6 +357,16 @@ struct util_est {
>> #define UTIL_EST_WEIGHT_SHIFT 2
>> } __attribute__((__aligned__(sizeof(u64))));
>>
>> +/*
>> + * This flag is used to synchronize util_est with util_avg updates.
>> + * When a task is dequeued, its util_est should not be updated if its util_avg
>> + * has not been updated in the meantime.
>> + * This information is mapped into the MSB bit of util_est.enqueued at dequeue
>> + * time. Since max value of util_est.enqueued for a task is 1024 (PELT util_avg
>> + * for a task) it is safe to use MSB.
>> + */
>> +#define UTIL_AVG_UNCHANGED 0x80000000
>> +
>
> IMHO, maybe it would be better to put it in the util_est structure,
> just like UTIL_EST_WEIGHT_SHIFT?
Yeah, can do.
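
I.e. the placement could then look roughly like this (just a sketch; the
comment block from the hunk above would simply move along with the define):

struct util_est {
	unsigned int			enqueued;
	unsigned int			ewma;
#define UTIL_EST_WEIGHT_SHIFT		2
	/* Flag lives in the MSB of util_est.enqueued, see comment above */
#define UTIL_AVG_UNCHANGED		0x80000000
} __attribute__((__aligned__(sizeof(u64))));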
I just realized that 'kernel/sched/pelt.h' does not include
<linux/sched.h> directly (or indirectly via "sched.h"). But I can easily
move cfs_se_util_change() (which uses UTIL_AVG_UNCHANGED) from pelt.h to
pelt.c, which is its only consumer anyway.
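
For reference, the helper that would move is roughly this (a sketch of the
current cfs_se_util_change() in pelt.h; its only job is to clear the flag
once util_avg has actually been updated):

static inline void cfs_se_util_change(struct sched_avg *avg)
{
	unsigned int enqueued;

	if (!sched_feat(UTIL_EST))
		return;

	/* Nothing to do if the flag has already been cleared */
	enqueued = avg->util_est.enqueued;
	if (!(enqueued & UTIL_AVG_UNCHANGED))
		return;

	/* Clear the flag to signal that util_avg has been updated */
	enqueued &= ~UTIL_AVG_UNCHANGED;
	WRITE_ONCE(avg->util_est.enqueued, enqueued);
}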