Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT

From: Dietmar Eggemann
Date: Thu Nov 01 2018 - 05:38:30 EST


On 10/31/18 10:18 AM, Vincent Guittot wrote:
> Hi Dietmar,
>
> On Wed, 31 Oct 2018 at 08:20, Dietmar Eggemann <dietmar.eggemann@xxxxxxx> wrote:
>>
>> On 10/26/18 6:11 PM, Vincent Guittot wrote:
>>
>> [...]

>>>  static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
>>>  static unsigned long task_h_load(struct task_struct *p);
>>> @@ -764,7 +763,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
>>>  			 * such that the next switched_to_fair() has the
>>>  			 * expected state.
>>>  			 */
>>> -			se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
>>> +			se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
>>>  			return;
>>>  		}
>>>  	}
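
(Side note: if I read 2/2 correctly, the invariance scaling moves out
of the PELT contributions and into the clock itself, i.e. roughly,
paraphrasing the patch:

	/* update_rq_clock_pelt(), sketch: scale elapsed time, not the contributions */
	delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
	delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
	rq->clock_pelt += delta;

so cfs_rq_clock_pelt() hands back this scaled (and throttle-adjusted)
time in place of cfs_rq_clock_task().)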

>> There is this 1/cpu scaling of se->avg.util_sum (running_sum) in
>> update_tg_cfs_runnable() so it can be used to calculate
>> se->avg.runnable_load_sum (runnable_sum). I guess with your approach
>> this should be removed.

> Yes, good catch.
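
Something like the below then (sketch, untested; I haven't checked
whether the rescale is needed at all anymore)? With the time-based
invariance, util_sum should be in
[0 : LOAD_AVG_MAX << SCHED_CAPACITY_SHIFT] rather than scaled by CPU
capacity, so the 1/capacity division would become a plain shift:

	/*
	 * runnable_sum can't be lower than running_sum. util_sum is no
	 * longer scaled by arch_scale_cpu_capacity(), only weighted by
	 * 1 << SCHED_CAPACITY_SHIFT, so rescale with a shift:
	 */
	running_sum = se->avg.util_sum >> SCHED_CAPACITY_SHIFT;
	runnable_sum = max(runnable_sum, running_sum);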

Another thing: since you do not need the cpu parameter in
accumulate_sum() anymore, you could also get rid of it in
___update_load_sum() and further up in __update_load_avg_blocked_se(),
__update_load_avg_cfs_rq() and __update_load_avg_se().
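
I.e. something like this (sketch, untested):

 static __always_inline int
-___update_load_sum(u64 now, int cpu, struct sched_avg *sa,
+___update_load_sum(u64 now, struct sched_avg *sa,
 		   unsigned long load, unsigned long runnable, int running)

-int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
+int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)

plus the corresponding changes to __update_load_avg_se(),
__update_load_avg_cfs_rq() and their call sites.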

Nitpick: the function header of update_cfs_rq_load_avg() mentions
'@now: current time, as per cfs_rq_clock_task()'; it should mention
cfs_rq_clock_pelt() instead.
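
I.e.:

 /**
  * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
- * @now: current time, as per cfs_rq_clock_task()
+ * @now: current time, as per cfs_rq_clock_pelt()
  * @cfs_rq: cfs_rq to update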

[...]