Re: [PATCH 4/4] sched,fair: Fix PELT integrity for new tasks

From: Peter Zijlstra
Date: Fri Jun 17 2016 - 12:18:45 EST


On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
> So yes, ho-humm, how to go about doing that bestest. Lemme have a play.

This is what I came up with. It's not entirely pretty, but I suppose it'll
have to do.

---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -724,6 +724,7 @@ void post_init_entity_util_avg(struct sc
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
 	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -738,7 +739,20 @@ void post_init_entity_util_avg(struct sc
 		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
 	}
 
-	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
+	if (entity_is_task(se)) {
+		struct task_struct *p = task_of(se);
+		if (p->sched_class != &fair_sched_class) {
+			/*
+			 * For !fair tasks do attach_entity_load_avg()
+			 * followed by detach_entity_load_avg() as per
+			 * switched_from_fair().
+			 */
+			se->avg.last_update_time = now;
+			return;
+		}
+	}
+
+	update_cfs_rq_load_avg(now, cfs_rq, false);
 	attach_entity_load_avg(cfs_rq, se);
 }
 
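
For anyone following along, here is a minimal userspace toy model (all
names hypothetical, not actual kernel code) of the pairing invariant the
patch preserves: a cfs_rq's averages must only ever contain entities that
are currently in the fair class. If post-init attached an entity born
!fair, nothing on the RT/DL side would ever detach it, and the cfs_rq
sums would stay inflated. Stamping last_update_time and bailing out means
the later switched_to_fair() attach finds the expected state.

	#include <assert.h>
	#include <stdio.h>

	/* Toy stand-ins for cfs_rq / sched_entity utilization. */
	struct toy_cfs_rq { long util_avg; };
	struct toy_entity { long util_avg; int attached; };

	static void toy_attach(struct toy_cfs_rq *rq, struct toy_entity *se)
	{
		assert(!se->attached);	/* every attach must pair with a detach */
		rq->util_avg += se->util_avg;
		se->attached = 1;
	}

	static void toy_detach(struct toy_cfs_rq *rq, struct toy_entity *se)
	{
		assert(se->attached);
		rq->util_avg -= se->util_avg;
		se->attached = 0;
	}

	int main(void)
	{
		struct toy_cfs_rq rq = { 0 };
		struct toy_entity born_fair = { 100, 0 };
		struct toy_entity born_rt   = { 100, 0 };

		/* Fair fork: post-init attaches the new entity. */
		toy_attach(&rq, &born_fair);

		/*
		 * A task born !fair is only time-stamped, never attached.
		 * Attaching it at fork would leave rq.util_avg inflated
		 * for as long as the task runs as RT, since nothing on
		 * the RT side would ever detach it.
		 */

		/* Later sched_setscheduler() to fair: switched_to attaches. */
		toy_attach(&rq, &born_rt);

		toy_detach(&rq, &born_fair);
		toy_detach(&rq, &born_rt);

		printf("rq.util_avg = %ld\n", rq.util_avg);	/* 0: no leak */
		return 0;
	}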