[PATCH v3 2/5] sched: make task_move_group_fair adjust cfs_rq's load in case of queued

From: Byungchul Park
Date: Wed Aug 19 2015 - 02:48:53 EST


From: Byungchul Park <byungchul.park@xxxxxxx>

se's average load should be added to the new cfs_rq not only in the
!queued case but also in the queued case.

Of course, the older code managed a cfs_rq's blocked load separately,
and that blocked load was meaningful only while the se was !queued.
Now that the load tracking code has changed, this is no longer true,
so the code adjusting the cfs_rq's average load must be changed
accordingly.

Signed-off-by: Byungchul Park <byungchul.park@xxxxxxx>
---
kernel/sched/fair.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7475a40..191d9be 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8044,15 +8044,13 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 	se->vruntime -= cfs_rq_of(se)->min_vruntime;
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
-	if (!queued) {
-		cfs_rq = cfs_rq_of(se);
+	cfs_rq = cfs_rq_of(se);
+	if (!queued)
 		se->vruntime += cfs_rq->min_vruntime;
-
 #ifdef CONFIG_SMP
-		/* Virtually synchronize task with its new cfs_rq */
-		attach_entity_load_avg(cfs_rq, se);
+	/* Virtually synchronize task with its new cfs_rq */
+	attach_entity_load_avg(cfs_rq, se);
 #endif
-	}
 }
 
 void free_fair_sched_group(struct task_group *tg)
--
1.7.9.5
