Re: [Resend patch v8 01/13] Revert "sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking"
From: Alex Shi
Date: Wed Jun 26 2013 - 01:07:02 EST
On 06/20/2013 10:18 AM, Alex Shi wrote:
> Remove the CONFIG_FAIR_GROUP_SCHED guard that covers the runnable load
> info, so that the runnable load variables can be used.
>
> Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
There are 2 more places that need to be reverted too; I merged them into the updated patch.
BTW, this patchset was tested by Fengguang's 0day kbuild system.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f404468..1a14209 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5858,7 +5858,7 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
se->vruntime -= cfs_rq->min_vruntime;
}
-#if defined(CONFIG_FAIR_GROUP_SCHED) && defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
/*
* Remove our load from contribution when we leave sched_fair
* and ensure we don't carry in an old decay_count if we
@@ -5917,7 +5917,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
#ifndef CONFIG_64BIT
cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
#endif
-#if defined(CONFIG_FAIR_GROUP_SCHED) && defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
atomic64_set(&cfs_rq->decay_counter, 1);
atomic64_set(&cfs_rq->removed_load, 0);
#endif
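The effect of the guard change can be sketched in plain user-space C (a hypothetical struct mirroring the kernel's cfs_rq fields, not actual kernel code): once the load-tracking counters depend only on CONFIG_SMP, init_cfs_rq can initialize them in any SMP build, whether or not FAIR_GROUP_SCHED is enabled.

```c
/* User-space sketch only: the struct and init function are hypothetical
 * stand-ins for the kernel's cfs_rq / init_cfs_rq. The kernel uses
 * atomic64_t and atomic64_set(); C11 atomic_llong plays that role here. */
#include <assert.h>
#include <stdatomic.h>

#define CONFIG_SMP 1	/* assume an SMP build for this sketch */

struct cfs_rq_sketch {
#ifdef CONFIG_SMP	/* was: CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP */
	atomic_llong decay_counter;	/* per-rq blocked-load decay epoch */
	atomic_llong removed_load;	/* load removed by departing tasks */
#endif
};

static void init_cfs_rq_sketch(struct cfs_rq_sketch *cfs_rq)
{
#ifdef CONFIG_SMP
	atomic_init(&cfs_rq->decay_counter, 1);
	atomic_init(&cfs_rq->removed_load, 0);
#endif
}
```

With the old guard, a !FAIR_GROUP_SCHED SMP build would have compiled these fields out, so later patches in the series could not rely on the runnable load averages being maintained.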
--