[PATCH] sched: update blocked load of idle cpus

From: Vincent Guittot
Date: Wed Jun 24 2015 - 03:11:50 EST


The load and utilization of idle cpus must be updated periodically in order
to decay their blocked part.

If CONFIG_FAIR_GROUP_SCHED is not set, the load and util of idle cpus are
not decayed and stay at the values they had before the cpus became idle.

Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
---
Hi Yuyang,

While testing your patchset without CONFIG_FAIR_GROUP_SCHED, I noticed
that the load of idle cpus sometimes stays at a high value even though they
have not been used for a while, because we are not decaying the blocked load.
Furthermore, the periodic load balance was not pulling tasks onto some idle
cpus because their load stayed high.

This patch fixes the issue.
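
For reference, a rough sketch of the decay this relies on (illustrative
only, not the kernel implementation): under PELT, a blocked contribution is
multiplied by y^n after n periods of ~1ms, with y^32 = 1/2, so it halves
roughly every 32ms. Something like:

    /*
     * Illustrative sketch, not kernel code: approximate PELT-style decay.
     * The real code uses a precomputed table of y^n for n in [0, 31].
     */
    static unsigned long decay_load_sketch(unsigned long val,
                                           unsigned int periods)
    {
            /* Each full 32-period step halves the contribution (y^32 = 1/2). */
            val >>= periods / 32;

            /* Crude linear stand-in for the y^n table on the remainder:
             * y^n for n in [0, 31] falls from 1.0 to ~0.51, close to 1 - n/64. */
            val -= (val * (periods % 32)) >> 6;

            return val;
    }

Without a periodic call to update_blocked_averages(), this decay is never
applied on an idle cpu, so a stale contribution can persist indefinitely.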

Regards,
Vincent

kernel/sched/fair.c | 11 +++++++++++
1 file changed, 11 insertions(+)
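
(For context: from my reading of fair.c in this era, update_blocked_averages()
is invoked from the periodic/softirq load balance path, approximately:

    /* Abridged and paraphrased from kernel/sched/fair.c; details may differ. */
    static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
    {
            int cpu = rq->cpu;
            ...
            update_blocked_averages(cpu);
            ...
    }

so with an empty !CONFIG_FAIR_GROUP_SCHED stub, the rq's cfs_rq averages are
never decayed from that path.)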

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c5f18d9..665cc4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5864,6 +5864,17 @@ static unsigned long task_h_load(struct task_struct *p)
#else
static inline void update_blocked_averages(int cpu)
{
+ struct rq *rq = cpu_rq(cpu);
+ struct cfs_rq *cfs_rq = &rq->cfs;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rq->lock, flags);
+ update_rq_clock(rq);
+
+ update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+
}

static unsigned long task_h_load(struct task_struct *p)
--
1.9.1
