Re: [PATCH][mmotm] Sched fix stale value in average load per task

From: Balbir Singh
Date: Wed Nov 12 2008 - 06:39:56 EST


* Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> [2008-11-12 16:19:00]:

> +	else
> +		rq->avg_load_per_task = 0;
>
>  	return rq->avg_load_per_task;

On second thoughts, does this look better? (Based on Mike Galbraith's
suggestion)


cpu_avg_load_per_task() returns a stale value when nr_running is 0: the cached
rq->avg_load_per_task still holds the average calculated the last time
nr_running was non-zero. This patch removes the cached field and computes the
value directly, returning 0 when nr_running is 0. (A standalone sketch of the
failure mode follows the patch.)

Signed-off-by: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
---

kernel/sched.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff -puN kernel/sched.c~sched-fix-stale-load-avg kernel/sched.c
--- linux-2.6.28-rc4/kernel/sched.c~sched-fix-stale-load-avg	2008-11-12 15:55:38.000000000 +0530
+++ linux-2.6.28-rc4-balbir/kernel/sched.c	2008-11-12 16:40:26.000000000 +0530
@@ -571,8 +571,6 @@ struct rq {
 	int cpu;
 	int online;
 
-	unsigned long avg_load_per_task;
-
 	struct task_struct *migration_thread;
 	struct list_head migration_queue;
 #endif
@@ -1428,10 +1426,10 @@ static unsigned long cpu_avg_load_per_ta
 {
 	struct rq *rq = cpu_rq(cpu);
 
-	if (rq->nr_running)
-		rq->avg_load_per_task = rq->load.weight / rq->nr_running;
-
-	return rq->avg_load_per_task;
+	if (unlikely(!rq->nr_running))
+		return 0;
+	else
+		return rq->load.weight / rq->nr_running;
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
_
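For illustration only, here is a minimal user-space sketch of the failure mode
the patch fixes. struct fake_rq, old_avg() and new_avg() are made-up stand-ins
for struct rq and the two versions of cpu_avg_load_per_task(); this is not
kernel code.

#include <stdio.h>

/* Hypothetical stand-in for struct rq: only the fields the bug involves. */
struct fake_rq {
	unsigned long load_weight;       /* rq->load.weight */
	unsigned long nr_running;        /* tasks currently on the queue */
	unsigned long avg_load_per_task; /* cached average (the stale field) */
};

/* Old logic: refreshes the cache only when nr_running != 0. */
static unsigned long old_avg(struct fake_rq *rq)
{
	if (rq->nr_running)
		rq->avg_load_per_task = rq->load_weight / rq->nr_running;
	return rq->avg_load_per_task; /* stale once nr_running drops to 0 */
}

/* New logic: no cache; an empty queue reports 0. */
static unsigned long new_avg(struct fake_rq *rq)
{
	if (!rq->nr_running)
		return 0;
	return rq->load_weight / rq->nr_running;
}

int main(void)
{
	struct fake_rq rq = { .load_weight = 2048, .nr_running = 2 };

	printf("busy:  old=%lu new=%lu\n", old_avg(&rq), new_avg(&rq)); /* 1024 1024 */

	/* Queue drains: load and nr_running both fall to 0. */
	rq.load_weight = 0;
	rq.nr_running = 0;

	printf("empty: old=%lu new=%lu\n", old_avg(&rq), new_avg(&rq)); /* 1024 0 */
	return 0;
}

The cache was only ever refreshed on a busy queue, so an idle CPU kept
reporting its last busy average; dropping the cache makes the function
stateless and the empty-queue case explicit.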



--
Balbir