[PATCH] sched: improve stability of smpnice load balancing
From: Peter Williams
Date: Wed Mar 29 2006 - 17:26:34 EST
Problem:
Due to an injudicious piece of code near the end of find_busiest_group(),
smpnice load balancing is too aggressive, resulting in excessive movement
of tasks from one CPU to another.
Solution:
Remove the offending code. The reasoning that led to its inclusion
became invalid when find_busiest_queue() was modified to use the average
load per task on the relevant run queue, rather than SCHED_LOAD_SCALE,
when deciding whether a small imbalance value is large enough to warrant
moving a task.
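
For reference, here is a minimal stand-alone sketch of the small-imbalance
decision this patch leaves in place. This is not the kernel code itself:
the helper name small_imbalance() and the sample numbers are invented for
illustration only. With the offending fallback removed, a task is moved
only when the estimated throughput after the move (pwr_move) beats the
current throughput (pwr_now); otherwise the domain is treated as balanced.

	#include <stdio.h>

	/*
	 * Return the imbalance to use when the raw imbalance is smaller
	 * than the busiest queue's average load per task.  A zero return
	 * means "balanced, move nothing".
	 */
	static unsigned long small_imbalance(unsigned long pwr_now,
					     unsigned long pwr_move,
					     unsigned long busiest_load_per_task)
	{
		/* Move only if we gain throughput. */
		if (pwr_move <= pwr_now)
			return 0;

		/* Move exactly one "average" task off the busiest queue. */
		return busiest_load_per_task;
	}

	int main(void)
	{
		/* No throughput gain: stay balanced. */
		printf("imbalance = %lu\n", small_imbalance(2048, 2048, 512));
		/* Throughput gain: shift one average task. */
		printf("imbalance = %lu\n", small_imbalance(1536, 2048, 512));
		return 0;
	}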
Signed-off-by: Peter Williams <pwil3058@xxxxxxxxxxxxxx>
Peter
--
Peter Williams pwil3058@xxxxxxxxxxxxxx
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Index: MM-2.6.X/kernel/sched.c
===================================================================
--- MM-2.6.X.orig/kernel/sched.c 2006-03-29 16:18:37.000000000 +1100
+++ MM-2.6.X/kernel/sched.c 2006-03-29 16:20:37.000000000 +1100
@@ -2290,13 +2290,10 @@ find_busiest_group(struct sched_domain *
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move > pwr_now)
-			*imbalance = busiest_load_per_task;
-		/* or if there's a reasonable chance that *imbalance is big
-		 * enough to cause a move
-		 */
-		else if (*imbalance <= busiest_load_per_task / 2)
+		if (pwr_move <= pwr_now)
 			goto out_balanced;
+
+		*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;