Re: 2.6.12-rc6-mm1

From: Con Kolivas
Date: Sat Jun 11 2005 - 00:24:18 EST


On Sat, 11 Jun 2005 14:14, Martin J. Bligh wrote:
> --"Martin J. Bligh" <mbligh@xxxxxxxxxx> wrote (on Friday, June 10, 2005 > >
OK, I backed out those 4, and the degredation mostly went away.
> > See
> > http://ftp.kernel.org/pub/linux/kernel/people/mbligh/abat/perf/kernbench.moe.png
> >
> > and more specifically, see the +p5150 near the right hand side.
> > I don't think it's quite as good as mainline, but much closer.
> > I did this run with HZ=1000, and the one with no scheduler
> > patches at all with HZ=250, so I'll try to do a run that's more
> > directly comparable as well.
>
> OK, that makes it look much more like mainline. Looks like you were still
> revising the details of your patch, Con ... once you're ready, drop me a
> URL for it, and I'll make the system whack on that too.

Great, thanks. Here is a roll-up of all the reconsidered changes; it applies
directly to 2.6.12-rc6-mm1 and is -purely for testing purposes-. I'd be very
grateful to see how it performs; it has been boot- and stress-tested at this
end. If it shows a detriment I'll have to make the SMP nice changes more
complex.

Cheers,
Con

Index: linux-2.6.12-rc6-mm1/kernel/sched.c
===================================================================
--- linux-2.6.12-rc6-mm1.orig/kernel/sched.c 2005-06-10 23:56:56.000000000 +1000
+++ linux-2.6.12-rc6-mm1/kernel/sched.c 2005-06-11 11:48:09.000000000 +1000
@@ -978,7 +978,7 @@ static inline unsigned long __source_load
 	else
 		source_load = min(cpu_load, load_now);
 
-	if (idle == NOT_IDLE || rq->nr_running > 1)
+	if (rq->nr_running > 1 || (idle == NOT_IDLE && rq->nr_running))
 		/*
 		 * If we are busy rebalancing the load is biased by
 		 * priority to create 'nice' support across cpus. When
@@ -987,7 +987,7 @@ static inline unsigned long __source_load
 		 * prevent idle rebalance from trying to pull tasks from a
 		 * queue with only one running task.
 		 */
-		source_load *= rq->prio_bias;
+		source_load = source_load * rq->prio_bias / rq->nr_running;
 
 	return source_load;
 }
@@ -1011,8 +1011,8 @@ static inline unsigned long __target_load
 	else
 		target_load = max(cpu_load, load_now);
 
-	if (idle == NOT_IDLE || rq->nr_running > 1)
-	target_load *= rq->prio_bias;
+	if (rq->nr_running > 1 || (idle == NOT_IDLE && rq->nr_running))
+	target_load = target_load * rq->prio_bias / rq->nr_running;
 
 	return target_load;
 }
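
For illustration only, here is a rough standalone sketch of the arithmetic in
the hunks above. The enum, the bias_load() helper, and the sample numbers are
made-up stand-ins, not anything from sched.c; it just models the reworked
condition and the new division by nr_running:

/*
 * Standalone sketch of the biased-load calculation above.
 * NOT kernel code: the enum values, field names and sample
 * numbers are illustrative stand-ins only.
 */
#include <stdio.h>

enum idle_type { NOT_IDLE, NEWLY_IDLE };

static unsigned long bias_load(unsigned long load,
			       unsigned long prio_bias,
			       unsigned long nr_running,
			       enum idle_type idle)
{
	/*
	 * Mirrors the reworked condition: bias when more than one
	 * task is running, or when busy-rebalancing a non-empty
	 * queue, so idle rebalance is not tempted to pull from a
	 * queue with a single running task. Dividing by nr_running
	 * (the new part) treats prio_bias as a per-queue total and
	 * scales the load by average, not summed, task priority.
	 */
	if (nr_running > 1 || (idle == NOT_IDLE && nr_running))
		return load * prio_bias / nr_running;
	return load;
}

int main(void)
{
	/* One high-bias task vs two default-bias tasks. */
	printf("%lu\n", bias_load(1000, 160, 1, NOT_IDLE)); /* 160000 */
	printf("%lu\n", bias_load(1000, 256, 2, NOT_IDLE)); /* 128000 */
	return 0;
}

The point of the division shows in the second call: without it the two-task
queue would report 256000 rather than 128000, i.e. its apparent load would
grow with task count instead of tracking mean task priority.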