Re: [PATCH] Load balancing problem in 2.6.2-mm1

From: Nick Piggin
Date: Fri Feb 06 2004 - 19:13:03 EST

Martin J. Bligh wrote:

>>>If CPU 8 has 2 tasks, and cpu 1 has 1 task, there's an imbalance of 1.
>>>*If* that imbalance persists (and it probably won't, given tasks being
>>>created, destroyed, and blocking for IO), we may want to rotate that to
>>>1 vs 2, and then back to 2 vs 1, etc. in the interests of fairness,
>>>even though it's slower throughput overall.
>>
>>Yes, although as long as it's node local and happens a couple of
>>times a second, you should be pretty hard pressed to notice the
>>difference.
>
>Not sure how true that turns out to be in practice ... it probably depends
>heavily on both the workload (how heavily it's using the cache) and the
>chip (larger caches have proportionately more to lose).
>
>As we go forward in time, cache warmth gets increasingly important, as
>CPU speeds increase faster than memory speeds, and cache sizes also get
>larger. I'd really like us to be conservative here - the unfairness thing
>is really hard to hit anyway - you need a static number of processes that
>don't ever block on IO or anything.
>

Can we keep the current behaviour as the default, and let arches
override it if they want? And if someone one day does testing that
shows it really isn't a good idea, then we can change the default.

I like to try to stick to the fairness-first approach.
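
To make the policy split concrete, here's a minimal sketch of the two
behaviours (illustrative only - not the actual 2.6.2-mm1 balancer code;
the names, and the ARCH_PREFERS_THROUGHPUT override knob, are made up
for the example):

/* hypothetical arch override: generic code supplies the default */
#ifndef ARCH_PREFERS_THROUGHPUT
#define ARCH_PREFERS_THROUGHPUT 0       /* fairness first by default */
#endif

/*
 * With 2 tasks on one CPU and 1 on the other, the queue-length gap
 * is 1.  A fairness-first balancer migrates on a gap of 1, so the
 * 2-vs-1 split keeps rotating; a throughput-first balancer waits
 * for a gap of 2 and leaves the split alone, keeping caches warm.
 */
static int should_migrate(unsigned int busiest_nr_running,
                          unsigned int local_nr_running)
{
        unsigned int gap = busiest_nr_running - local_nr_running;

        if (ARCH_PREFERS_THROUGHPUT)
                return gap >= 2;        /* tolerate 2 vs 1 */

        return gap >= 1;                /* rotate 2 vs 1 -> 1 vs 2 */
}

Either way the decision is a single compare; the whole argument is
about which gap we're willing to leave alone.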

We got quite a few complaints about unfairness when the scheduler
used to keep 2 tasks on one cpu and 1 on another, even in development
kernels. (With 3 always-runnable tasks on 2 CPUs, a fair split gives
each task 2/3 of a CPU; the pinned split gives one task 100% and the
other two 50% each.) I suspect that most wouldn't have known one way
or the other if top had just showed 66% each, but still.
