Re: Interesting scheduling times - NOT

Larry McVoy (lm@bitmover.com)
Fri, 25 Sep 1998 10:00:33 -0600


Jamie Lokier <lkd@tantalophile.demon.co.uk>:
: A project I work with has all but dismissed the kernel context switching
: and interrupt latency & overhead as far too slow

That's interesting. It's especially interesting because Linux' context
switch times are the best of any Unix variant and on par with the various
embedded and real time operating systems.

I can believe that interrupt latency is a problem. I can also believe
that context switching is a problem, but you ought to think about the
parameters there a bit before you conclude that Richard's proposed changes
are going to help you. One thing that got lost in the noise of this
discussion is that Richard is measuring a pretty unrealistic point - it
is what I call "zero sized processes", i.e., the only code/data being
referenced is the code necessary to force a context switch. That's the
lightest weight context switch that you can get. It is not a very useful
number, because in real life processes don't run benchmarks, they do work,
and that work will dramatically increase your context switch time.

lmbench tries to take this into account and runs 2..N processes with
0..64K working sets. When you plot the results, you see that the numbers
quickly climb from ~2 usecs to 10s or even 100s of usecs.

: If it's really kernel problems, we need to know!

So it's great that you can move things around in the task struct and get
one less cache miss per context switch. However, as both David & I have
said, it doesn't make any difference to any application, it's completely
lost in the noise. That doesn't mean you shouldn't do it, but neither
does it mean you should. All it means is that nobody has shown an
application which can see any difference.

: Larry, just have more faith in Linus. If Richard's code is crap, it'll
: be rejected. If it makes the scheduler simpler by grouping the RT
: special cases together, and fixes some bugs, and Richard's happy with
: it, and Linus is happy with it, where's the harm? Even if Richard's
: variances do turn out to be an artefact.

The harm is in making changes at all without being able to justify them.
The justification has to be an application, not a benchmark. Benchmarks
for benchmarks' sake are self-serving.

So, I'd say it like this: "If anyone can show that the changes actually
make a positive difference for a real application, then by all means
get 'em to Linus and have him put them in". On the other hand, if the
changes show no positive difference to any application, why the heck
are we wasting our time with this change?

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/