Re: [ck] Re: Linus 2.6.23-rc1

From: Linus Torvalds
Date: Sat Jul 28 2007 - 14:05:55 EST

On Sat, 28 Jul 2007, Jan Engelhardt wrote:
>
> You cannot please everybody in the scheduler question, that is clear,
> then why not offer dedicated scheduling alternatives (plugsched comes to mind)
> and let them choose what pleases them most, and handles their workload best?

This is one approach, but it's actually one that I personally think is
often the worst possible choice.

Why? Because it ends up meaning that you never get the cross-pollination
from different approaches (they stay separate "modes"), and it's also
usually really bad for users in that it forces the user to make some
particular choice that the user is usually not even aware of.

So I personally think that it's much better to find a setup that works
"well enough" for people, without having modal behaviour. People complain
and gripe now, but what people seem to be missing is that it's a journey,
not an end-of-the-line destination. We haven't had a single release kernel
with the new scheduler yet, so the only people who have tried it are
either

(a) interested in schedulers in the first place (which I think is *not* a
good subset, because they have very specific expectations of what is
right and what is wrong, and they come into the whole thing with that
mental baggage)

(b) people who test -rc1 kernels (I love you guys, but sadly, you're not
nearly as common as I'd like ;)

so the fact is, we'll find out more information about where CFS falls
down, and where it does well, and we'll be able to *fix* it and tweak it.

In contrast, if you go for a modal approach, you tend to always fixate
those two modes forever, and you'll never get something that works well:
people have to switch modes when they switch workloads.
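
(If "modal" sounds abstract, here's the shape of it as a toy userspace
sketch - this is NOT the plugsched patch and NOT the kernel's actual
scheduler interface, all the names below are made up for illustration:
a policy ops table that gets picked once, up front, and then every
workload afterwards is stuck with that pick.)

/*
 * Toy sketch only: a "modal" design boils down to choosing one ops
 * table before you even know what the machine will be doing.
 */
#include <stdio.h>
#include <string.h>

struct toy_sched_ops {
    const char *name;
    /* return the index of the task to run next */
    int (*pick_next)(const int *runtime, int n);
};

/* "desktop" mode: always run the task that has had the least CPU so far */
static int pick_interactive(const int *runtime, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (runtime[i] < runtime[best])
            best = i;
    return best;
}

/* "server" mode: plain round-robin, ignore fairness entirely */
static int pick_throughput(const int *runtime, int n)
{
    static int last = -1;
    (void)runtime;
    last = (last + 1) % n;
    return last;
}

static const struct toy_sched_ops modes[] = {
    { "interactive", pick_interactive },
    { "throughput",  pick_throughput  },
};

int main(int argc, char **argv)
{
    /* the "boot parameter": the user has to pick a mode before knowing
     * what the workload will look like, and is then stuck with it for
     * every workload that follows */
    const struct toy_sched_ops *ops = &modes[0];
    for (size_t i = 0; i < sizeof(modes) / sizeof(modes[0]); i++)
        if (argc > 1 && strcmp(argv[1], modes[i].name) == 0)
            ops = &modes[i];

    int runtime[] = { 30, 5, 12 };  /* accumulated CPU time per task */
    printf("mode=%s -> next task is %d\n",
           ops->name, ops->pick_next(runtime, 3));
    return 0;
}

The moment a workload straddles the two (a desktop box doing a big
compile, say), neither table entry is the right one, and no amount of
tuning inside one mode fixes that.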

[ This, btw, has nothing to do with schedulers per se. We have had these
exact same issues in the memory management too - which is a lot more
complex than scheduling, btw. The whole page replacement algorithm is
something where you could easily have "specialized" algorithms in order
to work really well under certain loads, but exactly as with scheduling,
I will argue that it's a lot better to be "good across a wide swath of
loads" than to try to be "perfect in one particular modal setup". ]

This is also, btw, why I think that people who argue for splitting desktop
kernels from server kernels are total morons, and only show that they
don't know what the hell they are talking about.

The fact is, the work we've done on server loads has improved the desktop
experience _immensely_, with all the scalability work (or the work on
large memory configurations, etc etc) that went on there, and that used to
be totally irrelevant for the desktop.

And btw, the same is very much true in reverse: a lot of the stuff that
was done for desktop reasons (hotplug etc) has been a _huge_ boon for the
server side, and while there were certainly issues that had to be resolved
(the sysfs stuff so central to the hotplug model used tons of memory when
you had ten thousand disks, and server people were sometimes really
unhappy), a lot of the big improvements actually happen because something
totally _unrelated_ needed them, and then it just turns out that it's good
for the desktop too, even if it started out as a server thing or vice
versa.

This is why the whole "modal" mindset is stupid. It basically freezes a
choice that shouldn't be frozen. It sets up an artificial barrier between
two kinds of uses (whether they be about "server" vs "desktop" or "3D
gaming" vs "audio processing", or anything else), and that frozen choice
actually ends up being a barrier to development in the long run.

So "modal" things are good for fixing behaviour in the short run. But they
are a total disaster in the long run, and even in the short run they tend
to have problems (simply because there will be cases that straddle the
line, and show some of _both_ issues, and now *neither* mode is the right
one).

Linus