Re: a different approach to scheduling issues

Rik van Riel (H.H.vanRiel@phys.uu.nl)
Fri, 2 Oct 1998 11:00:34 +0200 (CEST)


On Thu, 1 Oct 1998, Jeremy Fitzhardinge wrote:

> > Hmm. Although it is an interesting idea, it kinda conflicts
> > with the next part of your message...
>
> How so?
>
> >> Every now and then (a tunable which depends on how responsive you
> >> need your dynamic priorities to be) you go through and recalculate
> >> the relative importance of your running processes.
> >
> > This is in my patch. It recalculates the importance of
> > only the running processes as soon as each process on
> > the runqueue has used up its timeslice.
>
> That's a little more often than I'm talking about. You divide your
> scheduler into two parts: one which makes sure processes get CPU at
> a certain rate, and another which assigns what rate processes should
> get CPU at. The former is a pretty simple procedure which can be
> implemented by controlling the placement of processes on the
> run-queue, and has to be performed whenever something gets put on
> the run-queue.
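
If I understand you correctly, the split looks roughly like
this (a quick C sketch; the function and field names are mine,
not from any real patch):

        /* rough sketch only -- helper names are made up          */

        /* part 1: runs on every wakeup/enqueue, so it stays cheap */
        void place_on_runqueue(struct task_struct *p)
        {
                /* placement follows from the precomputed rate     */
                add_to_runqueue_by_rate(p, p->cpu_rate);
        }

        /* part 2: runs every 0.1-10 seconds, assigns the rates    */
        void assign_rates(void)
        {
                struct task_struct *p;

                for_each_task(p)
                        p->cpu_rate = rate_from_recent_behaviour(p);
        }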

There's only one catch with this. Processes are put
on the runqueue so often that, next to that, the
recalculation of running processes is almost negligible.
Every letter I type goes through X, then xterm, then
pine, and after that it goes back up again. That puts
3 processes on and off the runqueue, 2 of them 2 times.
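
For reference, the recalculation I mean has roughly this shape
(a sketch modelled on the schedule() recalculation loop, not
the literal patch; only the runnable tasks get touched):

        /* c is the best counter found on the runqueue, as in
         * schedule(); when every process there has used up its
         * timeslice, recompute the counters of running tasks only */
        if (!c) {
                struct task_struct *p;

                for_each_task(p)
                        if (p->state == TASK_RUNNING)
                                p->counter = (p->counter >> 1)
                                             + p->priority;
        }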

> The latter process can be done less frequently, and calculated based
> on the behaviour of the process since the last calculation (and
> other factors). By "less frequently", I mean somewhere in the order
> of .1-10 seconds, depending on what the nature of the machine's load
> is.

Determining the load and basing a decision on that data
will be about as expensive as simply not recalculating when
the process wakes up again in the same jiffy. That check is
one comparison and a jump in the best case; in the worst
case it is followed by 2 assignments and a comparison.
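
In rough C, the check I mean looks something like this (the
last_recalc field is made up for the example):

        /* skip the recalculation if this task already ran in the
         * current jiffy                                           */
        if (p->last_recalc != jiffies) {       /* compare + jump   */
                p->last_recalc = jiffies;      /* 2 assignments    */
                p->counter = (p->counter >> 1) + p->priority;
                if (p->counter > current->counter) /* + 1 compare  */
                        current->need_resched = 1;
        }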

Besides, your "calculating the nature of the machine's load"
might take so long that we'd increase the worst-case
latency to unacceptable levels.

It's not just about total overhead, it's also about
spreading out the work in order to avoid a bad worst case.

Rik.
+-------------------------------------------------------------------+
| Linux memory management tour guide. H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader. http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+
