On Tue, Aug 19, 2003 at 01:06:49PM +1000, Nick Piggin wrote:
It's done this way because this is really how the priorities are
enforced. With some complicated exceptions, every task will be
allowed to complete 1 timeslice before any task completes 2
(assuming they don't block).
So higher priority tasks need bigger timeslices.
also, i think dynamic priority should be used for timeslice calculation
instead of static priority. the reason is, if a low priority task gets a
priority boost (to prevent starvation, for example) it should use the
small timeslice corresponding to its new priority level, instead of
using its original large timeslice that can ruin the interactive feel.
Among other things, yes, I think this is a good idea too. I'll be
addressing both these issues in my scheduler fork.
I do have dynamic timeslices, but currently high priority tasks
still get big timeslices.
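To make that concrete, a dynamic-priority timeslice lookup could be sketched like this. It's a minimal sketch: the 0..139 priority range and the 10ms-200ms bounds are illustrative assumptions, not values from any particular kernel, and the lookup uses the task's *dynamic* priority so a starvation boost also changes the slice:

```c
#include <assert.h>

#define MIN_TIMESLICE_MS 10
#define MAX_TIMESLICE_MS 200
#define MAX_PRIO 140   /* assumed range 0..139, lower number = higher priority */

/* Scale the slice linearly with the task's dynamic priority, so a task
 * boosted to a new priority level gets the slice of that level instead
 * of the slice of its original static priority. */
static int task_timeslice_ms(int dynamic_prio)
{
    return MIN_TIMESLICE_MS +
           (MAX_TIMESLICE_MS - MIN_TIMESLICE_MS) *
           (MAX_PRIO - 1 - dynamic_prio) / (MAX_PRIO - 1);
}
```

With this mapping the highest priority (0) gets the full 200ms and the lowest (139) gets 10ms; swapping the sign of the scaling term would give the "boosted tasks get small slices" behaviour argued for above.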
TS = Time Slice
What needs to be changed is the 1TS per pass through the active array
concept.
Divide the time slice into smaller Time Units, so that you can add one unit
per priority level.
TU = Time Units
Then you account these TUs instead of slices.
so, if nice 19 has 1 TU, and nice -19 has 40 TUs (maybe ranging from 5ms -
200ms with a TU of 5ms).
So nice -19 can have a long time slice and run until it expires if it
doesn't get preempted.
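That nice-to-TU mapping can be written down directly. A sketch, assuming the usual -20..19 nice range so the extremes get 40 and 1 TUs (the "nice -19" in the example is taken to mean the high-priority end), with the 5ms TU from the example:

```c
#include <assert.h>

#define TU_MS 5   /* one Time Unit = 5ms, as in the example above */

/* Linear mapping from nice level (-20..19) to a Time Unit count:
 * nice 19 gets 1 TU (5ms), nice -20 gets 40 TUs (200ms). */
static int task_time_units(int nice)
{
    return 20 - nice;
}

static int task_timeslice_ms(int nice)
{
    return task_time_units(nice) * TU_MS;
}
```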
The more I think this through, the harder it gets to take this concept to
completion, but the basic idea is to have multiple TUs per slice, and to
account on TUs as well as slices. That way, you have longer slices for
nicer tasks, but I'm not sure how it would fit into the two array scheduler
we have now. You'd have to have another list for the processes that have
used up their slice, but haven't waited long enough for them to get
another slice (because you want to give more total CPU percentage to the
higher priorities, while still giving them smaller slices).
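The tick-level accounting could be sketched as below. This is a toy simulation under assumed parameters (1ms ticks, 5ms TUs); it shows the two-level accounting only and leaves out the open question of the extra "used a slice but not expired" list:

```c
#include <assert.h>

#define TICKS_PER_TU 5   /* assumed: 1ms ticks, 5ms Time Units */

enum placement { KEEP_RUNNING, END_OF_ACTIVE_QUEUE, EXPIRED_ARRAY };

struct task {
    int tu_left_in_unit;   /* ticks left in the current Time Unit */
    int units_left;        /* Time Units left in the whole slice */
};

/* Called once per timer tick for the running task: round-robin with
 * priority peers at every TU boundary, but only move to the expired
 * array once the whole multi-TU slice is gone. */
static enum placement scheduler_tick(struct task *t)
{
    if (--t->tu_left_in_unit > 0)
        return KEEP_RUNNING;

    t->tu_left_in_unit = TICKS_PER_TU;
    if (--t->units_left > 0)
        return END_OF_ACTIVE_QUEUE;   /* slice not used up: requeue */

    return EXPIRED_ARRAY;             /* full slice consumed */
}

/* Helper for demonstration: run a fresh task with the given number of
 * TUs until it expires, pretending it is rescheduled immediately, and
 * return the total ticks it consumed. */
static int run_until_expired(int units)
{
    struct task t = { TICKS_PER_TU, units };
    int ticks = 0;
    while (scheduler_tick(&t) != EXPIRED_ARRAY)
        ticks++;
    return ticks + 1;
}
```

So a 2-TU task burns 10 ticks across two passes through its priority level before expiring, which is the "more total CPU, smaller individual slices" effect.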
Anyway, if anyone can take this idea and make it into a working scheduler,
I'd be most interested in the results...