Re: fairsched + O(1) process scheduler

From: Antonio Vargas (wind@cocodriloo.com)
Date: Thu Apr 03 2003 - 07:53:55 EST


On Wed, Apr 02, 2003 at 11:44:21AM -0500, Hubertus Franke wrote:
> On Tuesday 01 April 2003 17:19, Antonio Vargas wrote:
> > On Tue, Apr 01, 2003 at 08:41:26AM -0800, William Lee Irwin III wrote:
> > > On Tue, Apr 01, 2003 at 02:51:59PM +0200, Antonio Vargas wrote:
>
> To add to my previous reply...
> In your approach you might have a problem with starvation.
> High priority processes can simply eat all the ticks allocated to
> a user, and when lower priority tasks pop up at the top of
> the active list they are moved to the inactive list. This could happen
> over and over again, thus starving those tasks.

Sorry for not answering sooner, but I had my mail pointed at
lkml instead of my personal email.

Last night I continued coding, but I think I've hit a deadlock:
it seems the system stops calling schedule_tick() when there are
no more tasks to run, and that's where I use a workqueue so
that I can update the user timeslices outside interrupt context.

I suspect this because if I put printk()s in my code it continues to
run OK, but if I remove them it freezes hard.

About giving ticks to users, I've got an idea: assuming there are N
total processes in the system and a user has M processes, we should
give N * max_timeslice ticks to each user every M * max_timeslice *
num_users ticks. I've been thinking about it and it seems it would
balance the ticks correctly.
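
As a sketch in code, just to show the arithmetic (nr_total_tasks,
nr_users and the nr_tasks field on user_struct are placeholders,
assuming this sits in kernel/sched.c next to MAX_TIMESLICE):

/* N: total processes in the system, and the number of distinct users */
extern unsigned int nr_total_tasks;
extern unsigned int nr_users;

/* every user receives N * max_timeslice ticks... */
static inline unsigned int user_refill_ticks(void)
{
        return nr_total_tasks * MAX_TIMESLICE;
}

/* ...every M * max_timeslice * num_users ticks, where M is the number
 * of processes that user owns */
static inline unsigned int user_window_ticks(struct user_struct *up)
{
        return up->nr_tasks * MAX_TIMESLICE * nr_users;
}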

About starvation of low-priority processes, I can give an
example: assume user A has processes X and Y, user B has process Z,
and max_timeslice = 2:

max_timeslice = 2 and 3 total processes ==>
give 2 * 3 = 6 ticks to user A every 2 * 2 * 2 = 8 ticks
give 2 * 3 = 6 ticks to user B every 1 * 2 * 2 = 4 ticks

running  user   revised nr. of ticks for each entity
task     tick   after executing

                 A  B  X  Y  Z
- - 6 6 2 2 2
X A1 5 6 1 2 2
X B1 4 6 0 2 2
Y A2 3 6 0 1 2
Y B2 2 6 0 0 2
Z A3 2 5 0 0 1
Z B3 2 4 0 0 0

reset_tasks - 2 4 2 2 2

X A4 1 4 1 2 2
X B4 0 4 0 2 2

reset_user B4 0 10 0 2 2

Z A5 0 9 0 2 1
Z B1 0 8 0 2 0

reset_tasks - 0 8 2 2 2

Z A6 0 7 2 2 1
Z B2 0 6 2 2 0

reset_tasks - 0 6 2 2 2

Z A7 0 5 2 2 1
Z B3 0 4 2 2 0

reset_tasks - 0 4 2 2 2

Z A8 0 3 2 2 1

reset_user A8 6 3 2 2 1

Z B4 6 2 2 2 0

reset_user B4 6 8 2 2 0

X A1 5 8 1 2 0
X B1 4 8 0 2 0
Y A2 3 8 0 1 0
Y B2 2 8 0 0 0

reset_tasks - 2 8 2 2 2

Z A3 2 7 2 2 1
Z B3 2 6 2 2 0
X A4 1 6 1 2 0
X B4 0 6 0 2 0

... ad infinitum ...
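
For clarity, the two reset events in the trace would look roughly like
this (again only a sketch; the ticks and nr_tasks fields on user_struct
are made up, and nr_total_tasks / nr_users are the placeholders from
above):

/* reset_user: the user's window of M * max_timeslice * num_users ticks
 * has elapsed, so add another N * max_timeslice ticks to their quota */
static void reset_user(struct user_struct *up)
{
        up->ticks += nr_total_tasks * MAX_TIMESLICE;
}

/* reset_tasks: every runnable task has used up its timeslice, so hand
 * out a fresh max_timeslice (2 in the example) to all of them */
static void reset_tasks(void)
{
        task_t *p;

        read_lock(&tasklist_lock);
        for_each_process(p)
                p->time_slice = MAX_TIMESLICE;
        read_unlock(&tasklist_lock);
}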

probably getting a graph out of this would be good :)

I will attach my current not-so-working patch and will try
this scheme later tonight.

Note there are some spurious calls to run my workqueue due to debugging
tests.

Greets, Antonio.


