Tigran Aivazian wrote:
>
> Hi guys,
>
> Whilst we (on this cpu) are going through the runqueue and selecting the
> process with the highest goodness, someone else (schedule() running on
> another cpu) could be going through the entire set of processes and
> recalculating their dynamic priorities (p->counter), because schedule()
> drops the runqueue_lock at the recalculate label, presumably for
> performance reasons, i.e. to let another schedule() execute as soon as
> possible while we may be spending ages in the for_each_task() loop.
>
> Isn't this inconsistent? It means that what is selected as "highest
> goodness" on this cpu is not necessarily the fair choice, because it is
> based on stale values of p->counter for the tasks examined (i.e. those
> on the runqueue).
>
> Any comments?
I see two problems. I don't see what prevents a second (and third...)
CPU from trying to do the same thing, i.e. while CPU 1 is running over
all the tasks at the front of the list (none of which are likely to be
on the run list) CPU 2 discovers that the same recalculation needs to be
done. If I am correct, the read_lock will not stop it, and CPU 2 will
run down the same list doing the same thing.
The other possible outcome is the one you point out. Since all the
tasks on the run list had zero counters, save some "nice"ness, they will
all get the same new count, so it should not matter which is picked to
run. The only problem is a possible violation of any "nice"ness.
I have been toying around with the notion of updating only the tasks on
the run list and deferring the update of all the rest until they are put
back on the run list. This would make the recalculation _much_ faster,
and it could possibly be done without dropping the runqueue_lock. The
biggest drawback I see is that the "fair scheduler" becomes way more
difficult to implement.
George
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
This archive was generated by hypermail 2b29 : Mon Jun 26 2000 - 21:00:09 EST