The current Linux CPU scheduler doesn't recognize process aggregates while allocating bandwidth. As a result, a user can simply spawn a large number of processes and get more bandwidth than other users.
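
To make the unfairness concrete, here is a minimal stand-in for the two kernel builds benchmarked below (my own sketch, not part of the patch; the name hog.c and the 60-second window are arbitrary). Each invocation forks N pure CPU hogs; with plain per-task time slicing, a user running 20 hogs against a user running 4 ends up with roughly 20/24 of the machine:

/*
 * hog.c -- spawn N CPU hogs, standing in for "make -jN".
 * Run "./hog 4" as one user and "./hog 20" as another, then
 * watch the per-user CPU split in top: shares track the task
 * counts, not the users.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int i, nr = (argc > 1) ? atoi(argv[1]) : 4;

	for (i = 0; i < nr; i++) {
		pid_t pid = fork();

		if (pid == 0)			/* child: pure CPU hog */
			for (;;)
				;
		if (pid < 0) {
			perror("fork");
			exit(1);
		}
	}

	sleep(60);			/* let both users' hogs compete */
	signal(SIGTERM, SIG_IGN);	/* parent survives the group kill */
	kill(0, SIGTERM);		/* terminate every hog in the group */
	while (wait(NULL) > 0)
		;
	return 0;
}
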
Here's a patch that provides fair allocation for all users in a system.
Some benchmark numbers with and without the patch applied follow:
user "vatsa" user "guest"
(make -s -j4 bzImage) (make -s -j20 bzImage)
2.6.20-rc5 472.07s (real) 257.48s (real)
2.6.20-rc5+fairsched 766.74s (real) 766.73s (real)

1. If I interpret these numbers correctly, then your scheduler is not work-conserving, i.e. 766.74 + 766.73 >> 472.07 + 257.48. Why does it slow down both users so much?

2. Compilation of a kernel is quite a CPU-bound task, so it's not that hard to be fair :) Another worthy benchmark would be: can you please try some other applications, e.g. pipe-based context switching, the Java Volano benchmark, etc.?
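
For reference, a rough sketch of the kind of pipe-based context-switch test meant here, in the spirit of lmbench's lat_ctx (the file name pipe_ctx.c and the iteration count are my choices; the number it prints is only indicative):

/*
 * pipe_ctx.c -- two processes bounce one byte over a pipe pair,
 * forcing a context switch per hop.
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 100000

int main(void)
{
	int ping[2], pong[2];
	char byte = 'x';
	struct timeval start, end;
	long i, usec;
	pid_t pid;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {			/* child: echo each byte back */
		for (i = 0; i < ITERATIONS; i++) {
			read(ping[0], &byte, 1);
			write(pong[1], &byte, 1);
		}
		_exit(0);
	}

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERATIONS; i++) {	/* parent: ping, await pong */
		write(ping[1], &byte, 1);
		read(pong[0], &byte, 1);
	}
	gettimeofday(&end, NULL);
	wait(NULL);

	usec = (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec);
	printf("%.2f usec per round trip (~2 context switches each)\n",
	       (double)usec / ITERATIONS);
	return 0;
}
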

Srivatsa, you can't measure work conservation by summing! Everything is run _concurrently_. A proof of losing computing power is to show MAX(new_algorithm execution_times) > MAX(old_algorithm execution_times). Anyway... it still seems lots of power is lost: MAX(766.74, 766.73) >> MAX(472.07, 257.48).
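
To spell out the arithmetic on the table above (a throwaway helper, not from the thread): summing the run times measures total CPU seconds burned, while MAX measures when the last user finishes; by either metric the fairsched numbers look worse here.

/* metrics.c -- the two competing fairness metrics from this thread. */
#include <stdio.h>

static double max(double a, double b)
{
	return a > b ? a : b;
}

int main(void)
{
	double old_a = 472.07, old_b = 257.48;	/* 2.6.20-rc5 */
	double new_a = 766.74, new_b = 766.73;	/* 2.6.20-rc5+fairsched */

	/* Summing counts total CPU seconds consumed by both builds. */
	printf("sum:      %7.2f -> %7.2f\n", old_a + old_b, new_a + new_b);

	/* With concurrent jobs, lost capacity shows up in the makespan:
	 * the time until the LAST user finishes. */
	printf("makespan: %7.2f -> %7.2f\n",
	       max(old_a, old_b), max(new_a, new_b));
	return 0;
}
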