But the duration of a "pass" is not fixed. One user doing "cat bigfile >
/dev/null" can monopolize the whole disk until it's done catting
bigfile; a database process (or three) generating many random accesses to
the database file would produce a similar mess, perhaps of greater
duration relative to their size because of all the time spent seeking and
waiting for the disk to spin. Meanwhile, the rest of the requests sit and
wait, and who cares if they're guaranteed to be handled "this pass" when
that's 10+ seconds from now...
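To make the starvation concrete, here's a toy model of an unbounded pass
(plain user-space C, nothing to do with the real driver): a greedy reader
always has its next sequential chunk queued, someone else's request is
parked a million sectors out, and the elevator always takes the nearest
request in the direction of travel.

#include <stdio.h>

int main(void)
{
    unsigned long head = 0;         /* current head position        */
    unsigned long waiter = 1000000; /* somebody else's lone request */
    unsigned long served;

    for (served = 0; ; served++) {
        unsigned long greedy = head + 1; /* cat's next chunk */
        if (greedy < waiter)
            head = greedy;  /* the sweep reaches cat first, again */
        else
            break;          /* only now does it reach the waiter  */
    }
    printf("request at sector %lu waited behind %lu others\n",
           waiter, served);
    return 0;
}

The waiter is "guaranteed" service this pass, but the pass only ends when
the greedy reader runs out of file.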
It's clear that you do have to impose some sort of cutoff at some point.
What that is, I can't say for sure. But how come ls -R and du don't hang
the system nowadays?
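If I had to guess at the shape of the cutoff, it would be something like
this: stamp every request with a deadline when it's queued, keep a FIFO
alongside the sorted queue, and let an expired deadline override the
sweep. This is only a sketch with made-up names (pick_next_request,
MAX_WAIT, a stand-in jiffies) and a made-up 300-tick limit, not code from
any actual kernel:

/* The queue is kept in elevator (sector) order as usual, plus a
 * FIFO in arrival order.  When the oldest request has waited too
 * long, it goes next no matter where the head is. */

#define MAX_WAIT 300              /* ticks a request may starve */

unsigned long jiffies;            /* stand-in for the current tick count */

struct request {
    struct request *sorted_next;  /* elevator order, by sector */
    struct request *fifo_next;    /* arrival order */
    unsigned long sector;
    unsigned long expires;        /* jiffies + MAX_WAIT, set on queueing */
};

struct request *sorted_head;      /* next request in the sweep */
struct request *fifo_head;        /* oldest unserved request */

struct request *pick_next_request(void)
{
    /* Past its deadline?  The oldest request wins, even if the
     * sweep wouldn't reach its sector for another ten seconds. */
    if (fifo_head && jiffies >= fifo_head->expires)
        return fifo_head;
    return sorted_head;           /* otherwise keep sweeping */
}

Bound the damage that way and "this pass" stops mattering: nobody waits
more than MAX_WAIT ticks, no matter what cat is up to.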
Keith
-- "The avalanche has already started; |Linux: http://www.linuxhq.com |"Zooty, it is too late for the pebbles to |KDE: http://www.kde.org | zoot vote." Kosh, "Believers", Babylon 5 |Keith: kwrohrer@enteract.com | zoot!" www.midwinter.com/lurk/lurker.html |http://www.enteract.com/~kwrohrer | --Rebo- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu Please read the FAQ at http://www.tux.org/lkml/