On Wed, Aug 23 2000, Linus Torvalds wrote:
> Neil Brown <neilb@cse.unsw.edu.au> wrote:
> >
> >Not necessarily. When a request is freed, the oldest waiting thread
> >is woken, but it might not actually get to run before some other
> >thread steals the request. You could force a strict ordering if you
> >really wanted to, but I don't know how much it would help. See
> >STRICT_REQUEST_ORDERING in the patch below.
>
> Neil, I suspect the request ordering is secondary, and the real problem
> is that at some point we get into this awful steady state where we
> create new requests at the same pace as we get rid of old ones, and we
> always end up waiting for the next request to be free'd.
That is easy to do: just flood the queue and this will happen. I've
watched it happen.

> The "always end up waiting" thing means that we won't do a good job on
> read-ahead etc (because suddenly all our request stuff will be
> synchronous wrt the disk), so it _would_ impact performance. I think.
>
> Making the request ordering stricter won't help with this situation - it
> just makes the bad behaviour more fair. What _should_ help is to
> "batch" the freeing of requests, so that you don't end up waking up
> anybody (and everybody blocks on the requests being empty) until you've
> free'd up, say, half of the request queue again.
I did a quick patch doing just this -- and an equally quick dbench run
showed some improvement when batching request frees 64 at a time:
128M RAM used
test7 stock (plus Neil's remerge-on-block patch, I liked it)
burns:/mnt # ./dbench 48
48 clients started
Throughput 20.224 MB/sec (NB=25.2799 MB/sec 202.24 MBit/sec)
test7 with QUEUE_NR_REQUESTS >> 2 batched frees (+ Neil's patch, of course)
burns:/mnt # ./dbench 48
48 clients started
Throughput 23.482 MB/sec (NB=29.3525 MB/sec 234.82 MBit/sec)
Patch attached, if anyone else wants to give it a go.
--
* Jens Axboe <axboe@suse.de>
* SuSE Labs
This archive was generated by hypermail 2b29 : Thu Aug 31 2000 - 21:00:13 EST