2.0.34 / buffer.c / _large_ dirty lists.

Michael O'Reilly (michael@metal.iinet.net.au)
09 Jun 1998 14:46:32 +0800


I've got a machine that regularly sits with a 200-300 meg disk
cache, 20-40 meg of dirty buffers, and 30-odd very busy disk
readers and writers. One of the big problems with this is that it
takes a _long_ time for sync() to run (90 minutes!), because the
dirty blocks are being generated about as fast as they're getting
written to disk.

Now, the locality of writes doesn't actually appear to be that bad,
but with such a high number of dirty buffers, and a request queue
that's only 128 items long, it's fairly unlikely that nearby dirty
buffers get joined into the same request.

My reading is that ll_rw_block(WRITEA, ...) will attempt to take a
buffer and join it to an existing request; failing that, it will
create a new request if one is free, and otherwise exit without
doing anything.

My question: Is it safe to do something like

foreach dirty_buffer
        ll_rw_block(WRITEA, ...);

as soon as the request queue fills? The idea is to find all the
dirty buffers that could be merged into the current request list, and
join them in there, in an attempt to decrease the number of seeks
required to get the buffers to disk.

Is this a good idea?

Another question: how sane is it to dramatically increase the size
of the request queue (say, 1024 instead of 128)?
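If the limit is the NR_REQUEST constant in drivers/block/ll_rw_blk.c
(I'm going from memory here, so treat the name and location as an
assumption to be checked against your tree), the change itself would
be a one-liner:

```c
/* drivers/block/ll_rw_blk.c -- size of the shared request table.
 * Name and current value assumed from memory; verify before use. */
#define NR_REQUEST      1024    /* was 128 */
```

The interesting part is presumably not the edit but the side
effects: more slots means more memory pinned in requests and longer
scans when looking for a request to merge with.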

The other question is: this machine is actually working on a 7-disk
array. It seems to frequently happen that the request queue fills with
a bunch of requests for one disk, preventing the other disks from doing
anything even though they're idle. Any ideas on this one?

