On Tue, Mar 24, 2009 at 12:32 AM, Jesper Krogh <jesper@xxxxxxxx> wrote:
David Rees wrote:
The 480 seconds is not a "wait time" but the time that passes before the
message is printed. The kernel default used to be 120 seconds, but Ingo
Molnar changed it back in September. I get a lot less noise, but it really
doesn't tell anything about the nature of the problem.
The system's specs:
32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
RAID10, connected over 4Gbit Fibre Channel. I'll leave it up to you to
decide whether that's fast or slow.
The drives should be fast enough to saturate 4Gbit FC in streaming
writes. How fast is the array in practice?
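A quick way to answer that in practice is a streaming write with dd and a forced fsync, so the MB/s figure reflects the array rather than the page cache. This is only a sketch; TARGET is an assumed path and defaults to a temp file just so the snippet runs anywhere, but for a real measurement you would point it at a file on the FC array:

```shell
# Hypothetical streaming-write check. Point TARGET at a file on the array
# under test; conv=fsync flushes to disk before dd reports its rate.
TARGET=${TARGET:-$(mktemp)}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n1
rm -f "$TARGET"
```

For a larger array you would want a bigger count (several GB) so the run isn't dominated by controller cache.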
The strange thing is that the above process (updatedb.mlocate) is writing
to /, which is a device with no activity at all. All the activity is on
the Fibre Channel device above, but processes writing outside it seem to
be affected as well.
Ah. Sounds like your setup would benefit immensely from the per-bdi
writeback patches from Jens Axboe. I'm sure he would appreciate some
feedback on them from users like you.
What are your vm.dirty_background_ratio and vm.dirty_ratio set to?

2.6.29-rc8 defaults:
jk@hest:/proc/sys/vm$ cat dirty_background_ratio
5
jk@hest:/proc/sys/vm$ cat dirty_ratio
10
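To see what those ratios mean in absolute terms on this box, the thresholds can be computed directly (a sketch assuming the 32GB of RAM mentioned above; the ratios are percentages of memory):

```shell
# Dirty-data thresholds implied by the sysctls above, for 32 GiB of RAM.
mem_bytes=$((32 * 1024 * 1024 * 1024))
background=$((mem_bytes * 5 / 100))   # dirty_background_ratio = 5
foreground=$((mem_bytes * 10 / 100))  # dirty_ratio = 10
echo "background writeback starts at: $((background / 1024 / 1024)) MiB"
echo "writers are throttled at:       $((foreground / 1024 / 1024)) MiB"
```

So background writeback kicks in around 1.6GB of dirty data, and writers start being throttled around 3.2GB.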
On a 32GB system that's 1.6GB of dirty data (the 5% background threshold),
but your array should be able to write that out fairly quickly (in a
couple of seconds) as long as it's not too random. If it's spread all
over the disk, write throughput will drop significantly. How fast is data
being written to disk when your system suffers from large write latency?
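One simple way to watch this during a stall is to sample the Dirty and Writeback counters from /proc/meminfo; how fast Dirty shrinks tells you the effective writeback rate. A minimal sketch (these are standard Linux meminfo fields; in practice you'd run it for longer than three samples, or use iostat/vmstat alongside):

```shell
# Sample outstanding dirty data once a second while the latency occurs.
for i in 1 2 3; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    sleep 1
done
```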