> I don't know why the single-stream case would be slower, but the two-stream
> case is probably due to writeback changes interacting with a weakness in
> the block allocator. 10 megs/sec is pretty awful either way.

The 10MB/s figure is just because I did the test on an old machine; it
maxes out at 15MB/s with "hdparm -t".
> Either way, you have intermingled blocks in the files.

Yes, the blocks are intermingled. Thanks for the explanation of the
2.4/2.6 difference.
> Reads will be slower too - you will probably find that reading back a file

Yes, reads run at 50% for 2 streams, 25% for 4, etc. 2.4 and 2.6 perform
the same.
> You can probably address it quite well within the
> application itself by buffering up a good amount of data for each write()
> call. Maybe a megabyte.

Writes in the 256kB - 1MB region do avoid the problem. Unfortunately, the
way the application is written makes this tricky to do: it wants to write
out the data one frame at a time, typically 10 - 50kB.
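The buffering suggested above can be sketched as a small per-stream
accumulator that collects frames and only calls write() once about a
megabyte has built up, so each stream hands the allocator large
contiguous runs. All names and sizes here are illustrative, not taken
from the application in question:

```c
/* Sketch: accumulate ~10-50kB frames into a 1MB buffer per stream and
 * flush with one large write(), instead of one write() per frame. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (1024 * 1024)  /* flush in ~1MB chunks */

struct stream_buf {
    int fd;          /* destination file */
    size_t used;     /* bytes currently buffered */
    char data[BUF_SIZE];
};

/* Write out everything accumulated so far, handling short writes. */
static int stream_flush(struct stream_buf *s)
{
    size_t off = 0;
    while (off < s->used) {
        ssize_t n = write(s->fd, s->data + off, s->used - off);
        if (n < 0)
            return -1;
        off += (size_t)n;
    }
    s->used = 0;
    return 0;
}

/* Buffer one frame; only hits the filesystem when the buffer fills.
 * Assumes a frame is always smaller than BUF_SIZE. */
static int stream_write(struct stream_buf *s, const void *frame, size_t len)
{
    if (s->used + len > BUF_SIZE && stream_flush(s) < 0)
        return -1;
    memcpy(s->data + s->used, frame, len);
    s->used += len;
    return 0;
}
```

The frame-at-a-time interface of the application is preserved; only the
point at which data reaches the kernel changes.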
> XFS will do well at this.

Yes, both XFS and JFS perform much better. Here is a summary of some
tests done on 2.6; these were done on a faster machine / disk
combination. This was the original test program, which also measured the
read speeds; you can get it from http://www.jburgess.uklinux.net/slow.c
> You might be able to improve things significantly on ext2 by increasing
> EXT2_DEFAULT_PREALLOC_BLOCKS by a lot - make it 64 or 128. I don't recall
> anyone trying that.

I'll give it a go.
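For reference, the suggested change is a one-line edit to the ext2
headers; a sketch against a 2.6-era tree (the exact file location and
default value may differ between kernel versions):

```diff
--- a/include/linux/ext2_fs.h
+++ b/include/linux/ext2_fs.h
-#define EXT2_DEFAULT_PREALLOC_BLOCKS	8
+#define EXT2_DEFAULT_PREALLOC_BLOCKS	64
```

A larger preallocation window should give each stream longer contiguous
runs of blocks, at the cost of some transient over-allocation while the
files are open.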
> But I must say, a 21x difference is pretty wild. What filesystem was that
> with, and how much memory do you have, and what was the bandwidth of each
> stream, and how much data is the application passing to write()?

The results were from running the test program I attached to the original
email. It was writing 4kB at a time on an ext2 filesystem. It tries to
write the data in a tight loop, taking as much bandwidth as it can get.