Emmanuel Florac wrote:
> I post there because I couldn't find any information about this
> elsewhere: on the same hardware (Athlon X2 3500+, 512MB RAM, 2x400 GB
> Hitachi SATA2 hard drives) the 2.4 Linux software RAID-1 (tested on two
> 2.4.x releases, slightly patched to recognize the hardware :p) is way
> faster than 2.6 (tested on four 2.6.x releases), especially for writes.
> I actually made the test on several different machines (same hard
> drives though) and it remained consistent across the board, with
> /mountpoint a software RAID-1.
>
> Actually, checking disk activity with iostat or vmstat clearly shows a
> cache effect that is much more pronounced on 2.4 (i.e. writing goes on
> much longer in the background), but it doesn't really account for the
> difference. I've also tested it through NFS from another machine
> (Gigabit Ethernet):
>
> dd if=/dev/zero of=/mountpoint/testfile bs=1M count=1024
>
>   kernel    2.4       2.6       2.4 thru NFS   2.6 thru NFS
>   write     90 MB/s   65 MB/s   70 MB/s        45 MB/s
>   read      90 MB/s   80 MB/s   75 MB/s        65 MB/s
>
> Duh. That's terrible. Does it mean I should stick to (heavily
> patched...) 2.4 for my file servers or... ? :)

Chris Snook wrote:
> It means you shouldn't use dd as a benchmark.

Bill Davidsen wrote:
> What do you use as a benchmark for writing large sequential files or
> reading them, and why is it better than dd at modeling programs which
> read or write in a similar fashion? Media programs often do data
> access in just this fashion: multi-channel video capture, streaming
> video servers, and similar.

Chris Snook wrote:
> dd uses unaligned stack-allocated buffers and defaults to block-sized
> I/O. To call this inefficient is a gross understatement. Modern
> applications which care about streaming I/O performance use large,
> aligned buffers which allow the kernel to efficiently optimize things,
> or they use direct I/O to do it themselves, or they make use of system
> calls like fadvise, madvise, splice, etc. that inform the kernel how
> they intend to use the data or pass the work off to the kernel
> completely. dd is designed to be incredibly lightweight, so it works
> very well on a box with a 16 MHz CPU. It was *not* designed to take
> advantage of the resources modern systems have available to enable
> maximum throughput.
>
> I suggest an application-oriented benchmark that resembles the
> application you'll actually be using.
I was trying to speed up an app¹ I wrote which streams parts of a large
file to separate files, and tested your advice above (on ext3 on a
2.6.x-85.fc8 kernel). I tested reading blocks of 4096 bytes into both
stack-allocated and page-aligned buffers, but there were negligible
differences in CPU usage between the aligned and non-aligned buffer
cases.

I guess the kernel could be clever and only copy the page to userspace
on modification in the page-aligned case, but the benchmarks at least
don't suggest this is what's happening?

What difference exactly should be expected from using page-aligned
buffers?

Note I also tested using mmap to stream the data, and there was a
significant decrease in CPU usage in both user and kernel space, as
expected, due to the data not being copied out of the page cache.