First, the original benchmarks with 6 SATA drives with fixed formatting,
using right justification and the same decimal point precision throughout:
http://home.comcast.net/~jpiszcz/20080607/raid-benchmarks-decimal-fix-and-right-justified/disks.html
Now for the VelociRaptors! Ever wonder what kind of speed is possible
with 3-, 4-, 5-, 6-, 7-, 8-, 9- and 10-disk RAID5s? I ran a loop to find
out (sketched below, after the links); each run is executed three times
and the average is taken of all three runs per RAID5 disk set.
In short? The 965 no longer does justice with faster drives; a new chipset
and motherboard are needed. After reading or writing to 4-5 VelociRaptors,
it saturates the bus/965 chipset.

Here is a picture of the 12 VelociRaptors I tested with:
http://home.comcast.net/~jpiszcz/20080607/raid5-benchmarks-3to10-veliciraptors/raptors.jpg

Here are the bonnie++ results:
http://home.comcast.net/~jpiszcz/20080607/raid5-benchmarks-3to10-veliciraptors/veliciraptor-raid.html

For those who want the results in text:
http://home.comcast.net/~jpiszcz/20080607/raid5-benchmarks-3to10-veliciraptors/veliciraptor-raid.txt
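
A minimal sketch of the loop, assuming disks sdb..sdk, a 16 GB bonnie++
working set and a scratch mount point /r1 (these specifics are my
assumptions, not the exact commands from the run):

#!/bin/sh
# For each array size, build the RAID5, bench it three times, tear it down.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk"
for N in 3 4 5 6 7 8 9 10
do
        # Take the first N disks from the list.
        SET=$(echo $DISKS | cut -d' ' -f1-$N)
        mdadm --create /dev/md3 --run --level=5 --chunk=1024 \
              --metadata=0.90 --raid-devices=$N $SET
        # Wait for the initial build to finish before benchmarking.
        while grep -qE 'resync|recovery' /proc/mdstat; do sleep 60; done
        mkfs.xfs -f /dev/md3
        mount -o noatime,nodiratime,logbufs=8,logbsize=262144 /dev/md3 /r1
        for RUN in 1 2 3
        do
                bonnie++ -d /r1 -s 16384 -u root > /root/raid5-${N}disk-run${RUN}.txt
        done
        umount /r1
        mdadm --stop /dev/md3
        mdadm --zero-superblock $SET
done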
System used, same/similar as before:
Motherboard: Intel DG965WH
Memory: 8GiB
Kernel: 2.6.25.4
Distribution: Debian Testing x86_64
Filesystem: XFS with default mkfs.xfs parameters [auto-optimized for SW RAID]
Mount options: defaults,noatime,nodiratime,logbufs=8,logbsize=262144
Chunk size: 1024KiB
RAID5 Layout: Default (left-symmetric)
Mdadm Superblock used: 0.90
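
As a sanity check that mkfs.xfs really did pick up the SW RAID geometry
and that the array matches the parameters above, something along these
lines can be run once the filesystem is mounted (md3 and /r1 are the same
assumed names as in the sketch above):

# sunit/swidth should reflect the 1024 KiB chunk and the stripe width.
xfs_info /r1
# Chunk size, layout and superblock version as created.
mdadm --detail /dev/md3 | egrep 'Version|Layout|Chunk'
cat /proc/mdstat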
Optimizations used (the last one is for the CFQ scheduler); together they
improve performance by a modest 5-10 MiB/s:
http://home.comcast.net/~jpiszcz/raid/20080601/raid5.html
# Tell user what's going on.
echo "Optimizing RAID Arrays..."
# Define DISKS.
cd /sys/block
DISKS=$(/bin/ls -1d sd[a-z])
# Set read-ahead: 65536 x 512-byte sectors = 32 MiB.
echo "Setting read-ahead to 32 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3
# Set stripe_cache_size for RAID5.
echo "Setting stripe_cache_size to 16384 pages for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size
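# (stripe_cache_size is counted in stripe entries; each entry holds one
#  4 KiB page per member disk, so 16384 entries on a 10-disk array pin
#  16384 x 4 KiB x 10 = 640 MiB of RAM.)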
# Disable NCQ on all disks.
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
echo "Disabling NCQ on $i"
echo 1 > /sys/block/"$i"/device/queue_depth
done
# Fix slice_idle.
# See http://www.nextre.it/oracledocs/ioscheduler_03.html
echo "Fixing slice_idle to 0..."
for i in $DISKS
do
echo "Changing slice_idle to 0 on $i"
echo 0 > /sys/block/"$i"/queue/iosched/slice_idle
done
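
To verify the script took effect, the same knobs can simply be read back:

# Each value should match what was written above.
blockdev --getra /dev/md3                 # expect 65536
cat /sys/block/md3/md/stripe_cache_size   # expect 16384
for i in /sys/block/sd[a-z]
do
        echo "$i: queue_depth=$(cat $i/device/queue_depth)" \
             "slice_idle=$(cat $i/queue/iosched/slice_idle)"
done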