Re: Poor Software RAID-0 performance with 2.6.14.2

From: Neil Brown
Date: Mon Nov 21 2005 - 16:56:58 EST


On Monday November 21, lroland@xxxxxxxxx wrote:
> I have created a stripe across two 500Gb disks located on separate IDE
> channels using:
>
> mdadm -Cv /dev/md0 -c32 -n2 -l0 /dev/hdb /dev/hdd
>
> The performance is awful on both kernel 2.6.12.5 and 2.6.14.2 (even
> with hdparm and blockdev tuning); both bonnie++ and hdparm (included
> below) show a single disk operating faster than the stripe:
>
> ----
> dkstorage01:~# hdparm -t /dev/md0
> /dev/md0:
> Timing buffered disk reads: 182 MB in 3.01 seconds = 60.47 MB/sec
>
> dkstorage02:~# hdparm -t /dev/hdc1
> /dev/hdc1:
> Timing buffered disk reads: 184 MB in 3.02 seconds = 60.93 MB/sec
> ----

Could you try hdparm tests on the two drives in parallel?
hdparm -t /dev/hdb & hdparm -t /dev/hdd

It could be that the controller doesn't handle parallel traffic very
well.
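A quick sanity check on the numbers: a single drive manages about 60
MB/sec in your figures above, so two drives on genuinely independent
channels should report close to 60 MB/sec each when run in parallel,
around 120 MB/sec in total. If they drop to roughly 30 MB/sec each,
the transfers are being serialised somewhere below md, and no raid0
tuning will recover the difference.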


>
> I am aware of the CPU overhead of software RAID, but such a
> degradation should not happen with raid 0, especially not when the
> OS is located on a separate SCSI disk - the IDE disks should just be
> ready to work.

raid0 has essentially zero cpu overhead - maybe a couple of hundred
instructions per request, which is lost in the noise. It just works
out which drive each request should go to, and directs it there.
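In outline (assuming two equal-sized drives, so a single raid0 zone):
with your 32k chunk size, byte offset B of the array falls in chunk
C = B / 32k; that chunk sits on drive (C mod 2), at offset
(C / 2) * 32k + (B mod 32k) on that drive. That is essentially all
the per-request work raid0 does.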


>
> There have been some earlier reports of this problem, but they all
> seem to end more or less inconclusively (here is one:
> http://kerneltrap.org/node/4745). Some people favor switching to
> dmraid with device mapper - is this the de facto standard today?
>

The kerneltrap reference is about raid5.
raid5 is implemented very differently to raid0.

It might be worth experimenting with different read-ahead values using
the 'blockdev' command. Alternatively, try a larger chunk size; see
the examples below.
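
For example (the values are only starting points to experiment with,
not recommendations):

blockdev --getra /dev/md0        # current read-ahead, in 512-byte sectors
blockdev --setra 8192 /dev/md0   # try 4MB of read-ahead

and to rebuild the array with a larger chunk (this destroys its
contents):

mdadm -Cv /dev/md0 -c128 -n2 -l0 /dev/hdb /dev/hdd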

I don't think there is a de facto standard. Many people use md. Many
use dm.

NeilBrown