Re: Best Low-cost IDE RAID Solution For 2.6.x? (OT?)

From: bill davidsen
Date: Wed Jan 07 2004 - 18:42:39 EST


In article <Pine.LNX.4.44.0312281516060.21070-100000@xxxxxxxxxxxxxxxx>,
Joel Jaeggli <joelja@xxxxxxxxxxxxxxxxxxxx> wrote:

| on the low-end promise controllers (i.e. everything but the sx4000 and
| sx6000) it's not hardware raid, it's the driver doing raid, in this case
| the promise FastTrak driver. among other things their driver doesn't do
| raid 5, just 0, 1, or 0+1. so software raid does raid 5 and more.
|
| to quote the config_blk_dev_ataraid:
|
| CONFIG_BLK_DEV_ATARAID:
| Say Y or M if you have an IDE Raid controller and want linux
| to use its softwareraid feature. You must also select an
| appropriate for your board low-level driver below.
|
| Note, that Linux does not use the Raid implementation in BIOS, and
| the main purpose for this feature is to retain compatibility and
| data integrity with other OS-es, using the same disk array. Linux
| has its own Raid drivers, which you should use if you need better
| performance.
|
| I've also had two of the higher-end promise sx6000's, and the suffering I
| incurred making them work with the i2o drivers, particularly when promise
| revised the bioses on those cards, means I can't recommend them for much.
|
| > raid instead, what would be the CPU overhead?
|
| The linux software raid layer is actually faster under most circumstances
| than the hardware raid controllers doing raid5. but since the promise can't
| actually do that (raid 5) anyway it's hard to compare them directly. the
| 3ware cards in my experience result in lower cpu utilization for i/o, but
| their hardware tops out at 80-145MB/s depending on model, and we have
| software raid subsystems that go faster than that at the expense of some
| of the rather bountiful cpu we have in those boxes.
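
For reference, doing the same thing with the kernel's own md layer is
roughly this (a sketch from memory, untested here; the device names are
only examples and assume three drives on separate IDE channels):

  # kernel side: CONFIG_MD, CONFIG_BLK_DEV_MD and CONFIG_MD_RAID5
  # userspace: mdadm (raidtools works too, with different syntax)
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/hde1 /dev/hdg1 /dev/hdi1
  cat /proc/mdstat           # watch the initial resync
  mke2fs -j /dev/md0         # then mount it like any other block device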

The problem with Linux software RAID is that it's in the kernel: if the
boot disk dies you don't get to boot and use the kernel. With controller
RAID you can still boot from the RAID-1 copy, and sometimes that's the
feature which counts more than all the rest.
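
The usual mitigation is to install the boot loader on both halves of the
mirror, so the BIOS can fall back to the surviving disk. With grub it is
something like this (again only a sketch, not tested here; /dev/hdc and
the partition layout are examples, and the second disk is mapped as (hd0)
because that's what it becomes once the first one is gone):

  grub --batch <<EOF
  device (hd0) /dev/hdc
  root (hd0,0)
  setup (hd0)
  quit
  EOF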

Now if you run LinuxBIOS you CAN do this all yourself, but unless you do
there are still some reliability issues.

Also note that if you do mirroring you use twice as much disk buffer
space to queue the two writes, which could impact performance on a
low-memory system. That's theory; I haven't benchmarked it.
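
If anyone wants to check that, a crude comparison would be something like
the following (only a sketch; the mount points are examples, and the
numbers mean nothing except relative to the same run on a single plain
disk):

  # ~512MB sequential write onto the mirror, then repeat on a bare disk
  sync; time ( dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=512; sync )
  vmstat 1        # run alongside and watch the buff/cache columns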

Promise used to make a little adaptor which did RAID-1 on two drives and
made them look like one. This was not at the controller as I recall, but
in the cable, where the data was split. That would give boot reliability
without losing a device pair, but I have no idea how well it worked
other than in a demonstration.
--
bill davidsen <davidsen@xxxxxxx>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.