Re: BUG:write data to degrade raid5

From: Joachim Otahal
Date: Fri Mar 19 2010 - 14:37:30 EST


Kristleifur Daðason wrote:
On Fri, Mar 19, 2010 at 6:20 PM, Joachim Otahal <Jou@xxxxxxx> wrote:

Jin Zhencheng wrote:

Hi,

I am using kernel 2.6.26.2.

Here is what I do:

1. Create a RAID5 array:
mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
--metadata=1.0 --assume-clean

2. Write data to the array:
dd if=/dev/zero of=/dev/md5 bs=1M &

3. mdadm --manage /dev/md5 -f /dev/sda

4. mdadm --manage /dev/md5 -f /dev/sdb

If I fail two disks, the kernel prints an oops and goes down.

Does somebody know why?

Is this an MD/RAID5 bug?


RAID5 can only tolerate the failure of ONE drive out of ALL its members. If you
want to survive two drive failures you will have to use RAID6, or
RAID5 with one hot spare (and give it time to rebuild before
failing the second drive).
PLEASE read the documentation on RAID levels, for example on Wikipedia.
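
For illustration, assuming the same four example devices, either of these untested sketches would give you that extra margin:

mdadm -C /dev/md6 -l 6 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

The first creates a RAID6 array that tolerates two failed members; the second creates a three-disk RAID5 with one hot spare (-x 1), which only helps once the rebuild onto the spare has finished.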


That is true, but should we get a kernel oops and a crash when two RAID5 drives are failed? (THAT part looks like a bug!)

Jin, can you try a newer kernel and a newer mdadm?
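(For reference, the versions currently in use can be checked with e.g. "uname -r" and "mdadm --version".)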

-- Kristleifur
You are probably right.
My kernel version is "Debian 2.6.26-21lenny4", and I had no oopses during my hot-plug testing on the hardware I use md on. I think it may be the driver for his chips.

Jin:

Did you really use whole drives for the test, or loopback files, or partitions on the drives? I never did my hot-plug testing with whole drives in an array, only with partitions.
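
If you want to reproduce this without risking real disks, a loopback-based setup could look roughly like this (an untested sketch; the image paths, sizes, and loop device names are just assumptions):

for i in 0 1 2 3; do
  dd if=/dev/zero of=/tmp/md-test-$i.img bs=1M count=256
  losetup /dev/loop$i /tmp/md-test-$i.img
done
mdadm -C /dev/md5 -l 5 -n 4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 --metadata=1.0 --assume-clean

Failing two of the loop devices with "mdadm --manage /dev/md5 -f" should then show whether the oops depends on the real disk driver.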

Joachim Otahal

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/