On Fri, Mar 19, 2010 at 6:20 PM, Joachim Otahal <Jou@xxxxxxx> wrote:
You are probably right.
jin zhencheng wrote:
Hi,
I am using kernel 2.6.26.2.
What I did is as follows:
1. I created a RAID5:
mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
--metadata=1.0 --assume-clean
2. dd if=/dev/zero of=/dev/md5 bs=1M&
to write data to this RAID5
3. mdadm --manage /dev/md5 -f /dev/sda
4. mdadm --manage /dev/md5 -f /dev/sdb
If I fail two disks, the kernel prints an oops and
goes down.
Does somebody know why?
Is this an MD/RAID5 bug?
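Between steps 3 and 4 the array is already degraded, which you can see in /proc/mdstat before failing the second member. A minimal sketch of checking that (it parses a hard-coded sample line in the /proc/mdstat format so it runs anywhere; on a real system you would read /proc/mdstat itself, and the device names are only illustrative):

```shell
# Sample of a /proc/mdstat entry for a degraded 4-disk RAID5:
# the status bitmap [_UUU] marks one failed/missing member with '_'.
sample='md5 : active raid5 sdd[3] sdc[2] sdb[1] sda[0](F)
      3906764800 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]'

# Pull out the status bitmap and count the '_' characters.
status=$(printf '%s\n' "$sample" | grep -o '\[_*U[U_]*\]' | tail -n 1)
failed=$(printf '%s' "$status" | tr -cd '_' | wc -c)
echo "failed members: $((failed))"
```

With one member failed the array still runs; failing a second member takes a RAID5 below its minimum and the array cannot continue.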
RAID5 can only tolerate ONE failed drive among ALL its members. If you
want to be able to fail two drives you will have to use RAID6, or
RAID5 with one hot spare (and give it time to rebuild before
failing the second drive).
PLEASE read the documentation on RAID levels, e.g. on Wikipedia.
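A sketch of the two alternatives mentioned above (device names are placeholders, and --assume-clean is deliberately omitted so a real initial sync happens; do not run this against disks that hold data):

```shell
# RAID6: any two members may fail and the array survives.
mdadm -C /dev/md6 -l 6 -n 4 --metadata=1.0 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# RAID5 with one hot spare (-x 1): the spare only protects you once
# the rebuild onto it has *completed*; a second failure during the
# rebuild still loses the array.
mdadm -C /dev/md5 -l 5 -n 4 -x 1 --metadata=1.0 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch rebuild progress before failing anything else:
cat /proc/mdstat
```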
That is true,
but should we get a kernel oops and crash when two RAID5 drives fail? (THAT part looks like a bug!)
Jin, can you try a newer kernel, and a newer mdadm?
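When you retest, the exact versions are worth including in the report; a minimal way to collect them (the mdadm check is guarded since the tool may not be on every box):

```shell
# Report the running kernel and mdadm versions.
kernel=$(uname -r)
echo "kernel: $kernel"
if command -v mdadm >/dev/null 2>&1; then
    # mdadm prints its version banner on stderr, so merge the streams.
    mdadm --version 2>&1 | head -n 1
else
    echo "mdadm: not installed"
fi
```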
-- Kristleifur