Re: Software raid0 will crash the file-system, when each disk is 5TB

From: david
Date: Wed May 16 2007 - 14:09:12 EST


On Wed, 16 May 2007, Andreas Dilger wrote:

> On May 16, 2007 11:09 +1200, Jeff Zheng wrote:
> > We are using two 3ware disk array controllers, each of them connected to
> > 8 750GB hard drives, and we build a software raid0 on top of that. The
> > total capacity is 5.5TB+5.5TB=11TB.
> >
> > We use jfs as the file-system, and we have a test application that writes
> > data continuously to the disks. After writing 52 10GB files, jfs
> > crashed, and we are not able to recover it; fsck doesn't recognise it
> > anymore.
> > We then tried xfs with the same application; it lasted a little longer,
> > but gave a kernel crash later.
>
> Check if your kernel has CONFIG_LBD enabled.
>
> The kernel doesn't check if the block layer can actually write to
> a block device > 2TB.

My experience is that if you don't have CONFIG_LBD enabled, the kernel will report the larger disk as 2TB and everything will work; you just won't get all the space.
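To put numbers on that 2TB boundary, here's a quick back-of-the-envelope sketch (plain userspace C, not kernel code), assuming 512-byte sectors and a sector counter that is only 32 bits wide when CONFIG_LBD is off on a 32-bit kernel:

/* Sketch (userspace C, not kernel code): with 512-byte sectors, a 32-bit
 * sector counter tops out at 2^32 * 512 bytes = 2 TiB, which is where the
 * "block device > 2TB" concern above comes from. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t sector_size = 512;
    const uint64_t max_32bit_sectors = (uint64_t)UINT32_MAX + 1;   /* 2^32 */

    uint64_t limit_bytes = max_32bit_sectors * sector_size;
    printf("32-bit sector limit: %llu bytes (%.1f TiB)\n",
           (unsigned long long)limit_bytes,
           (double)limit_bytes / (1ULL << 40));

    /* The 11TB raid0 described above needs sector numbers well past 2^32. */
    uint64_t raid_bytes   = 11ULL * 1000 * 1000 * 1000 * 1000;
    uint64_t raid_sectors = raid_bytes / sector_size;
    printf("11TB raid0: %llu sectors (%s 2^32)\n",
           (unsigned long long)raid_sectors,
           raid_sectors > max_32bit_sectors ? ">" : "<=");
    return 0;
}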

Plus, he seems to be crashing after writing only around 500GB of data (52 x 10GB files).

And finally (if I am reading the post correctly), if he configures the drives as 4x2.75TB=11TB instead of 2x5.5TB=11TB, he doesn't have the same problem, which suggests the size of the individual raid members matters rather than the total size.
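For comparison, here's the same sort of arithmetic for the per-member sizes of the two layouts (assuming the 4-array layout is simply each controller split in half, i.e. roughly 2.75TB per member). It only lays out where each layout sits relative to the 2^32 boundary in sectors and in 1KB blocks, and doesn't claim to identify which kernel counter, if any, is actually overflowing:

/* Sketch: per-member sizes for the two layouts, expressed in 512-byte
 * sectors and in 1KB blocks, against the 2^32 boundary.  This is just
 * arithmetic; it doesn't say which kernel counter (if any) overflows. */
#include <stdio.h>
#include <stdint.h>

static void report(const char *layout, uint64_t member_bytes, int members)
{
    const uint64_t two32 = (uint64_t)1 << 32;
    uint64_t sectors = member_bytes / 512;     /* 512-byte sectors */
    uint64_t kblocks = member_bytes / 1024;    /* 1KB blocks       */

    printf("%s: %d members of %llu bytes\n", layout, members,
           (unsigned long long)member_bytes);
    printf("  %llu sectors (%s 2^32), %llu KB blocks (%s 2^32)\n",
           (unsigned long long)sectors, sectors > two32 ? ">" : "<=",
           (unsigned long long)kblocks, kblocks > two32 ? ">" : "<=");
}

int main(void)
{
    report("2x5.5TB",  5500ULL * 1000 * 1000 * 1000, 2);  /* layout that crashes */
    report("4x2.75TB", 2750ULL * 1000 * 1000 * 1000, 4);  /* layout reported to work */
    return 0;
}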

I'm getting ready to set up a similar machine that will have 3x10TB (three 15-disk arrays with 750GB drives), but it won't be ready to try this for a few more days.

David Lang