[145/264] md/raid10: Fix bug when activating a hot-spare.

From: Greg KH
Date: Wed Nov 09 2011 - 22:42:46 EST


3.1-stable review patch. If anyone has any objections, please let me know.

------------------

From: NeilBrown <neilb@xxxxxxx>

commit 7fcc7c8acf0fba44d19a713207af7e58267c1179 upstream.

This is a fairly serious bug in RAID10.

When a RAID10 array is degraded and a hot-spare is activated, the
spare does not take up the empty slot, but rather replaces the first
working device.
This is likely to make the array non-functional. It would normally
be possible to recover the data, but that would need care and is not
guaranteed.
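
To make the failure mode concrete, below is a minimal user-space C sketch of
the slot-search loop in raid10_add_disk(). Only the p->rdev test mirrors the
patch context; the struct, the pick_slot() helper and the example device names
are hypothetical stand-ins for the kernel code.

/*
 * Standalone illustration (not kernel code) of why the inverted
 * p->rdev test places the spare on top of a working device.
 */
#include <stdio.h>
#include <stddef.h>

struct mirror_info {
	const char *rdev;	/* NULL means the slot is empty */
};

/* Return the slot index chosen for the incoming spare, or -1 if none. */
static int pick_slot(struct mirror_info *mirrors, int nr, int buggy)
{
	for (int i = 0; i < nr; i++) {
		struct mirror_info *p = &mirrors[i];

		if (buggy) {
			/* Pre-fix test: skips every EMPTY slot, so the
			 * loop falls through at the first slot that still
			 * holds a working device and overwrites it. */
			if (!p->rdev)
				continue;
		} else {
			/* Fixed test: skips OCCUPIED slots and stops at
			 * the first empty one, where the spare belongs. */
			if (p->rdev)
				continue;
		}
		return i;
	}
	return -1;
}

int main(void)
{
	/* Degraded 3-device array: slot 1 has lost its device. */
	struct mirror_info mirrors[] = {
		{ "sda" }, { NULL }, { "sdc" },
	};

	printf("buggy test picks slot %d (clobbers a working device)\n",
	       pick_slot(mirrors, 3, 1));
	printf("fixed test picks slot %d (the empty slot)\n",
	       pick_slot(mirrors, 3, 0));
	return 0;
}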

This bug was introduced in commit
2bb77736ae5dca0a189829fbb7379d43364a9dac
which first appeared in 3.1.

Signed-off-by: NeilBrown <neilb@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxx>

---
drivers/md/raid10.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1337,7 +1337,7 @@ static int raid10_add_disk(mddev_t *mdde
 		mirror_info_t *p = &conf->mirrors[mirror];
 		if (p->recovery_disabled == mddev->recovery_disabled)
 			continue;
-		if (!p->rdev)
+		if (p->rdev)
 			continue;
 
 		disk_stack_limits(mddev->gendisk, rdev->bdev,

