[037/127] md/raid1: really fix recovery looping when single good device fails.
From: Greg KH
Date: Tue Dec 07 2010 - 21:14:29 EST
2.6.32-stable review patch. If anyone has any objections, please let us know.
------------------
From: NeilBrown <neilb@xxxxxxx>
commit 8f9e0ee38f75d4740daa9e42c8af628d33d19a02 upstream.
Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
problem where if a raid1 with just one good device gets a read-error
during recovery, the recovery would abort and immediately restart in
an infinite loop.
However, it depended on raid1_remove_disk removing the spare device
from the array, which does not happen in this case. So add a test
so that, in the 'recovery_disabled' case, the device will be removed.
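
For illustration, here is a minimal sketch of the resulting check. The
struct and function names below are simplified assumptions for this
sketch only, not the kernel source (which operates on mddev_t, conf_t
and rdev flag bits):

	struct sketch_mddev { int recovery_disabled; int degraded; };
	struct sketch_conf  { int raid_disks; };
	struct sketch_rdev  { int faulty; };

	/* Returns 1 if raid1_remove_disk-style logic would allow removal. */
	static int removal_allowed(struct sketch_mddev *mddev,
				   struct sketch_conf *conf,
				   struct sketch_rdev *rdev)
	{
		/* Before the patch, a non-faulty device in a degraded array
		 * was always refused (the real code returns -EBUSY), so the
		 * spare could never be dropped and recovery kept restarting.
		 * Once recovery_disabled is set, the refusal no longer
		 * applies and the device can be removed. */
		if (!rdev->faulty &&
		    !mddev->recovery_disabled &&
		    mddev->degraded < conf->raid_disks)
			return 0;	/* still busy: keep the device */
		return 1;		/* removal proceeds, breaking the loop */
	}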
This is suitable for any kernel since 2.6.29, which is when
recovery_disabled was introduced.
Reported-by: Sebastian Färber <faerber@xxxxxxxxx>
Signed-off-by: NeilBrown <neilb@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxx>
---
drivers/md/raid1.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1188,6 +1188,7 @@ static int raid1_remove_disk(mddev_t *md
* is not possible.
*/
if (!test_bit(Faulty, &rdev->flags) &&
+ !mddev->recovery_disabled &&
mddev->degraded < conf->raid_disks) {
err = -EBUSY;
goto abort;
--