On Wed, Sep 6, 2023 at 11:30 PM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:
Hi,
On 2023/09/06 17:37, Li Nan wrote:
It is unreasonable for a spare device to affect the array's stacked limits.
For example, create a raid1 with two 512-byte devices: the
logical_block_size of the array will be 512. But after adding a 4k device
as a spare, the logical_block_size of the array changes as follows.
mdadm -C /dev/md0 -n 2 -l 1 /dev/sd[ab] //sd[ab] is 512
//logical_block_size of md0: 512
mdadm --add /dev/md0 /dev/sdc //sdc is 4k
//logical_block_size of md0: 512
mdadm -S /dev/md0
mdadm -A /dev/md0 /dev/sd[abc]
//logical_block_size of md0: 4k
This confuses users: nothing has been changed, so why did the
logical_block_size of the array change?
Now, only the devices in use are considered when updating the
logical_block_size of the array.
Signed-off-by: Li Nan <linan122@xxxxxxxxxx>
---
drivers/md/raid1.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 95504612b7e2..d75c5dd89e86 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3140,19 +3140,16 @@ static int raid1_run(struct mddev *mddev)
I'm not sure about this behaviour. 'logical_block_size' can still be
increased while adding a new underlying disk, so the key point is not
when to increase 'logical_block_size'. If there is a mounted fs, or a
partition in the array, I think the array will be corrupted.
How common is such fs/partition corruption? I think some filesystems and
partition tables can keep working properly across a 512=>4096 change?
Thanks,
Song
Perhaps once the array is started, logical_block_size should not be
changed anymore; this will require 'logical_block_size' to be metadata
inside the raid superblock. And the array should deny any new disk with
a bigger logical_block_size.
Thanks,
Kuai
 	if (mddev->queue)
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-	rdev_for_each(rdev, mddev) {
-		if (!mddev->gendisk)
-			continue;
-		disk_stack_limits(mddev->gendisk, rdev->bdev,
-				  rdev->data_offset << 9);
-	}
-
 	mddev->degraded = 0;
-	for (i = 0; i < conf->raid_disks; i++)
-		if (conf->mirrors[i].rdev == NULL ||
-		    !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
-		    test_bit(Faulty, &conf->mirrors[i].rdev->flags))
+	for (i = 0; i < conf->raid_disks; i++) {
+		rdev = conf->mirrors[i].rdev;
+		if (rdev && mddev->gendisk)
+			disk_stack_limits(mddev->gendisk, rdev->bdev,
+					  rdev->data_offset << 9);
+		if (!rdev || !test_bit(In_sync, &rdev->flags) ||
+		    test_bit(Faulty, &rdev->flags))
 			mddev->degraded++;
+	}
 	/*
 	 * RAID1 needs at least one disk in active
 	 */