Re: [PATCH -next 1/6] Revert "md: unlock mddev before reap sync_thread in action_store"

From: Yu Kuai
Date: Fri May 05 2023 - 05:05:15 EST


Hi, Song and Guoqing

On 2023/04/06 16:53, Yu Kuai wrote:
Hi,

On 2023/03/29 7:58, Song Liu wrote:
On Wed, Mar 22, 2023 at 11:32 PM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:

Hi,

On 2023/03/23 11:50, Guoqing Jiang wrote:

Combining your debug patch with the above steps, it seems you:

1. add a delay to action_store, so it can't get the lock in time.
2. echo "want_replacement" triggers md_check_recovery, which can grab the lock
   to start the sync thread.
3. action_store finally holds the lock and clears RECOVERY_RUNNING in reap sync
   thread.
4. Then the newly added BUG_ON is invoked, since RECOVERY_RUNNING is cleared
   in step 3.

Yes, this is exactly what I did.

sync_thread can be interrupted once MD_RECOVERY_INTR is set, which means
RUNNING can be cleared, so I am not sure the added BUG_ON is reasonable. And
changing the BUG_ON

I think the BUG_ON() is reasonable because only md_reap_sync_thread() can
clear MD_RECOVERY_RUNNING; md_do_sync() will exit quickly if MD_RECOVERY_INTR
is set, but md_do_sync() should never see MD_RECOVERY_RUNNING cleared,
otherwise there is no guarantee that only one sync_thread can be in progress.

like this makes more sense to me.

+BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
+       !test_bit(MD_RECOVERY_INTR, &mddev->recovery));

I think this can be reproduced likewise: md_check_recovery clears
MD_RECOVERY_INTR, and the new sync_thread triggered by echo
"want_replacement" won't set this bit.


I think there might be a racy window like you described, but it should be
really small. I prefer to just add a few lines like this instead of
reverting and introducing a new lock to resolve the same issue (if it is
the same issue).

The new lock that I add in this patchset just tries to synchronize idle
and frozen from action_store() (patch 3); I can drop it if you think it
is not necessary.

The main change is patch 4; it doesn't add many new lines, and I really
don't like to add new flags unless we have to, since the current code is
already hard to understand...

By the way, I'm concerned that dropping the mutex to unregister sync_thread
might not be safe, since the mutex protects lots of stuff and there might
be other implicit dependencies.


TBH, I am reluctant to see the changes in this series; they can only be
considered acceptable under two conditions:

1. the previous raid456 bug can be fixed in this way too; hopefully Marc
   or others can verify it.

After reading the thread:

https://lore.kernel.org/linux-raid/5ed54ffc-ce82-bf66-4eff-390cb23bc1ac@xxxxxxxxxxxxx/T/#t

The deadlock in raid456 has the same conditions as the raid10 one:
1) echo idle holds the mutex to stop the sync thread;
2) the sync thread waits for io to complete;
3) the io can't be handled by the daemon thread because the sb flag is set;
4) the sb flag can't be cleared because the daemon thread can't hold the mutex.
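
In code-path terms, a rough outline of the circular wait (the function names
match the stack traces further down; the rest is my summary, not exact code):

	/* "echo idle": action_store() with this patch reverted,
	 * holds 'reconfig_mutex' and waits for the sync thread to exit */
	mddev_lock(mddev);
	md_unregister_thread(&mddev->sync_thread);	/* kthread_stop() blocks */

	/* sync thread: md_do_sync() -> raid5_sync_request() ->
	 * raid5_get_active_stripe(), waiting for stripes held by in-flight io.
	 * The daemon thread can't finish that io while the sb flag is set,
	 * and the flag can't be cleared without 'reconfig_mutex', which is
	 * held above -> circular wait. */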

I tried to reproduce the deadlock with the reproducer provided in the
thread; however, the deadlock was not reproduced after running for more
than a day.

I changed the reproducer to the following:

[root@fedora raid5]# cat test_deadlock.sh
#! /bin/bash

# Repeatedly start a check and switch back to idle, so the sync thread
# keeps getting reaped while it is running.
(
	while true; do
		echo check > /sys/block/md0/md/sync_action
		sleep 0.5
		echo idle > /sys/block/md0/md/sync_action
	done
) &

# Make background writeback kick in as soon as there is dirty data.
echo 0 > /proc/sys/vm/dirty_background_ratio

# Keep the array under constant 4k write load.
(
	while true; do
		fio -filename=/dev/md0 -bs=4k -rw=write -numjobs=1 -name=xxx
	done
) &

And I was finally able to reproduce the deadlock with this patch reverted
(after running for about an hour):

[root@fedora raid5]# ps -elf | grep " D " | grep -v grep
1 D root 156 2 16 80 0 - 0 md_wri 06:51 ? 00:19:15 [kworker/u8:11+flush-9:0]
5 D root 2239 1 2 80 0 - 992 kthrea 06:57 pts/0 00:02:15 sh test_deadlock.sh
1 D root 42791 2 0 80 0 - 0 raid5_ 07:45 ? 00:00:00 [md0_resync]
5 D root 42803 42797 0 80 0 - 92175 balanc 07:45 ? 00:00:06 fio -filename=/dev/md0 -bs=4k -rw=write -numjobs=1 -name=xxx

[root@fedora raid5]# cat /proc/2239/stack
[<0>] kthread_stop+0x96/0x2b0
[<0>] md_unregister_thread+0x5e/0xd0
[<0>] md_reap_sync_thread+0x27/0x370
[<0>] action_store+0x1fa/0x490
[<0>] md_attr_store+0xa7/0x120
[<0>] sysfs_kf_write+0x3a/0x60
[<0>] kernfs_fop_write_iter+0x144/0x2b0
[<0>] new_sync_write+0x140/0x210
[<0>] vfs_write+0x21a/0x350
[<0>] ksys_write+0x77/0x150
[<0>] __x64_sys_write+0x1d/0x30
[<0>] do_syscall_64+0x45/0x70
[<0>] entry_SYSCALL_64_after_hwframe+0x61/0xc6
[root@fedora raid5]# cat /proc/42791/stack
[<0>] raid5_get_active_stripe+0x606/0x960
[<0>] raid5_sync_request+0x508/0x570
[<0>] md_do_sync.cold+0xaa6/0xee7
[<0>] md_thread+0x266/0x280
[<0>] kthread+0x151/0x1b0
[<0>] ret_from_fork+0x1f/0x30

And with this patchset applied, I have now run the above reproducer for more
than a day, so I think the deadlock in raid456 can be fixed.

Can this patchset be considered for the next merge window? If so, I'll
rebase it.

Thanks,
Kuai
2. it passes all the tests in mdadm.

AFAICT, this set looks like a better solution for this problem. But I agree
that we need to make sure it fixes the original bug. The mdadm tests are not
in very good shape at the moment; I will spend more time looking into them.

While working on another thread to protect md_thread with RCU, I found
that this patch has another defect that can, in theory, cause a null-ptr-
dereference: md_unregister_thread(&mddev->sync_thread) can run concurrently
with another context accessing sync_thread, for example:

t1: md_set_readonly                       t2: action_store
// 'reconfig_mutex' is held by caller      md_unregister_thread
                                           // 'reconfig_mutex' is not held
if (mddev->sync_thread)
                                            thread = *threadp
                                            *threadp = NULL
 wake_up_process(mddev->sync_thread->tsk)
 // null-ptr-dereference
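
To make the window concrete, a simplified sketch of the two paths (my
paraphrase, not a verbatim copy of the code):

	/* t2: action_store() path after the mutex has been dropped */
	void md_unregister_thread(struct md_thread **threadp)
	{
		struct md_thread *thread = *threadp;	/* threadp == &mddev->sync_thread */

		*threadp = NULL;			/* mddev->sync_thread is now NULL */
		/* (locking details omitted) */
		kthread_stop(thread->tsk);
	}

	/* t1: md_set_readonly(), with 'reconfig_mutex' held by the caller */
	if (mddev->sync_thread)					/* may still see the old pointer */
		wake_up_process(mddev->sync_thread->tsk);	/* second load can return NULL   */

Since t1 re-reads mddev->sync_thread after the check, it can dereference NULL
if t2 clears the pointer in between.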

So, I think this revert makes more sense. 😉

Thanks,
Kuai
