Re: [PATCH] scsi: core: Run queue first after running device.

From: John Garry
Date: Fri Aug 06 2021 - 05:01:26 EST


On 06/08/2021 09:58, John Garry wrote:

And the patch subject is ambiguous

On 05/08/2021 15:32, lijinlin3@xxxxxxxxxx wrote:
From: Li Jinlin <lijinlin3@xxxxxxxxxx>

We found a hang issue, the test steps are as follows:
   1. echo "blocked" >/sys/block/sda/device/state
   2. dd if=/dev/sda of=/mnt/t.log bs=1M count=10
   3. echo none > /sys/block/sda/queue/scheduler
   4. echo "running" >/sys/block/sda/device/state

Step 3 and Step 4 should both complete once Step 4 has run, but instead they hang.

   CPU#0               CPU#1                CPU#2
   ---------------     ----------------     ----------------
                                            Step1: blocking device

                                            Step2: dd xxxx
                                                   ^^^^^^ get request
q_usage_counter++

                       Step3: switching scheduler
                       elv_iosched_store
                         elevator_switch
                           blk_mq_freeze_queue
                             blk_freeze_queue
                               > blk_freeze_queue_start
                                 ^^^^^^ mq_freeze_depth++

                               > blk_mq_run_hw_queues
                                 ^^^^^^ can't run queue when dev blocked

                               > blk_mq_freeze_queue_wait
                                 ^^^^^^ Hang here!!!
                                        wait q_usage_counter==0

   Step4: running device
   store_state_field
     scsi_rescan_device
       scsi_attach_vpd
         scsi_vpd_inquiry
           __scsi_execute
             blk_get_request
               blk_mq_alloc_request
                 blk_queue_enter
                 ^^^^^^ Hang here!!!
                        wait mq_freeze_depth==0

     blk_mq_run_hw_queues
     ^^^^^^ dispatch IO, q_usage_counter will reduce to zero

                             blk_mq_unfreeze_queue
                             ^^^^^ mq_freeze_depth--

Step 3 and Step 4 wait for each other, causing the hang.

This requires run queue frist to fix this issue when the device state

frist ?

changes to SDEV_RUNNING.

Fixes: f0f82e2476f6 ("scsi: core: Fix capacity set to zero after offlinining device")
Signed-off-by: Li Jinlin <lijinlin3@xxxxxxxxxx>
Signed-off-by: Qiu Laibin <qiulaibin@xxxxxxxxxx>
Signed-off-by: Wu Bo <wubo40@xxxxxxxxxx>

what kind of SoB is this?

---
  drivers/scsi/scsi_sysfs.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index c3a710bceba0..aa701582c950 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -809,12 +809,12 @@ store_state_field(struct device *dev, struct device_attribute *attr,
      ret = scsi_device_set_state(sdev, state);
      /*
       * If the device state changes to SDEV_RUNNING, we need to
-     * rescan the device to revalidate it, and run the queue to
-     * avoid I/O hang.
+     * run the queue to avoid I/O hang, and rescan the device
+     * to revalidate it.

A bit more description of the IO hang would be useful

       */
      if (ret == 0 && state == SDEV_RUNNING) {
-        scsi_rescan_device(dev);
          blk_mq_run_hw_queues(sdev->request_queue, true);
+        scsi_rescan_device(dev);

This would not have happened if scsi_rescan_device() were run outside the mutex lock region, as I suggested originally.

Indeed, I doubt blk_mq_run_hw_queues() needs to be run with the sdev state_mutex held either.

      }
      mutex_unlock(&sdev->state_mutex);
--
