Re: [PATCH] null_blk: don't enable irqs when in irq

From: Jens Axboe
Date: Mon Dec 28 2015 - 13:53:33 EST


On 12/25/2015 09:03 PM, Jens Axboe wrote:
On 12/25/2015 07:26 AM, Rabin Vincent wrote:
When using irq_mode=NULL_IRQ_TIMER, blk_start_queue() is called from the
hrtimer interrupt. null_request_fn() then calls spin_unlock_irq(), which
triggers a lockdep splat because interrupts get enabled while we are in
an interrupt handler.

Inside null_request_fn() we can't know the state of the interrupt flags
that were saved before blk_start_queue() was called, but we can use
in_irq() to re-enable interrupts only when we're not in a hard interrupt,
which handles this case.
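
For reference, the problematic pattern is null_request_fn() dropping and
re-taking the queue lock with the _irq variants around command handling,
roughly like this (a sketch of the loop, not the exact tree contents):

static void null_request_fn(struct request_queue *q)
{
	struct request *rq;

	/* entered with q->queue_lock held and interrupts disabled */
	while ((rq = blk_fetch_request(q)) != NULL) {
		struct nullb_cmd *cmd = rq->special;

		/*
		 * unconditionally re-enables interrupts, even when the
		 * request_fn was invoked from hardirq context via
		 * blk_start_queue()
		 */
		spin_unlock_irq(q->queue_lock);
		null_handle_cmd(cmd);
		spin_lock_irq(q->queue_lock);
	}
}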

Using in_irq() to check for this is... nasty. You have two choices here.
Either you never enable interrupts at all. That's safe from the driver's
perspective, since we don't block while handling the command; it just
means unconditionally using spin_unlock() in the request_fn. Or you punt
queue running to process context, by manually clearing the queue stopped
flag and using blk_run_queue_async() instead.
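
The first option would boil down to something like this in
null_request_fn() (just a sketch):

static void null_request_fn(struct request_queue *q)
{
	struct request *rq;

	while ((rq = blk_fetch_request(q)) != NULL) {
		struct nullb_cmd *cmd = rq->special;

		/* drop the lock, but leave the hardirq state alone */
		spin_unlock(q->queue_lock);
		null_handle_cmd(cmd);
		spin_lock(q->queue_lock);
	}
}

The patch below goes the second route instead, adding a
blk_start_queue_async() helper and using it from end_cmd().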

Something like this should work; can you test it?


--
Jens Axboe

diff --git a/block/blk-core.c b/block/blk-core.c
index c487b94c59e3..33e2f62d5062 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -207,6 +207,22 @@ void blk_delay_queue(struct request_queue *q, unsigned long msecs)
EXPORT_SYMBOL(blk_delay_queue);

/**
+ * blk_start_queue_async - asynchronously restart a previously stopped queue
+ * @q: The &struct request_queue in question
+ *
+ * Description:
+ * blk_start_queue_async() will clear the stop flag on the queue, and
+ * ensure that the request_fn for the queue is run from an async
+ * context.
+ **/
+void blk_start_queue_async(struct request_queue *q)
+{
+	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
+	blk_run_queue_async(q);
+}
+EXPORT_SYMBOL(blk_start_queue_async);
+
+/**
* blk_start_queue - restart a previously stopped queue
* @q: The &struct request_queue in question
*
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index a428e4ef71fd..69064e1f70b0 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -232,20 +232,14 @@ static void end_cmd(struct nullb_cmd *cmd)
 		break;
 	case NULL_Q_BIO:
 		bio_endio(cmd->bio);
-		goto free_cmd;
+		break;
 	}
 
-	/* Restart queue if needed, as we are freeing a tag */
-	if (q && !q->mq_ops && blk_queue_stopped(q)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(q->queue_lock, flags);
-		if (blk_queue_stopped(q))
-			blk_start_queue(q);
-		spin_unlock_irqrestore(q->queue_lock, flags);
-	}
-free_cmd:
 	free_cmd(cmd);
+
+	/* Restart queue if needed, as we are freeing a tag */
+	if (queue_mode == NULL_Q_RQ && blk_queue_stopped(q))
+		blk_start_queue_async(q);
 }
 
 static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0169ba2e2e64..c70e3588a48c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -797,6 +797,7 @@ extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
extern int blk_queue_enter(struct request_queue *q, gfp_t gfp);
extern void blk_queue_exit(struct request_queue *q);
extern void blk_start_queue(struct request_queue *q);
+extern void blk_start_queue_async(struct request_queue *q);
extern void blk_stop_queue(struct request_queue *q);
extern void blk_sync_queue(struct request_queue *q);
extern void __blk_stop_queue(struct request_queue *q);