When allocating from the reserved tags pool, bt_get() is called with
a NULL hctx. If all tags are in use, the hw queue is kicked to push
out any pending IO, potentially freeing tags, and tag allocation is
retried. The problem is that blk_mq_run_hw_queue() doesn't check for
a NULL hctx and dereferences it, causing a NULL pointer oops. This
patch adds the missing check.

An alternative implementation might skip kicking the queue for reserved
tags and go straight to io_schedule(), but we chose the simpler fix.

Tested by hammering mtip32xx with concurrent smartctl/hdparm.

Signed-off-by: Sam Bradshaw <sbradshaw@xxxxxxxxxx>
Signed-off-by: Selvan Mani <smani@xxxxxxxxxx>
---
block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 59fa239..0471af6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -887,7 +887,7 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 {
-	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state) ||
+	if (unlikely(!hctx || test_bit(BLK_MQ_S_STOPPED, &hctx->state) ||
 		     !blk_mq_hw_queue_mapped(hctx)))
 		return;