Re: [PATCH v6 1/1] block/blk-mq: use atomic_t for quiesce_depth to avoid lock contention on RT

From: Sebastian Andrzej Siewior

Date: Wed May 06 2026 - 03:48:09 EST


On 2026-05-06 09:14:33 [+0200], Bart Van Assche wrote:
> On 5/6/26 8:56 AM, Ionut Nechita (Wind River) wrote:
> > void blk_mq_quiesce_queue_nowait(struct request_queue *q)
> > {
> > - unsigned long flags;
> > -
> > - spin_lock_irqsave(&q->queue_lock, flags);
> > - if (!q->quiesce_depth++)
> > - blk_queue_flag_set(QUEUE_FLAG_QUIESCED, q);
> > - spin_unlock_irqrestore(&q->queue_lock, flags);
> > + atomic_inc(&q->quiesce_depth);
> > + /*
> > + * Pairs with smp_rmb() in blk_mq_run_hw_queue(): make the
> > + * incremented quiesce_depth observable to readers re-checking
> > + * the quiesce state, so they don't dispatch on a quiesced queue.
> > + */
> > + smp_mb__after_atomic();
> > }
>
> No, this is not sufficient to guarantee that blk_mq_run_hw_queue() sees
> the latest value of q->quiesce_depth. If you want to achieve that I
> think the only option is to protect the atomic_inc() above with
> hctx->queue->queue_lock.
>
> > @@ -2362,17 +2365,15 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
> > need_run = blk_mq_hw_queue_need_run(hctx);
> > if (!need_run) {
> > - unsigned long flags;
> > -
> > /*
> > - * Synchronize with blk_mq_unquiesce_queue(), because we check
> > - * if hw queue is quiesced locklessly above, we need the use
> > - * ->queue_lock to make sure we see the up-to-date status to
> > - * not miss rerunning the hw queue.
> > + * Re-check the quiesce state after a read barrier. Pairs with
> > + * smp_mb__after_atomic() in blk_mq_quiesce_queue_nowait() and
> > + * blk_mq_unquiesce_queue() so we don't miss rerunning the hw
> > + * queue when a concurrent unquiesce has just dropped the
> > + * quiesce_depth to zero.
> > */
> > - spin_lock_irqsave(&hctx->queue->queue_lock, flags);
> > + smp_rmb();
> > need_run = blk_mq_hw_queue_need_run(hctx);
> > - spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
>
> If the atomic_inc() in blk_mq_quiesce_queue_nowait() is protected by
> hctx->queue->queue_lock then the above code doesn't have to be modified.

But wouldn't the atomic_inc plus the barrier avoid the need for the
lock? Isn't this a normal pattern? If we keep the lock, we could use
non-atomic ops here instead. The point of the patch is to avoid taking
the lock at all.

> Thanks,
>
> Bart.

Sebastian