Re: [PATCH v3 1/1] block/blk-mq: use atomic_t for quiesce_depth to avoid lock contention on RT

From: Ionut Nechita (Wind River)

Date: Thu Feb 12 2026 - 02:15:07 EST


Hi Hillf,

Thank you for the review.

On Thu, 12 Feb 2026 07:59:17 +0800, Hillf Danton wrote:
> Nope mb is not enough.
>
> int qd = atomic_read(&q->quiesce_depth);
> for (;;) {
> int v = qd + 1;
> if (atomic_try_cmpxchg(&q->quiesce_depth, &qd, v))
> break;
> }

atomic_inc() is already unconditionally atomic and is equivalent to the
cmpxchg loop above: both atomically increment the value by one. The
smp_mb__after_atomic() that follows provides the memory ordering needed
to make the store visible before the subsequent loads in
blk_mq_run_hw_queue(). Could you clarify the specific scenario where
you see this being insufficient?

> More important however, why not send this patch to the RT tree instead,
> given the spin lock is good and correct in mainline. By good I mean
> fixing RT stuff in the mainline is not encouraged.

PREEMPT_RT has been merged into mainline since v6.12. There is no
separate RT tree for this kind of fix anymore. The spinlock in question
converts to rt_mutex under PREEMPT_RT, which is now a mainline
configuration option, so the contention issue is a mainline problem.

Beyond the RT aspect, replacing the spinlock with an atomic_t in the hot
path is arguably a simplification that benefits all configurations: it
removes locking overhead and collapses the two-variable synchronization
(quiesce_depth plus QUEUE_FLAG_QUIESCED) into a single atomic counter.

Best regards,
Ionut