On Tue, Jan 11, 2022 at 10:02:16PM +0800, Laibin Qiu wrote:

Thanks, I will modify it in V5.

In the case of shared tags, there might be more than one hctx
allocating from the same tags, and each hctx is limited to allocating
at most:
hctx_max_depth = max((bt->sb.depth + users - 1) / users, 4U);
Tag idle detection is lazy and may be delayed for up to 30 seconds, so
there could be just one truly active hctx (queue) while all the others
are actually idle but still accounted as active because of the lazy
idle detection. Then, if wake_batch is greater than hctx_max_depth,
driver tag allocation may wait forever on the one truly active hctx.
Fix this by recalculating wake_batch whenever active_queues is
incremented or decremented.
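
To make the failure mode concrete, here is a small stand-alone
user-space sketch of the arithmetic above (not the kernel code); the
depth of 256, the 128 hctxs accounted as active, and the simplified
"before" wake_batch of SBQ_WAKE_BATCH are illustrative assumptions:

/*
 * Stand-alone user-space sketch, not kernel code: plug illustrative
 * numbers into the formulas from the commit message above.
 */
#include <stdio.h>

#define SBQ_WAKE_BATCH 8U       /* sbitmap's upper bound on wake_batch */

static unsigned int max_u(unsigned int a, unsigned int b)
{
        return a > b ? a : b;
}

static unsigned int clamp_u(unsigned int v, unsigned int lo, unsigned int hi)
{
        return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
        unsigned int depth = 256, users = 128;

        /* Per-hctx allocation limit, as quoted above. */
        unsigned int hctx_max_depth = max_u((depth + users - 1) / users, 4U);

        /* Simplified "before": wake_batch sized for the whole depth,
         * which for a large depth ends up at SBQ_WAKE_BATCH (assumption). */
        unsigned int wake_batch_old = SBQ_WAKE_BATCH;

        /* "After": wake_batch recalculated per accounted-active user. */
        unsigned int wake_batch_new =
                clamp_u((depth + users - 1) / users, 4U, SBQ_WAKE_BATCH);

        printf("hctx_max_depth=%u old wake_batch=%u new wake_batch=%u\n",
               hctx_max_depth, wake_batch_old, wake_batch_new);

        /*
         * Before: wake_batch (8) > hctx_max_depth (4), so the one truly
         * active hctx can never have a full wake batch of tags in flight
         * and waiters may sleep forever.  After: wake_batch (4) is at most
         * hctx_max_depth, so a wakeup can always be triggered.
         */
        return 0;
}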
...
{
+       unsigned int users;
Missed blank line here.
I think because this check is on the general I/O submission path, there are also performance considerations here: the plain test_bit() avoids the atomic test_and_set_bit() in the common case where the flag is already set (a small user-space analogue follows the hunk below).
        if (blk_mq_is_shared_tags(hctx->flags)) {
                struct request_queue *q = hctx->queue;

+               if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) ||
+                   test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) {
Whoever wrote this code did too much defensive programming, because the first
conditional doesn't make much sense here. Am I right?
+                       return true;
+               }
        } else {
+               if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) ||
+                   test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) {
Ditto.
+                       return true;
+               }
        }
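
As a side note on the pattern being debated here, a minimal user-space
analogue of the "plain test before the atomic test-and-set" fast path
looks like this; the names and the C11 stdatomic calls are illustrative,
not the kernel helpers:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TAG_ACTIVE 0x1u

static atomic_uint state;

/* Returns true if the flag was already set, so the caller can skip the
 * accounting work, mirroring the structure of the hunk quoted above. */
static bool already_active(void)
{
        /* Fast path: a plain (relaxed) load; if the bit is already set we
         * avoid the locked read-modify-write on a path run for every I/O. */
        if (atomic_load_explicit(&state, memory_order_relaxed) & TAG_ACTIVE)
                return true;

        /* Slow path: set the bit atomically; this also reports "already
         * set" if another CPU won the race since the check above. */
        return atomic_fetch_or(&state, TAG_ACTIVE) & TAG_ACTIVE;
}

int main(void)
{
        printf("first call:  already active? %d\n", already_active()); /* 0 */
        printf("second call: already active? %d\n", already_active()); /* 1 */
        return 0;
}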
...
+       unsigned int wake_batch = clamp_t(unsigned int,
+                       (sbq->sb.depth + users - 1) / users, 4U, SBQ_WAKE_BATCH);

        unsigned int wake_batch;
        wake_batch = clamp_val((sbq->sb.depth + users - 1) / users, 4, SBQ_WAKE_BATCH);
...
is easier to read, no?
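
For reference, clamp_val() is essentially clamp_t() with the type taken
from the value being clamped (paraphrased from include/linux/minmax.h),
which is why the explicit type argument and the 4U suffix drop out:

/* Paraphrased; the real definition lives in include/linux/minmax.h. */
#define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi)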