The ideal way to disable tag preemption would be to track how many tags
are available, and to wait directly in blk_mq_get_tag() if few tags are
free. However, this is not realistic because it would slow down the
fast path.
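
For clarity, the rejected approach would look roughly like the sketch
below; 'free_tags' and 'FREE_TAGS_MIN' are hypothetical, and the extra
atomics needed to keep such a counter accurate are exactly the
fast-path cost mentioned above:

  /*
   * Sketch only, not actual kernel code: keep an exact free-tag count
   * and sleep early when it runs low, instead of letting new requests
   * take tags ahead of threads that are already waiting.
   */
  static bool too_few_tags(struct blk_mq_tags *tags)
  {
  	/*
  	 * Reading the counter is cheap, but keeping it accurate needs
  	 * an atomic_dec()/atomic_inc() pair on every tag allocation
  	 * and free, i.e. a shared cacheline bounce in the fast path.
  	 */
  	return atomic_read(&tags->free_tags) < FREE_TAGS_MIN;
  }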
Since 'ws_active' is only updated in the slow path, this patch disables
tag preemption if 'ws_active' is greater than 8, which means that many
threads are already waiting for tags.
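
As a rough sketch (the helper name is illustrative; 'ws_active' is the
existing field in struct sbitmap_queue that tracks active waiters, and
8 matches SBQ_WAIT_QUEUES), the check amounts to:

  /*
   * Illustrative helper: allow tag preemption only while few threads
   * are waiting.  'ws_active' is only modified in the slow path, so
   * reading it adds no cost to the fast path.
   */
  static inline bool tag_preemption_allowed(struct sbitmap_queue *bt)
  {
  	return atomic_read(&bt->ws_active) <= 8;
  }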
Once tag preemption is disabled, there is a situation that can cause
performance degradation (or an IO hang in extreme scenarios): the
waitqueue
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 2615bd58bad3..b49b20e11350 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -156,6 +156,7 @@ struct blk_mq_alloc_data {
/* allocate multiple requests/tags in one go */
unsigned int nr_tags;
+ bool preemption;
struct request **cached_rq;
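
One way the new field might be filled in at allocation time
(illustrative sketch, not the complete patch; 'bt' stands for the
tags' sbitmap_queue):

  struct blk_mq_alloc_data data = {
  	.q		= q,
  	.nr_tags	= 1,
  	/* illustrative: preempt only while few threads wait for tags */
  	.preemption	= atomic_read(&bt->ws_active) <= 8,
  };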