When isolcpus=managed_irq is enabled, all hardware queues should run on
the housekeeping CPUs only. Thus, ignore the affinity mask provided by
the driver. Also, we can't use blk_mq_map_queues because it will map all
CPUs to the first hctx unless the CPU is the same one the hctx has its
affinity set to. For example, with 8 CPUs and an
isolcpus=managed_irq,2-3,6-7 config:
queue mapping for /dev/nvme0n1
hctx0: default 2 3 4 6 7
hctx1: default 5
hctx2: default 0
hctx3: default 1
PCI name is 00:05.0: nvme0n1
irq 57 affinity 0-1 effective 1 is_managed:0 nvme0q0
irq 58 affinity 4 effective 4 is_managed:1 nvme0q1
irq 59 affinity 5 effective 5 is_managed:1 nvme0q2
irq 60 affinity 0 effective 0 is_managed:1 nvme0q3
irq 61 affinity 1 effective 1 is_managed:1 nvme0q4
whereas with blk_mq_hk_map_queues we get:
queue mapping for /dev/nvme0n1
hctx0: default 2 4
hctx1: default 3 5
hctx2: default 0 6
hctx3: default 1 7
PCI name is 00:05.0: nvme0n1
irq 56 affinity 0-1 effective 1 is_managed:0 nvme0q0
irq 61 affinity 4 effective 4 is_managed:1 nvme0q1
irq 62 affinity 5 effective 5 is_managed:1 nvme0q2
irq 63 affinity 0 effective 0 is_managed:1 nvme0q3
irq 64 affinity 1 effective 1 is_managed:1 nvme0q4
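
For illustration only (this is not the code added by this patch), here is
a minimal userspace sketch of the two-pass idea behind the mapping:
housekeeping CPUs are assigned to hctxs first, then the isolated CPUs are
spread round-robin so no hctx serves isolated CPUs only. hk_cpu() and the
round-robin order are assumptions; the real blk_mq_hk_map_queues also
honors the managed-IRQ affinity, so the exact grouping differs from the
output above.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS   8
#define NR_HCTX   4

/* isolcpus=managed_irq,2-3,6-7: CPUs 0,1,4,5 are housekeeping */
static bool hk_cpu(int cpu)
{
	return cpu == 0 || cpu == 1 || cpu == 4 || cpu == 5;
}

int main(void)
{
	int map[NR_CPUS];
	int hctx = 0;

	/* Pass 1: distribute housekeeping CPUs round-robin over the hctxs */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!hk_cpu(cpu))
			continue;
		map[cpu] = hctx++ % NR_HCTX;
	}

	/* Pass 2: spread the isolated CPUs over the same hctxs */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (hk_cpu(cpu))
			continue;
		map[cpu] = hctx++ % NR_HCTX;
	}

	/* Print the resulting cpu -> hctx grouping */
	for (int i = 0; i < NR_HCTX; i++) {
		printf("hctx%d:", i);
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if (map[cpu] == i)
				printf(" %d", cpu);
		printf("\n");
	}
	return 0;
}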
Reviewed-by: Hannes Reinecke <hare@xxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Daniel Wagner <wagi@xxxxxxxxxx>
---
block/blk-mq-cpumap.c | 65 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 65 insertions(+)