Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
From: John Garry
Date: Fri Dec 20 2019 - 06:30:49 EST
> So you enqueue requests from CPU0 only? It seems a bit odd...
No, but maybe I wasn't clear enough. I'll give an overview:
For D06 SAS controller - which is a multi-queue PCI device - we use
managed interrupts. The HW has 16 submission/completion queues, so for
96 cores, we have an even spread of 6 CPUs assigned per queue; and this
per-queue CPU mask is the interrupt affinity mask. So CPU0-5 would
submit any IO on queue0, CPU6-11 on queue1, and so on. PCI NVMe is
essentially the same.
These are the environments in which we're trying to improve performance.
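For anyone following along, here is a minimal sketch (not the actual
hisi_sas code; MY_NR_QUEUES, my_setup_irqs, etc. are made-up names) of how
a multi-queue PCI driver requests managed interrupts so that the core
spreads the per-vector affinity masks as described above:

/*
 * Sketch only: request managed MSI-X vectors and let the core spread
 * their affinity over the online CPUs.  With 16 queues on a 96-core
 * system this gives each vector a 6-CPU affinity mask, as above.
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

#define MY_NR_QUEUES	16	/* hypothetical queue count */

static int my_setup_irqs(struct pci_dev *pdev)
{
	struct irq_affinity affd = { };	/* no pre/post (non-queue) vectors in this sketch */
	int nvec;

	nvec = pci_alloc_irq_vectors_affinity(pdev, MY_NR_QUEUES, MY_NR_QUEUES,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
	if (nvec < 0)
		return nvec;

	/* the per-vector spread mask is then available via pci_irq_get_affinity() */
	return 0;
}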
Then for the D05 SAS controller - which is a multi-queue platform device
(mbigen) - we don't use managed interrupts. We still submit IO from any
CPU, but we choose the queue to submit IO on a round-robin basis to
promote some isolation, i.e. reduce inter-queue lock contention, so the
queue chosen has nothing to do with the CPU.
And with your change we may, for example, submit on cpu4 but service the
interrupt on cpu30, whereas previously we would always service on cpu0.
The old way still isn't ideal, I'll admit.
For this environment we would just like to maintain the same performance,
and it's here that we see the performance drop.
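As a rough sketch of that round-robin queue choice (illustrative only, not
the actual driver code; my_hba and queue_rr are made-up names):

/*
 * Sketch only: pick the submission queue round-robin, independent of
 * the submitting CPU, to reduce inter-queue lock contention.
 */
#include <linux/atomic.h>
#include <linux/types.h>

struct my_hba {
	atomic_t	queue_rr;	/* hypothetical per-HBA round-robin counter */
	u32		nr_queues;
};

static u32 my_select_queue(struct my_hba *hba)
{
	return (u32)atomic_inc_return(&hba->queue_rr) % hba->nr_queues;
}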
Hi Marc,
We've got some more results and it looks promising.
So with your patch we get a performance boost of 3180.1K -> 3294.9K IOPS
in the D06 SAS env. Then when we change the driver to use a threaded
interrupt handler (mainline currently uses a tasklet), we get a boost
again, up to 3415K IOPS.
Now this is essentially the same figure we had when using the threaded
handler + the genirq change to spread the handler CPU affinity. We also
tested your patch + the genirq change and got a performance drop, to
3347K IOPS.
So tentatively I'd say your patch may be all we need.
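For reference, the tasklet -> threaded handler change mentioned above
amounts to something like the sketch below (hypothetical names, not the
actual hisi_sas conversion):

/*
 * Sketch only: move completion processing from a tasklet into a
 * threaded handler.  The hard handler just acks the queue interrupt
 * and wakes the thread.
 */
#include <linux/interrupt.h>

struct my_cq;						/* hypothetical per-queue struct */
static void my_process_completions(struct my_cq *cq);	/* hypothetical; was the tasklet callback */

static irqreturn_t my_cq_hard_irq(int irq, void *dev_id)
{
	/* ack/mask the completion queue interrupt here if the HW needs it */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t my_cq_thread_fn(int irq, void *dev_id)
{
	struct my_cq *cq = dev_id;

	my_process_completions(cq);
	return IRQ_HANDLED;
}

static int my_cq_request_irq(struct my_cq *cq, int irq)
{
	/* was: request_irq() + tasklet_init()/tasklet_schedule() */
	return request_threaded_irq(irq, my_cq_hard_irq, my_cq_thread_fn,
				    0, "my-cq", cq);
}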
FYI, here is how the effective affinity is looking for both SAS
controllers with your patch:
74:02.0
irq 81, cpu list 24-29, effective list 24 cq
irq 82, cpu list 30-35, effective list 30 cq
irq 83, cpu list 36-41, effective list 36 cq
irq 84, cpu list 42-47, effective list 42 cq
irq 85, cpu list 48-53, effective list 48 cq
irq 86, cpu list 54-59, effective list 56 cq
irq 87, cpu list 60-65, effective list 60 cq
irq 88, cpu list 66-71, effective list 66 cq
irq 89, cpu list 72-77, effective list 72 cq
irq 90, cpu list 78-83, effective list 78 cq
irq 91, cpu list 84-89, effective list 84 cq
irq 92, cpu list 90-95, effective list 90 cq
irq 93, cpu list 0-5, effective list 0 cq
irq 94, cpu list 6-11, effective list 6 cq
irq 95, cpu list 12-17, effective list 12 cq
irq 96, cpu list 18-23, effective list 18 cq
74:04.0
irq 113, cpu list 24-29, effective list 25 cq
irq 114, cpu list 30-35, effective list 31 cq
irq 115, cpu list 36-41, effective list 37 cq
irq 116, cpu list 42-47, effective list 43 cq
irq 117, cpu list 48-53, effective list 49 cq
irq 118, cpu list 54-59, effective list 57 cq
irq 119, cpu list 60-65, effective list 61 cq
irq 120, cpu list 66-71, effective list 67 cq
irq 121, cpu list 72-77, effective list 73 cq
irq 122, cpu list 78-83, effective list 79 cq
irq 123, cpu list 84-89, effective list 85 cq
irq 124, cpu list 90-95, effective list 91 cq
irq 125, cpu list 0-5, effective list 1 cq
irq 126, cpu list 6-11, effective list 7 cq
irq 127, cpu list 12-17, effective list 17 cq
irq 128, cpu list 18-23, effective list 19 cq
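In case anyone wants to dump the same information on their system, the
equivalent data is visible in /proc/irq/<n>/smp_affinity_list and
/proc/irq/<n>/effective_affinity_list; below is a minimal in-kernel sketch
(my_print_irq_affinity is a made-up helper, not how the lists above were
actually generated) that prints it in the same "cpu list ... effective
list ..." form:

/*
 * Sketch only: print an irq's affinity mask and effective affinity in
 * the "cpu list X-Y, effective list Z" form used above.
 */
#include <linux/irq.h>
#include <linux/cpumask.h>
#include <linux/printk.h>

static void my_print_irq_affinity(unsigned int irq)
{
	struct irq_data *d = irq_get_irq_data(irq);

	if (!d)
		return;

	pr_info("irq %u, cpu list %*pbl, effective list %*pbl\n", irq,
		cpumask_pr_args(irq_data_get_affinity_mask(d)),
		cpumask_pr_args(irq_data_get_effective_affinity_mask(d)));
}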
As for your patch itself, I'm still concerned about possible regressions
unless we apply this effective interrupt affinity spread policy to
managed interrupts only.
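Something along the lines of the sketch below is what I mean - only pick a
spread target when the interrupt is managed, and keep the current
first-CPU behaviour otherwise (hand-waving pseudo-patch only;
my_pick_effective_cpu and my_spread_cpu_within are made-up names):

/*
 * Hand-waving sketch only, not a real patch: restrict the effective
 * affinity spreading to managed interrupts and keep the existing
 * "first online CPU in the mask" choice for everything else.
 */
#include <linux/irq.h>
#include <linux/cpumask.h>

static int my_spread_cpu_within(const struct cpumask *mask);	/* hypothetical spread policy */

static int my_pick_effective_cpu(struct irq_data *d, const struct cpumask *mask)
{
	if (!irqd_affinity_is_managed(d))
		return cpumask_first_and(mask, cpu_online_mask);

	return my_spread_cpu_within(mask);
}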
JFYI, regarding the NVMe CPU lockup issue, there are two pieces of work
ongoing here:
https://lore.kernel.org/linux-nvme/20191209175622.1964-1-kbusch@xxxxxxxxxx/T/#t
https://lore.kernel.org/linux-block/20191218071942.22336-1-ming.lei@xxxxxxxxxx/T/#t
Cheers,
John
PS: Thanks to Xiang Chen for all the work here in getting these results.
> Please give this new patch a shot on your system (my D05 doesn't have
> any managed devices):
We could consider supporting platform msi managed interrupts, but I