Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt

From: John Garry
Date: Fri Jan 03 2020 - 06:50:56 EST


On 03/01/2020 11:29, Ming Lei wrote:
On Fri, Jan 03, 2020 at 10:41:48AM +0000, John Garry wrote:
On 03/01/2020 00:46, Ming Lei wrote:
[...]d the DMA API more than an architecture-specific problem.

Given that we have very little data so far, I'd hold off on any conclusions.
We can start to collect latency data for DMA unmapping vs nvme_irq()
on both x86 and arm64.

I will see if I can get such a box for collecting the latency data.
To reiterate what I mentioned before about IOMMU DMA unmap on x86, a key
difference is that by default it uses non-strict (lazy) mode unmap, i.e.
the IOTLB invalidations are deferred and batched. ARM64 uses the general
default, which is strict mode, i.e. every unmap results in an IOTLB flush.
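To illustrate the difference, a conceptual sketch (this is not the real
dma-iommu.c code; the type and helper names are made up for illustration):

/*
 * Conceptual sketch of strict vs. lazy unmap.  All names here are
 * hypothetical; the point is where the IOTLB invalidation happens
 * relative to the unmap.
 */
void iommu_unmap_sketch(struct dom *d, unsigned long iova, size_t size,
			bool strict)
{
	clear_mapping(d, iova, size);		/* remove the page-table entry */

	if (strict) {
		/* strict (arm64 default): synchronous IOTLB invalidation
		 * through the SMMU command queue on every single unmap,
		 * then the IOVA is free for reuse immediately. */
		invalidate_iotlb_sync(d, iova, size);
		free_iova_now(d, iova, size);
	} else {
		/* lazy (iommu.strict=0, the x86 default behaviour): park
		 * the IOVA on a flush queue; invalidations are issued in
		 * batches later, so the per-unmap cost is much lower. */
		queue_iova_deferred(d, iova, size);
	}
}

So in strict mode every completion pays for a synchronous invalidation on the
SMMU command queue, which is exactly the cost that lands in the interrupt
path here.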

In my setup, if I switch to lazy unmap (set iommu.strict=0 on the cmdline),
then there is no lockup.

Are there any special IOMMU setups being used for x86, like enabling strict mode?
I don't know...
BTW, I have run the test on a 224-core ARM64 machine with a 32-hw_queue NVMe
drive; the soft lockup issue can be triggered within one minute.

nvme_irq() often takes ~5us to complete on this machine, so there is a real
risk of CPU lockup once IOPS goes beyond 200K: at 200K interrupts/sec, 5us per
interrupt already adds up to a full second of CPU time per second on the
single CPU handling that interrupt.

Do you have a typical nvme_irq() completion time for a mid-range x86 server?

~1us.

Eh, so ~5x faster on the x86 machine?! Seems like there is some real issue here.


It is done via a bcc script, and eBPF itself may introduce some overhead.


Can you share the script/instructions? I would like to test on my machine. I assume you tested on a ThunderX2.
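In the meantime, just to be clear about what I would want to measure: a crude
in-driver alternative to the bcc approach would be something like the sketch
below (not a patch I'm proposing - it assumes it sits in
drivers/nvme/host/pci.c next to nvme_irq(), that request_irq() is pointed at
the wrapper, and the 5us threshold is arbitrary):

/*
 * Sketch only: time each nvme_irq() invocation with ktime and warn when
 * it runs longer than 5us.  Assumes this lives in
 * drivers/nvme/host/pci.c so that nvme_irq() is in scope, and that the
 * queue's request_irq() is pointed at nvme_irq_timed() instead.
 */
static irqreturn_t nvme_irq_timed(int irq, void *data)
{
	ktime_t t0 = ktime_get();
	irqreturn_t ret = nvme_irq(irq, data);		/* the real handler */
	s64 delta_ns = ktime_to_ns(ktime_sub(ktime_get(), t0));

	if (delta_ns > 5 * NSEC_PER_USEC)
		pr_warn_ratelimited("nvme_irq took %lld ns\n", delta_ns);

	return ret;
}

A funclatency-style bcc script attached to nvme_irq should give much the same
distribution without touching the driver, modulo the eBPF overhead you mention.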



The soft lockup can be triggered too if 'iommu.strict=0' is passed in; it just
takes a bit longer and needs more IO jobs to be started.

In the above test, I submit IO to a single NVMe drive from 4 CPU cores via 8 or
12 jobs (iommu.strict=0), and meanwhile have the nvme interrupt handled on just
one dedicated CPU core.

Well, a problem with so many CPUs is that it does not scale (well) with MQ
devices, like NVMe.

As the CPU count goes up, the device queue count doesn't, so we get more contention.

The problem is worse on ARM64 systems, which have more CPU cores, and each
single CPU core is often slower than an x86 core. Meanwhile, each hardware
interrupt has to be handled on a single target CPU.

Agreed


Also, the storage device (such as NVMe) itself should be the same for both
from a performance viewpoint.



Is there lock contention between the iommu dma map and unmap callbacks?

There would be the IOVA management, but that code is common with x86. Each
CPU keeps an IOVA cache, and there is a central pool of cached IOVAs, so
that reduces any contention, unless the caches are exhausted.
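Roughly, the allocation side of that caching looks like the sketch below
(function and type names are illustrative only, not the actual iova.c API):

/*
 * Hypothetical sketch of the IOVA caching scheme described above:
 * per-CPU cache first, then a shared depot, and only then the global
 * rbtree under its lock.
 */
unsigned long alloc_iova_cached_sketch(struct iova_dom *iovad,
				       unsigned long size)
{
	unsigned long pfn;

	/* 1. Per-CPU cache: lock-free fast path, no cross-CPU contention. */
	pfn = percpu_cache_get(iovad, size);
	if (pfn)
		return pfn;

	/* 2. Central depot of cached ranges: short global spinlock. */
	pfn = depot_cache_get(iovad, size);
	if (pfn)
		return pfn;

	/* 3. Slow path: carve a new range out of the rbtree under the
	 *    domain lock - this is where contention appears once the
	 *    caches are exhausted. */
	return rbtree_alloc(iovad, size);
}

So per-CPU traffic normally stays off the shared locks; it is only under
sustained pressure, when the caches run dry, that everything funnels into the
locked slow path.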

I think most of the contention/bottleneck is at the SMMU HW interface, which
has a single queue interface.

Not sure if it is related to the single queue interface, given that my test
just uses a single hw queue, pushing several CPU cores to submit IO and
handling that single queue's interrupt on one dedicated CPU core.

OK, but in my testing I was not limiting IO submission to a group of CPUs mapped to a single queue, and in that case I saw heavy SMMU driver loading [0].

thanks,
John

[0] https://lore.kernel.org/linux-iommu/20190821151749.23743-1-will@xxxxxxxxxx/T/#m4f20e9237797944e63f566ae9e02507794f25fb1