Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
From: John Garry
Date: Tue Aug 10 2021 - 05:37:27 EST
On 28/07/2021 16:17, Ming Lei wrote:
>> Have you tried turning off the IOMMU to ensure that this is really just
>> an IOMMU problem?
>>
>> You can try setting CONFIG_ARM_SMMU_V3=n in the defconfig or passing
>> the cmdline param iommu.passthrough=1 to bypass the SMMU (equivalent to
>> disabling it for kernel drivers).
> Bypassing SMMU via iommu.passthrough=1 basically doesn't make a
> difference to this issue.
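As an aside: on recent kernels you can sanity-check that passthrough
really took effect by reading the group's domain type back from sysfs.
A sketch, borrowing the iommu group number from the dmesg quoted
further down:

  # "identity" means the group is in passthrough mode; "DMA" means the
  # SMMU is still translating for it
  cat /sys/kernel/iommu_groups/5/type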
A ~90% throughput drop still seems to me too high to be a software
issue, more so since I don't see anything similar on my system. And,
judging from the fio log, that throughput drop is not accompanied by a
drop in total CPU usage.
Do you know if anyone has run memory benchmark tests on this board to
measure the NUMA effect? I think lmbench or STREAM could be used for
this; a sketch follows the link below.
https://lore.kernel.org/lkml/YOhbc5C47IzC893B@T590/
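Something like the following is what I have in mind (a sketch only;
the node numbers are assumptions and need adjusting to the board's
actual topology):

  # Local access: run STREAM on node 0 with memory allocated on node 0
  numactl --cpunodebind=0 --membind=0 ./stream
  # Remote access: same CPUs, memory forced onto a remote node
  numactl --cpunodebind=0 --membind=3 ./stream

The drop in Triad bandwidth between the two runs gives a rough measure
of the cross-node penalty.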
Hi Ming,
Out of curiosity, did you investigate this topic any further?
And you also asked about my results earlier:
On 22/07/2021 16:54, Ming Lei wrote:
>> [ 52.035895] nvme 0000:81:00.0: Adding to iommu group 5
>> [ 52.047732] nvme nvme0: pci function 0000:81:00.0
>> [ 52.067216] nvme nvme0: 22/0/2 default/read/poll queues
>> [ 52.087318] nvme0n1: p1
>>
>> So I get these results:
>> cpu0 335K
>> cpu32 346K
>> cpu64 300K
>> cpu96 300K
>>
>> So still no massive changes.
> In your last email, the results with irq-mode io_uring were the following:
>
> cpu0 497K
> cpu4 307K
> cpu32 566K
> cpu64 488K
> cpu96 508K
>
> So it looks like you get a much worse result with real io_polling?
>
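For reference, per-CPU polled results like the above can be gathered
with something along these lines (a sketch, not my exact command line;
the device path, depth and runtime are assumptions):

  # --hipri selects io_uring polled IO; drop it for the irq-mode figures
  fio --name=poll-test --filename=/dev/nvme0n1 --direct=1 --rw=randread \
      --bs=4k --ioengine=io_uring --hipri --iodepth=32 --numjobs=1 \
      --cpus_allowed=0 --runtime=30 --time_based --group_reporting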
Would the expectation be that I get at least the same performance with
io_polling here? Can you suggest anything else to try to investigate
this lower performance?
Thanks,
John