Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node

From: John Garry
Date: Wed Jul 28 2021 - 06:38:50 EST


On 28/07/2021 02:32, Ming Lei wrote:
> On Mon, Jul 26, 2021 at 3:51 PM John Garry <john.garry@xxxxxxxxxx> wrote:
>> On 23/07/2021 11:21, Ming Lei wrote:
>>> Thanks, I was also going to suggest the latter, since it's what
>>> arm_smmu_cmdq_issue_cmdlist() does with IRQs masked that should be most
>>> indicative of where the slowness most likely stems from.
> The improvement from 'iommu.strict=0' is very small:

>> Have you tried turning off the IOMMU to ensure that this is really just
>> an IOMMU problem?
>>
>> You can try setting CONFIG_ARM_SMMU_V3=n in the defconfig or passing
>> cmdline param iommu.passthrough=1 to bypass the SMMU (equivalent to
>> disabling it for kernel drivers).
> Bypassing the SMMU via iommu.passthrough=1 basically doesn't make a
> difference on this issue.
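As a side note (my own suggestion, not from the thread): it is worth confirming that the bypass actually took effect, since the per-group default domain type is visible in sysfs and should read "identity" when iommu.passthrough=1 is honoured:

```shell
# Hedged sketch: sanity-check that iommu.passthrough=1 was applied at boot.
# The sysfs paths are standard; group numbers vary per system.
grep -H . /sys/kernel/iommu_groups/*/type
# Confirm the parameter actually made it onto the kernel command line:
tr ' ' '\n' < /proc/cmdline | grep '^iommu\.'
```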

A ~90% throughput drop still seems to me to be too high to be a software issue, especially since I don't see anything similar on my system. And that throughput drop does not come with a corresponding drop in total CPU usage, going by the fio log.

Do you know if anyone has run memory benchmark tests on this board to find out NUMA effect? I think lmbench or stream could be used for this.
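For reference, one way to quantify the NUMA effect with lmbench's lat_mem_rd is to pin both CPU and memory with numactl and compare a local against a remote allocation. The node numbers below are only examples; pick the node nearest the storage controller versus a distant one, and adjust the working-set size (in MB) and stride to taste:

```shell
# Hedged sketch, assuming lmbench (lat_mem_rd) and numactl are installed.
# Local access: CPUs and memory both on node 0.
numactl --cpunodebind=0 --membind=0 lat_mem_rd 512 128
# Cross-node access: CPUs on node 0, memory forced onto node 3.
numactl --cpunodebind=0 --membind=3 lat_mem_rd 512 128
```

STREAM can be driven the same way (numactl --cpunodebind=N --membind=M ./stream) to compare bandwidth rather than latency.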

Testing network performance in an equivalent fashion to storage could also be an idea.
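A NUMA-pinned network test could be sketched along the same lines with iperf3; the server address is a placeholder and the node numbers are again only examples:

```shell
# Hedged sketch, assuming iperf3 and a server instance reachable elsewhere.
# Run the client from the NIC-local node, then from a remote node, and compare.
numactl --cpunodebind=0 iperf3 -c <server-ip> -t 30
numactl --cpunodebind=3 iperf3 -c <server-ip> -t 30
```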

Thanks,
John


> And from fio log, submission latency is good, but completion latency is
> pretty bad, and maybe it is something that writing to PCI memory isn't
> committed to HW in time?