Thanks Robin.
On 2023-09-25 04:59, Kelly Devilliv wrote:
Dear all,
I am working on an ARM-V8 server with two GPU cards on it. Recently, I needed to test PCIe peer-to-peer communication between the two GPU cards, but the throughput is only 4 GB/s.
After I explored the GPU's kernel-mode driver, I found it was using the dma_map_resource() API to map the peer device's MMIO space. The Arm IOMMU driver then hardcodes the 'IOMMU_MMIO' prot in the subsequent DMA map:
static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        return __iommu_dma_map(dev, phys, size,
                        dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
                        dma_get_mask(dev));
}
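For context, the driver-side call is roughly the following (a hypothetical sketch, not the actual GPU driver code; the function name, device pointers and BAR index are made up for illustration):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/*
 * Hypothetical sketch (not the actual GPU driver code): map a peer GPU's
 * BAR so the local GPU can DMA to it.  This call ends up in the
 * iommu_dma_map_resource() shown above, where IOMMU_MMIO is forced on.
 */
static dma_addr_t map_peer_bar(struct pci_dev *local, struct pci_dev *peer,
                               int bar)
{
        phys_addr_t phys = pci_resource_start(peer, bar);
        size_t size = pci_resource_len(peer, bar);

        /* Callers should check the result with dma_mapping_error(). */
        return dma_map_resource(&local->dev, phys, size,
                                DMA_BIDIRECTIONAL, 0);
}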
And that will finally set the 'ARM_LPAE_PTE_MEMATTR_DEV' attribute in the PTE, which may have a negative impact on the performance of PCIe peer-to-peer transactions.
/*
 * Note that this logic is structured to accommodate Mali LPAE
 * having stage-1-like attributes but stage-2-like permissions.
 */
if (data->iop.fmt == ARM_64_LPAE_S2 ||
    data->iop.fmt == ARM_32_LPAE_S2) {
        if (prot & IOMMU_MMIO)
                pte |= ARM_LPAE_PTE_MEMATTR_DEV;
        else if (prot & IOMMU_CACHE)
                pte |= ARM_LPAE_PTE_MEMATTR_OIWB;
        else
                pte |= ARM_LPAE_PTE_MEMATTR_NC;
} else {
        if (prot & IOMMU_MMIO)
                pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV
                        << ARM_LPAE_PTE_ATTRINDX_SHIFT);
        else if (prot & IOMMU_CACHE)
                pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
                        << ARM_LPAE_PTE_ATTRINDX_SHIFT);
}
I tried to remove the 'IOMMU_MMIO' prot in the dma_map_resource() API and re-compile the Linux kernel; the throughput then can reach up to 28 GB/s (the change was roughly the sketch below).
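Roughly, the experiment amounts to the following modification of iommu_dma_map_resource() (shown only as a sketch of the hack, not as a proposed fix):

/*
 * Experimental hack only: drop IOMMU_MMIO so the peer BAR is no longer
 * forced to the Device memory type by the io-pgtable code above.
 */
static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        return __iommu_dma_map(dev, phys, size,
                        dma_info_to_prot(dir, false, attrs), /* no IOMMU_MMIO */
                        dma_get_mask(dev));
}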
Is there an elegant way to solve this issue without modifying the Linux kernel, e.g., a substitute for the dma_map_resource() API?
Not really. Other use-cases for dma_map_resource() include DMA offload
engines accessing FIFO registers, where allowing reordering, write-gathering,
etc. would be a terrible idea. Thus it needs to assume a "safe" MMIO memory
type, which on Arm means Device-nGnRE.
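For illustration, this is the kind of mapping that has to stay strictly ordered (a hypothetical sketch, not taken from any particular driver):

#include <linux/dma-mapping.h>

/*
 * Hypothetical illustration: a peripheral driver maps its MMIO FIFO
 * register so a DMA offload engine can feed it.  Write-gathering or
 * reordering on this mapping would corrupt the byte stream, hence the
 * conservative Device-nGnRE attribute behind dma_map_resource().
 */
static dma_addr_t map_fifo_for_dma(struct device *dma_dev,
                                   phys_addr_t fifo_phys)
{
        return dma_map_resource(dma_dev, fifo_phys, sizeof(u32),
                                DMA_TO_DEVICE, 0);
}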
However, the "proper" PCI peer-to-peer support under CONFIG_PCI_P2PDMA
ended up moving away from the dma_map_resource() approach anyway, and
allows this kind of device memory to be treated more like regular memory (via
ZONE_DEVICE) rather than arbitrary MMIO resources, so your best bet would
be to get the GPU driver converted over to using that.
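Roughly, that flow looks like the sketch below (a hypothetical example against the v5.10-era pci-p2pdma interfaces, with made-up provider/client devices and an arbitrary BAR index, skipping the cleanup a real driver would need):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical sketch of the CONFIG_PCI_P2PDMA flow (not taken from any
 * real GPU driver): the "provider" GPU publishes part of a BAR as
 * ZONE_DEVICE p2p memory, and the "client" GPU maps it for DMA.
 */
static int p2pdma_sketch(struct pci_dev *provider, struct pci_dev *client,
                         u32 len)
{
        struct scatterlist *sgl;
        unsigned int nents;
        int ret;

        /* Expose the provider's BAR 2 (for example) as p2p memory. */
        ret = pci_p2pdma_add_resource(provider, 2, 0 /* whole BAR */, 0);
        if (ret)
                return ret;

        /* Check the two devices can actually reach each other. */
        if (pci_p2pdma_distance(provider, &client->dev, true) < 0)
                return -ENXIO;

        /* Allocate from the provider's p2p pool and map it for the client. */
        sgl = pci_p2pmem_alloc_sgl(provider, &nents, len);
        if (!sgl)
                return -ENOMEM;

        ret = pci_p2pdma_map_sg(&client->dev, sgl, nents, DMA_BIDIRECTIONAL);
        if (!ret) {
                pci_p2pmem_free_sgl(provider, sgl);
                return -EIO;
        }

        /* ... hand sg_dma_address(sgl) to the client's DMA engine ... */
        return 0;
}

The point is that the peer BAR is backed by ZONE_DEVICE struct pages, so the DMA layer can treat it more like regular memory instead of forcing the Device attribute it has to assume for arbitrary MMIO resources.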
So your suggestion is that we should work out a new implementation along the lines of what is done under CONFIG_PCI_P2PDMA, instead of just using the dma_map_resource() API?
I have explored the GPU drivers from AMD, Nvidia, and Habana Labs, for example, and found that they all use the dma_map_resource() API to map the peer device's BAR address. If so, could this be a common performance issue in PCI peer-to-peer scenarios?
Thanks,
Robin.
Thank you!
Platform info:
Linux kernel version: 5.10
PCIe Gen4 x16
Sincerely,
Kelly