Re: [PATCH v5 07/17] dma-mapping: Implement link/unlink ranges API

From: Leon Romanovsky
Date: Wed Jan 15 2025 - 03:34:19 EST


On Tue, Jan 14, 2025 at 08:50:35PM +0000, Robin Murphy wrote:
> On 17/12/2024 1:00 pm, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@xxxxxxxxxx>
> >
> > Introduce new DMA APIs to perform DMA linkage of buffers
> > in layers higher than DMA.
> >
> > In the proposed API, the callers will perform the following steps.
> > In the map path:
> >	if (dma_can_use_iova(...))
> >		dma_iova_alloc()
> >		for (page in range)
> >			dma_iova_link_next(...)
> >		dma_iova_sync(...)
> >	else
> >		/* Fallback to legacy map pages */
> >		for (all pages)
> >			dma_map_page(...)
> >
> > In the unmap path:
> >	if (dma_can_use_iova(...))
> >		dma_iova_destroy()
> >	else
> >		for (all pages)
> >			dma_unmap_page(...)
> >
> > Reviewed-by: Christoph Hellwig <hch@xxxxxx>
> > Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxx>
> > ---
> > drivers/iommu/dma-iommu.c | 259 ++++++++++++++++++++++++++++++++++++
> > include/linux/dma-mapping.h | 32 +++++
> > 2 files changed, 291 insertions(+)
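
To make the map path from the commit log a bit more concrete: the caller side
is meant to look roughly like the sketch below. This is still pseudo-code, the
loops are hand-waved, the real prototypes are the ones added in this patch, and
the idea that an IOVA allocation failure drops back to the legacy path is my
reading, not something spelled out above.

	if (dma_can_use_iova(...)) {
		if (dma_iova_alloc(...))
			goto fallback;	/* assumed: fall back on alloc failure */
		for (page in range)
			dma_iova_link_next(...);
		dma_iova_sync(...);
	} else {
fallback:
		/* Fallback to legacy map pages */
		for (all pages)
			dma_map_page(...);
	}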

<...>

> > +static void iommu_dma_iova_unlink_range_slow(struct device *dev,
> > + dma_addr_t addr, size_t size, enum dma_data_direction dir,
> > + unsigned long attrs)
> > +{
> > + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > + struct iova_domain *iovad = &cookie->iovad;
> > + size_t iova_start_pad = iova_offset(iovad, addr);
> > + dma_addr_t end = addr + size;
> > +
> > + do {
> > + phys_addr_t phys;
> > + size_t len;
> > +
> > + phys = iommu_iova_to_phys(domain, addr);
> > + if (WARN_ON(!phys))
> > + continue;
>
> Infinite WARN_ON loop, nice.

No problem, I'll change it to WARN_ON_ONCE and bail out of the loop instead of
continuing, so a missing mapping can't make it spin forever.
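
Roughly (sketch of what I have in mind):

	phys = iommu_iova_to_phys(domain, addr);
	if (WARN_ON_ONCE(!phys))
		break;	/* don't keep spinning on a hole in the mapping */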

>
> > + len = min_t(size_t,
> > + end - addr, iovad->granule - iova_start_pad);

<...>

> > +
> > + swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
>
> This is still dumb. For everything other than the first and last granule,
> either it's definitely not in SWIOTLB, or it is (per the unaligned size
> thing above) but then "len" is definitely wrong and SWIOTLB will complain.

As Christoph said, we tested this with NVMe, which exercises the SWIOTLB path,
and despite plenty of unaligned sizes it ran without any SWIOTLB complaints.
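
For reference, the lengths that reach swiotlb_tbl_unmap_single() here are just
the unlinked range chopped at IOVA granule boundaries. E.g. with a 4K granule,
unlinking a range that starts 0x200 into a granule and spans 0x2c00 bytes ends
up as three calls:

	0x0e00	/* up to the first granule boundary */
	0x1000	/* one full granule */
	0x0e00	/* the tail */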

>
> > +
> > + addr += len;
> > + iova_start_pad = 0;
> > + } while (addr < end);
> > +}
> > +
> > +static void __iommu_dma_iova_unlink(struct device *dev,
> > + struct dma_iova_state *state, size_t offset, size_t size,
> > + enum dma_data_direction dir, unsigned long attrs,
> > + bool free_iova)
> > +{
> > + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > + struct iova_domain *iovad = &cookie->iovad;
> > + dma_addr_t addr = state->addr + offset;
> > + size_t iova_start_pad = iova_offset(iovad, addr);
> > + struct iommu_iotlb_gather iotlb_gather;
> > + size_t unmapped;
> > +
> > + if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
> > + (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
> > + iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
> > +
> > + iommu_iotlb_gather_init(&iotlb_gather);
> > + iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
>
> This makes things needlessly hard to follow, just keep the IOVA freeing
> separate. And by that I really mean just have unlink and free, since
> dma_iova_destroy() really doesn't seem worth the extra complexity to save
> one line in one caller...

In the initial versions I didn't implement dma_iova_destroy() and called
unlink followed by free directly. Both Jason and Christoph asked me to provide
dma_iova_destroy(), so that the unmap and the IOVA free can reuse the same
iotlb_gather.

Almost all callers (except HMM-like ones) will end up using this call.
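
To make the difference concrete on the caller side (sketch only; the exact
prototypes are the ones in this patch, and "mapped" is just however much the
caller linked):

	/* with dma_iova_destroy(): one call in the teardown path */
	dma_iova_destroy(dev, &state, mapped, dir, attrs);

	/*
	 * without it, every such caller open-codes the pair, and the unmap
	 * and the IOVA free no longer share one iotlb_gather:
	 */
	dma_iova_unlink(dev, &state, 0, mapped, dir, attrs);
	dma_iova_free(dev, &state);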

Let's keep it.

Thanks