RE: [PATCH v3 5/5] dma-iommu: account for min_align_mask
From: Mi, Dapeng1
Date: Wed Aug 11 2021 - 05:26:24 EST
> -----Original Message-----
> From: iommu <iommu-bounces@xxxxxxxxxxxxxxxxxxxxxxxxxx> On Behalf Of
> David Stevens
> Sent: Wednesday, August 11, 2021 10:43 AM
> To: Robin Murphy <robin.murphy@xxxxxxx>; Will Deacon <will@xxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; Tom Murphy <murphyt7@xxxxxx>;
> iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; David Stevens <stevensd@xxxxxxxxxxxx>
> Subject: [PATCH v3 5/5] dma-iommu: account for min_align_mask
>
> From: David Stevens <stevensd@xxxxxxxxxxxx>
>
> For devices which set min_align_mask, swiotlb preserves the offset of the
> original physical address within that mask. Since __iommu_dma_map
> accounts for non-aligned addresses, passing a non-aligned swiotlb address
> with the swiotlb aligned size results in the offset being accounted for twice in
> the size passed to iommu_map_atomic. The extra page exposed to DMA is
> also not cleaned up by __iommu_dma_unmap, since that function unmaps
> with the correct size. This causes mapping failures if the iova gets reused,
> due to collisions in the iommu page tables.
>
> To fix this, pass the original size to __iommu_dma_map, since that function
> already handles alignment.
>
> Additionally, when swiotlb returns non-aligned addresses, there is padding at
> the start of the bounce buffer that needs to be cleared.
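To spell out the double accounting with concrete numbers, here is a toy
user-space example of my own (not from the patch), assuming a 4 KiB IOVA
granule; GRANULE, IOVA_ALIGN() and IOVA_OFF() stand in for iovad->granule,
iova_align() and iova_offset():

	#include <stdio.h>

	#define GRANULE		0x1000UL
	#define IOVA_ALIGN(s)	(((s) + GRANULE - 1) & ~(GRANULE - 1))
	#define IOVA_OFF(p)	((p) & (GRANULE - 1))

	int main(void)
	{
		unsigned long org_size = 0x200;	    /* original mapping size */
		unsigned long tlb_phys = 0x8000a00; /* bounce addr; low bits kept by min_align_mask */
		unsigned long aligned_size = IOVA_ALIGN(org_size);	/* 0x1000 */

		/* Before the fix: __iommu_dma_map() adds the iova offset again. */
		unsigned long mapped = IOVA_ALIGN(aligned_size + IOVA_OFF(tlb_phys));
		/* The unmap path only covers the original size plus the offset. */
		unsigned long unmapped = IOVA_ALIGN(org_size + IOVA_OFF(tlb_phys));

		printf("mapped %#lx, unmapped %#lx, leaked %#lx\n",
		       mapped, unmapped, mapped - unmapped);
		return 0;
	}

With these numbers the map path covers 0x2000 bytes but the unmap path only
0x1000, so one stale page stays in the IOMMU page tables until the iova is
reused and the new mapping collides.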
>
> Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
> Signed-off-by: David Stevens <stevensd@xxxxxxxxxxxx>
> ---
> drivers/iommu/dma-iommu.c | 23 ++++++++++++-----------
> 1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 89b689bf801f..ffa7e8ef5db4 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -549,9 +549,8 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>  	struct iova_domain *iovad = &cookie->iovad;
> -	size_t aligned_size = org_size;
> -	void *padding_start;
> -	size_t padding_size;
> +	void *tlb_start;
> +	size_t aligned_size, iova_off, mapping_end_off;
>  	dma_addr_t iova;
>
>  	/*
> @@ -566,24 +565,26 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
>  	if (phys == DMA_MAPPING_ERROR)
>  		return DMA_MAPPING_ERROR;
>
> -	/* Cleanup the padding area. */
> -	padding_start = phys_to_virt(phys);
> -	padding_size = aligned_size;
> +	iova_off = iova_offset(iovad, phys);
> +	tlb_start = phys_to_virt(phys - iova_off);
>
> +	/* Cleanup the padding area. */
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
>  	    (dir == DMA_TO_DEVICE ||
>  	     dir == DMA_BIDIRECTIONAL)) {
> -		padding_start += org_size;
> -		padding_size -= org_size;
> +		mapping_end_off = iova_off + org_size;
> +		memset(tlb_start, 0, iova_off);
> +		memset(tlb_start + mapping_end_off, 0,
> +		       aligned_size - mapping_end_off);
> +	} else {
> +		memset(tlb_start, 0, aligned_size);
>  	}
Nice fix. It would be better to move the "Cleanup the padding area" comment into the if branch, where it is accurate: the else branch now clears the whole bounce buffer, not just the padding.
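Something like this, perhaps (untested sketch on top of your patch, only the
comment placement changed):

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
	    (dir == DMA_TO_DEVICE ||
	     dir == DMA_BIDIRECTIONAL)) {
		/*
		 * Cleanup only the padding; swiotlb already copied the
		 * data itself into the bounce buffer.
		 */
		mapping_end_off = iova_off + org_size;
		memset(tlb_start, 0, iova_off);
		memset(tlb_start + mapping_end_off, 0,
		       aligned_size - mapping_end_off);
	} else {
		/* Nothing is copied in, so clear the whole bounce buffer. */
		memset(tlb_start, 0, aligned_size);
	}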