Re: [PATCH V2] drm/amdgpu: limit DMA size to PAGE_SIZE for scatter-gather buffers

From: Robin Murphy
Date: Thu Apr 12 2018 - 09:51:12 EST


On 12/04/18 10:42, Christian König wrote:
> On 12.04.2018 at 08:26, Christoph Hellwig wrote:
>> On Wed, Apr 11, 2018 at 01:03:59PM +0100, Robin Murphy wrote:
>>> On 10/04/18 21:59, Sinan Kaya wrote:
>>>> Code is expecting to observe the same number of buffers returned from
>>>> dma_map_sg() as it passed to sg_alloc_table_from_pages(). This doesn't
>>>> hold true universally, especially on systems with an IOMMU.
>>> So why not fix said code? It's clearly not a real hardware limitation, and
>>> the map_sg() APIs have potentially returned fewer than nents since forever,
>>> so there's really no excuse.
>> Yes, relying on dma_map_sg returning the same number of entries as were
>> passed in is completely bogus.

> I agree that the common DRM functions should be able to deal with both, but
> we should let the driver side decide if it wants multiple addresses combined
> or not.

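(Purely to illustrate what "deal with both" means in practice: the mapping
code has to honour the segment count that dma_map_sg() actually returns,
rather than the nents it was given. The helper below is a made-up sketch,
not actual DRM/amdgpu code, and its use of sg_table fields is an assumption.)

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

/* Hypothetical helper, for illustration only. */
static int example_map_sgt(struct device *dev, struct sg_table *sgt)
{
	struct scatterlist *sg;
	int mapped, i;

	mapped = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
	if (!mapped)
		return -ENOMEM;

	/*
	 * Walk only the 'mapped' segments the DMA layer produced; an IOMMU
	 * backend may have merged entries, so this can be fewer than
	 * sgt->orig_nents.
	 */
	for_each_sg(sgt->sgl, sg, mapped, i) {
		/* program sg_dma_address(sg) / sg_dma_len(sg) into the HW */
	}

	sgt->nents = mapped;
	return 0;
}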

>>>> The IOMMU driver tries to combine buffers into a single DMA address as
>>>> much as it can. The right thing is to tell the DMA layer how much
>>>> combining the IOMMU can do.
>>> Disagree; this is a dodgy hack, since you'll now end up passing
>>> scatterlists into dma_map_sg() which already violate max_seg_size to begin
>>> with, and I think a conscientious DMA API implementation would be within
>>> its rights to fail the mapping for that reason (I know arm64 happens not
>>> to, but that was a deliberate design decision to make my life easier at
>>> the time).
>> Agreed.

> Sounds like my understanding of max_seg_size is incorrect; what exactly
> should that describe?

The segment size and boundary mask are there to represent a device's hardware
scatter-gather capabilities: a lot of things have limits on the size and
alignment of the data pointed to by a single descriptor (e.g. an xHCI TRB,
where the data buffer must not cross a 64KB boundary). They can also be
relevant to non-scatter-gather devices if they simply have limits on how big
or how aligned a single DMA transfer can be.
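(As a rough illustration of how a driver advertises such limits to the DMA
layer - the 64KB figures and the function below are placeholder assumptions,
not taken from any real device:)

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/*
 * Hypothetical probe-time snippet: declare the device's DMA segment
 * limits so that dma_map_sg() never produces a segment the hardware
 * cannot express in a single descriptor.
 */
static void example_set_dma_limits(struct device *dev)
{
	/* No single segment longer than 64KB... */
	dma_set_max_seg_size(dev, SZ_64K);

	/* ...and no segment may cross a 64KB boundary. */
	dma_set_seg_boundary(dev, SZ_64K - 1);
}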

Robin.