On 29/04/2019 06:10, Lu Baolu wrote:
On 4/26/19 11:04 PM, Christoph Hellwig wrote:
On Thu, Apr 25, 2019 at 10:07:19AM +0800, Lu Baolu wrote:
This is not VT-d specific. It's just how generic IOMMU works.
Normally, IOMMU works in paging mode. So if a driver issues DMA with
IOVA 0xAAAA0123, IOMMU can remap it with a physical address 0xBBBB0123.
But we should never expect IOMMU to remap 0xAAAA0123 with physical
address 0xBBBB0000. That's the reason why I said that IOMMU will not
remap a buffer to a different in-page offset.
Well, with the iommu it doesn't happen. With swiotlb it obviously
can happen, so drivers are fine with it. Why would that suddenly
become an issue when swiotlb is called from the iommu code?
I would say IOMMU is DMA remapping, not DMA engine. :-)
I'm not sure I really follow the issue here - if we're copying the buffer to the bounce page(s) there's no conceptual difference from copying it to SWIOTLB slot(s), so there should be no need to worry about the original in-page offset.
From the reply up-thread I guess you're trying to include an optimisation to only copy the head and tail of the buffer if it spans multiple pages, and directly map the ones in the middle, but AFAICS that's going to tie you to also using strict mode for TLB maintenance, which may not be a win overall depending on the balance between invalidation bandwidth vs. memcpy bandwidth. At least if we use standard SWIOTLB logic to always copy the whole thing, we should be able to release the bounce pages via the flush queue to allow 'safe' lazy unmaps.
Either way I think it would be worth just implementing the straightforward version first, then coming back to consider optimisations later.