Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
From: Jason Gunthorpe
Date: Tue Apr 18 2017 - 19:22:33 EST
On Tue, Apr 18, 2017 at 03:51:27PM -0700, Dan Williams wrote:
> > This really seems like much less trouble than trying to wrapper all
> > the arch's dma ops, and doesn't have the wonky restrictions.
>
> I don't think the root bus iommu drivers have any business knowing or
> caring about dma happening between devices lower in the hierarchy.
Maybe not, but performance requires some odd choices in this code.. :(
> > Setting up the iommu is fairly expensive, so getting rid of the
> > batching would kill performance..
>
> When we're crossing device and host memory boundaries how much
> batching is possible? As far as I can see you'll always be splitting
> the sgl on these dma mapping boundaries.
Splitting the sgl is different from iommu batching.
As an example, an O_DIRECT write of 1 MB with a single 4K P2P page in
the middle.
The optimum behavior is to allocate a 1MB-4K iommu range, fill it
with the CPU memory, and return a SGL with three entries: two
pointing into the range and one to the p2p page.
It is creating each range which tends to be expensive, so creating
two ranges (or worse, one range per SGL entry, which would be 255)
is very undesirable.
Jason