Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
From: Jason Gunthorpe
Date: Wed Jan 30 2019 - 16:50:49 EST
On Wed, Jan 30, 2019 at 02:01:35PM -0700, Logan Gunthorpe wrote:
> And I feel the GUP->SGL->DMA flow should still be what we are aiming
> for. Even if we need a special GUP for special pages, and a special DMA
> map; and the SGL still has to be homogenous....
*shrug* so what if the special GUP called a VMA op instead of
traversing the VMA PTEs today? Why does it really matter? It could
easily change to a struct page flow tomorrow.
> > So, I see Jerome solving the GUP problem by replacing GUP entirely
> > using an API that is more suited to what these sorts of drivers
> > actually need.
>
> Yes, this is what I'm expecting and what I want. Not bypassing the whole
> thing by doing special things with VMAs.
IMHO struct page is a big pain for this application, and if we can
build flows that don't actually need it then we shouldn't require it
just because the old flows needed it.
HMM mirror is a new flow that doesn't need struct page.
Would you feel better if this also came along with a:
  struct dma_sg_table *sgl_dma_map_user(struct device *dma_device,
                                        void __user *ptr, size_t len)
flow which returns an already *DMA MAPPED* sgl, carrying no struct
page pointers, as an alternative interface?
We can certainly call an API like this from RDMA for non-ODP MRs.
Eliminating the page pointers also eliminates the __iomem
problem. However, this sgl object is not copyable or accessible from
the CPU, so the caller must be sure it doesn't need CPU access when
using this API.
For RDMA I'd include some flag in the struct ib_device if the driver
requires CPU accessible SGLs and call the right API. Maybe the block
layer could do the same trick for O_DIRECT?
This would also directly solve the P2P problem with hfi1/qib/rxe, as
I'd likely also say that pci_p2pdma_map_sg() returns the same
DMA-only sgl.
Jason