On Thu, Sep 17, 2020 at 10:09:12AM +0200, Daniel Vetter wrote:
> Yeah, but it doesn't work when forwarding from the drm chardev to the
> dma-buf on the importer side, since you'd need a ton of different
> address spaces. And you still rely on the core code picking up your
> pgoff mangling, which feels about as risky to me as the vma file
> pointer wrangling - if it's not consistently applied the reverse map
> is toast and unmap_mapping_range doesn't work correctly for our needs.
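For context, "works correctly for our needs" means revoke: when the
exporter has to take the buffer away it must be able to zap every PTE
that maps it, and unmap_mapping_range() only finds VMAs that were
inserted into one address_space with consistent pgoffs. A minimal
sketch of the idea - revoke_dma_buf_mappings() is a made-up name, not
an existing helper:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>

    static void revoke_dma_buf_mappings(struct dma_buf *dmabuf)
    {
            /*
             * Walk the reverse map of the dma-buf's address_space and
             * zap all PTEs covering [0, dmabuf->size). Any importer
             * VMA that kept its original vm_file/vm_pgoff is invisible
             * here - which is exactly the inconsistency above.
             */
            unmap_mapping_range(dmabuf->file->f_mapping, 0,
                                dmabuf->size, 1);
    }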
Am 17.09.20 um 13:31 schrieb Jason Gunthorpe:
> I would think the pgoff has to be translated at the same time the
> vma->vm_file is changed? The owner of the dma_buf should have one
> virtual address space and FD, all its dma bufs should be linked to
> it, and all pgoffs translated to that space.
>
> BTW, while people are looking at this, is there a way to go from a
> VMA to a dma_buf that owns it? So, user VA -> find_vma -> dma_buf
> object -> dma_buf operations on the memory it represents.
>
> Is vma->vm_file->private_data universally a dma_buf pointer at least?
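If the private_data assumption held, the lookup Jason describes would
be roughly this - a sketch only, va_to_dma_buf() is hypothetical and,
as Christian notes below, the assumption does not hold today:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>

    static struct dma_buf *va_to_dma_buf(struct mm_struct *mm,
                                         unsigned long addr)
    {
            struct vm_area_struct *vma;
            struct dma_buf *dmabuf = NULL;

            mmap_read_lock(mm);
            vma = find_vma(mm, addr);
            if (vma && vma->vm_start <= addr && vma->vm_file)
                    /* Assumption: private_data is the dma_buf. */
                    dmabuf = vma->vm_file->private_data;
            if (dmabuf)
                    get_dma_buf(dmabuf); /* hold a ref past the unlock */
            mmap_read_unlock(mm);
            return dmabuf;
    }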
On Thu, Sep 17, 2020 at 02:03:48PM +0200, Christian König wrote:
> Ah, yes we are already doing this in amdgpu as well, but only for
> DMA-bufs, or more generally buffers which are mmapped by this driver
> instance.
>
> Only a driver specific one. For TTM drivers vma->vm_private_data
> points to the buffer object. Not sure about the drivers using GEM
> only.

Uh geez I didn't know amdgpu was doing that :-/
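The trick in question, roughly - this is a sketch of the pattern via
the dma_buf_mmap() helper, not amdgpu's actual code:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>

    /*
     * Importer-side mmap handler punting to the exporter.
     * dma_buf_mmap() swaps vma->vm_file over to the dma-buf's file
     * and rewrites vma->vm_pgoff to the given offset inside the
     * dma-buf, so the mapping lands in the exporter's reverse map.
     */
    static int importer_forward_mmap(struct dma_buf *dmabuf,
                                     struct vm_area_struct *vma)
    {
            /* 0: map from the start of the dma-buf. */
            return dma_buf_mmap(dmabuf, vma, 0);
    }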
Nope. I think if you want this without some large scale rewrite of a
lot of code we'd need a vmops->get_dmabuf or similar. Not pretty, but
would get the job done.
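Spelled out, the get_dmabuf idea would be something like this -
entirely hypothetical, no such hook exists in vm_operations_struct
today:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>

    /*
     * Hypothetical addition to struct vm_operations_struct:
     *
     *   struct dma_buf *(*get_dmabuf)(struct vm_area_struct *vma);
     *
     * TTM drivers could return the dma_buf (if any) behind the
     * buffer object they keep in vma->vm_private_data, GEM drivers
     * likewise, and core code would stop guessing at private_data:
     */
    static struct dma_buf *vma_to_dma_buf(struct vm_area_struct *vma)
    {
            if (vma->vm_ops && vma->vm_ops->get_dmabuf)
                    return vma->vm_ops->get_dmabuf(vma);
            return NULL; /* not dma-buf backed, or not converted */
    }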
Am 17.09.20 um 14:18 schrieb Jason Gunthorpe:
> So there is no general dma_buf service? That is a real bummer.

Mostly historical reasons and "it's complicated". One problem is that
dma-buf isn't a powerful enough interface that drivers could use it
for all their native objects, e.g. userptr doesn't pass through it,
and clever cache flushing tricks aren't allowed and a bunch of other
things. So there are some serious roadblocks before we could have a
common allocator (or set of allocators) behind dma-buf.

> Why are drivers in control of the vma? I would think dma_buf should
> be the vma owner. IIRC module lifetime correctness essentially hinges
> on the module owner of the struct file.

On Thu, Sep 17, 2020 at 02:24:29PM +0200, Christian König wrote:
> Because the page fault handling is completely driver specific. We
> could install some DMA-buf vmops, but that would just be another
> layer of redirection.
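What that redirection layer could look like - a sketch assuming a
hypothetical dma_buf_ops.fault hook, nothing like this is in the
tree:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>

    static vm_fault_t dma_buf_vm_fault(struct vm_fault *vmf)
    {
            /* Assumes dma-buf owned VMAs stash the dma_buf here. */
            struct dma_buf *dmabuf = vmf->vma->vm_private_data;

            /* One extra indirect call per fault, then the exporter
             * does its driver-specific thing. */
            return dmabuf->ops->fault(dmabuf, vmf); /* hypothetical */
    }

    static const struct vm_operations_struct dma_buf_vm_ops = {
            .fault = dma_buf_vm_fault,
    };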
Am 17.09.20 um 16:35 schrieb Jason Gunthorpe:
> If it is already taking a page fault I'm not sure the extra function
> call indirection is going to be a big deal. Having a uniform VMA
> sounds saner than every driver custom rolling something.
>
> When I unwound a similar mess in RDMA all the custom VMA stuff in
> the drivers turned out to be generally buggy, at least.

On Thu, Sep 17, 2020 at 04:54:44PM +0200, Christian König wrote:
> Yeah, that is exactly like amdgpu is doing it. Going to document
> that somehow when I'm done with TTM cleanups.

On Thu, Sep 17, 2020 at 5:24 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> Sounds OK.
>
> Since this is on, I guess the inverse of trying to convert a userptr
> into a dma-buf is properly rejected?
-Daniel