On Tue, Jun 22, 2021 at 11:42:27AM +0300, Oded Gabbay wrote:
> On Tue, Jun 22, 2021 at 9:37 AM Christian König
> <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
> > Am 22.06.21 um 01:29 schrieb Jason Gunthorpe:
> > > On Mon, Jun 21, 2021 at 10:24:16PM +0300, Oded Gabbay wrote:
> > > > Another thing I want to emphasize is that we are doing p2p only
> > > > through the export/import of the FD. We do *not* allow the user to
> > > > mmap the dma-buf as we do not support direct IO. So there is no access
> > > > to these pages through the userspace.
> > >
> > > Arguably mmaping the memory is a better choice, and is the direction
> > > that Logan's series goes in. Here the use of DMABUF was specifically
> > > designed to allow hitless revokation of the memory, which this isn't
> > > even using.
> >
> > The major problem with this approach is that DMA-buf is also used for
> > memory which isn't CPU accessible.

That isn't an issue here because the memory is only intended to be
used with P2P transfers so it must be CPU accessible.
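For context, the "hitless revokation" above refers to DMA-buf's dynamic
attachment path, where the importer supplies a move_notify callback and the
exporter may pull the mapping at any time. A rough importer-side sketch,
with the my_* names invented for illustration:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

struct my_importer;					/* hypothetical */
void my_importer_invalidate(struct my_importer *imp);	/* hypothetical */

/* Called by the exporter (with the dma_buf reservation lock held) when
 * the backing memory is about to move or be revoked.  The importer has
 * to stop using the current mapping and re-map on next use.
 */
static void my_move_notify(struct dma_buf_attachment *attach)
{
	my_importer_invalidate(attach->importer_priv);
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,	/* importer can handle PCI P2P addresses */
	.move_notify	 = my_move_notify,
};

static struct sg_table *my_import(struct my_importer *imp, int fd,
				  struct device *dev)
{
	struct dma_buf *dmabuf = dma_buf_get(fd);
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, imp);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* Dynamic importers must map under the reservation lock. */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);

	return sgt;
}

The exporter side triggers the callback through dma_buf_move_notify() while
holding the reservation lock, which is what makes the revocation "hitless".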
> > That was one of the reasons we didn't even considered using the mapping
> > memory approach for GPUs.

Well, now we have DEVICE_PRIVATE memory that can meet this need
too.. Just nobody has wired it up to hmm_range_fault()
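As a point of reference, DEVICE_PRIVATE pages are created by registering the
device memory with memremap_pages(); a rough sketch of that registration,
with the my_* names invented and the fault/free callbacks left out:

#include <linux/memremap.h>
#include <linux/ioport.h>

/* Hypothetical driver state for a chunk of device-local memory. */
struct my_dev_mem {
	struct dev_pagemap pagemap;
};

/* Callbacks a real driver must fill in: migrate_to_ram() handles CPU
 * faults on DEVICE_PRIVATE pages, page_free() reclaims device memory.
 */
static const struct dev_pagemap_ops my_pagemap_ops;	/* hypothetical */

static int my_register_device_private(struct device *dev,
				      struct my_dev_mem *mem,
				      unsigned long size)
{
	struct resource *res;
	void *addr;

	/* DEVICE_PRIVATE pages are never CPU mapped, so a free range of
	 * physical address space is borrowed just to give them a pfn.
	 */
	res = devm_request_free_mem_region(dev, &iomem_resource, size);
	if (IS_ERR(res))
		return PTR_ERR(res);

	mem->pagemap.type = MEMORY_DEVICE_PRIVATE;
	mem->pagemap.range.start = res->start;
	mem->pagemap.range.end = res->end;
	mem->pagemap.nr_range = 1;
	mem->pagemap.ops = &my_pagemap_ops;
	mem->pagemap.owner = mem;

	/* Creates the struct pages backing the device memory. */
	addr = devm_memremap_pages(dev, &mem->pagemap);
	return PTR_ERR_OR_ZERO(addr);
}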
> > > So you are taking the hit of very limited hardware support and reduced
> > > performance just to squeeze into DMABUF..
>
> Thanks Jason for the clarification, but I honestly prefer to use
> DMA-BUF at the moment.
> It gives us just what we need (even more than what we need as you
> pointed out), it is *already* integrated and tested in the RDMA
> subsystem, and I'm feeling comfortable using it as I'm somewhat
> familiar with it from my AMD days.

You still have the issue that this patch is doing all of this P2P
stuff wrong - following the already NAK'd AMD approach.
> I'll go and read Logan's patch-set to see if that will work for us in
> the future. Please remember, as Daniel said, we don't have struct page
> backing our device memory, so if that is a requirement to connect to
> Logan's work, then I don't think we will want to do it at this point.

It is trivial to get the struct page for a PCI BAR.
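For illustration, the existing P2PDMA helpers will create struct pages
covering a BAR in a couple of calls. This is a generic sketch with an
assumed BAR number, not the habanalabs driver:

#include <linux/pci-p2pdma.h>
#include <linux/sizes.h>

static int my_expose_bar(struct pci_dev *pdev)
{
	void *addr;
	int rc;

	/* Create MEMORY_DEVICE_PCI_P2PDMA struct pages covering all of
	 * BAR 4 (a size of 0 means "the whole BAR").
	 */
	rc = pci_p2pdma_add_resource(pdev, 4, 0, 0);
	if (rc)
		return rc;

	/* Carve an allocation out of the BAR.  The returned kernel
	 * address is backed by real struct pages, so it can go into a
	 * scatterlist and be DMA mapped like ordinary memory.
	 */
	addr = pci_alloc_p2pmem(pdev, SZ_1M);
	if (!addr)
		return -ENOMEM;

	pci_free_p2pmem(pdev, addr, SZ_1M);
	return 0;
}

Logan's series builds on this same infrastructure, which is presumably why
struct pages come up as a requirement for connecting to it.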