>> My main interest has been what data structure is produced in the
>> attach APIs.
>>
>> Eg today we have a struct dma_buf_attachment that returns a sg_table.
>>
>> I'm expecting some kind of new data structure, let's call it "physical
>> list", that is some efficient coding of meta/addr/len tuples that works
>> well with the new DMA API. Matthew has been calling this thing phyr..
>
> I would not use a data structure at all. Instead we should have something
> like an iterator/cursor based approach similar to what the new DMA API is
> doing.

I'm certainly open to this idea. There may be some technical
challenges, it is a big change from scatterlist today, and
function-pointer-per-page sounds like bad performance if there are
a lot of pages..

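To make the contrast concrete - this is only a sketch, phyr is not an
in-tree structure and every name below is invented - the "physical list"
flavour would be a plain array of tuples and the cursor flavour just a
bit of state walking it, with no function call per page:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical "physical list": an efficient coding of meta/addr/len */
struct phys_entry {
        uint64_t addr;          /* physical or pre-mapped address */
        uint64_t len;           /* length of the contiguous range */
        uint32_t meta;          /* encoding for cachable/encrypted/etc */
};

struct phys_list {
        struct phys_entry *entries;
        size_t nr;
};

/* Cursor-style walk, in the spirit of the iterator based approach */
struct phys_cursor {
        const struct phys_list *list;
        size_t idx;
        uint64_t consumed;      /* bytes already consumed in entries[idx] */
};

/* Advance the cursor by @bytes and return the address it now points at */
static inline uint64_t phys_cursor_advance(struct phys_cursor *cur,
                                           uint64_t bytes)
{
        while (cur->idx < cur->list->nr &&
               cur->consumed + bytes >= cur->list->entries[cur->idx].len) {
                bytes -= cur->list->entries[cur->idx].len - cur->consumed;
                cur->consumed = 0;
                cur->idx++;
        }
        if (cur->idx >= cur->list->nr)
                return 0;       /* walked past the end */
        cur->consumed += bytes;
        return cur->list->entries[cur->idx].addr + cur->consumed;
}
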
RDMA would probably have to stuff this immediately into something like
a phyr anyhow, because it needs the full extent of the thing being
mapped to figure out what the HW page size and geometry should be -
that would be trivial though, and a RDMA problem.

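To illustrate why the full extent matters (a simplified sketch reusing
the hypothetical phys_entry above; the real RDMA logic is
ib_umem_find_best_pgsz() and handles more cases): every interior
segment boundary limits the HW page size that can be selected.

/* Largest supported power-of-two page size dividing every interior boundary */
static uint64_t best_hw_page_size(const struct phys_entry *e, size_t nr,
                                  uint64_t supported_mask)
{
        uint64_t misalign = 0;
        uint64_t pg;
        size_t i;

        for (i = 0; i < nr; i++) {
                if (i != 0)             /* interior start constrains */
                        misalign |= e[i].addr;
                if (i != nr - 1)        /* interior end constrains */
                        misalign |= e[i].addr + e[i].len;
        }
        for (pg = UINT64_C(1) << 63; pg >= 4096; pg >>= 1)
                if ((supported_mask & pg) && !(misalign & (pg - 1)))
                        return pg;
        return 0;       /* nothing in supported_mask fits */
}
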
Note I said "populate a VMA", ie a helper to build the VMA PTEs only.That won't work like this.Now, if you are asking if the current dmabuf mmap callback can be
improved with the above? Maybe? phyr should have the neccessary
information inside it to populate a VMA - eventually even fully
correctly with all the right cachable/encrypted/forbidden/etc flags.
See the exporter needs to be informed about page faults on the VMA toAll of this would still have to be provided outside in the same way as
eventually wait for operations to end and sync caches.
today.
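Roughly, in invented code - exporter_buf and its pfn table are made up,
only dma_resv_wait_timeout() and vmf_insert_pfn() are existing
interfaces - the fault path stays with the exporter, and only the last
PTE-building step would come from such a helper:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/mm.h>
#include <linux/sched.h>

/* Invented exporter state, just to make the split visible */
struct exporter_buf {
        struct dma_buf *dmabuf;
        unsigned long *pfns;    /* hypothetical per-page PFN table */
};

static vm_fault_t exporter_vma_fault(struct vm_fault *vmf)
{
        struct exporter_buf *ebuf = vmf->vma->vm_private_data;
        long ret;

        /* The exporter is still told about the fault and syncs first ... */
        ret = dma_resv_wait_timeout(ebuf->dmabuf->resv, DMA_RESV_USAGE_WRITE,
                                    true, MAX_SCHEDULE_TIMEOUT);
        if (ret < 0)
                return VM_FAULT_SIGBUS;

        /*
         * ... and only this last step, building the PTE from the physical
         * list, is what a generic "populate a VMA" helper would cover.
         */
        return vmf_insert_pfn(vmf->vma, vmf->address, ebuf->pfns[vmf->pgoff]);
}
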
> For example we have cases where multiple devices are in the same IOMMU
> domain and re-using their DMA address mappings.

IMHO this is just another flavour of "private" address flow between
two cooperating drivers.

It is not a "dma address" in the sense of a dma_addr_t that was output
from the DMA API. I think that subtle distinction is very
important. When I say pfn/dma address I'm really only talking about
standard DMA API flows, used by generic drivers.

IMHO, DMABUF needs a private address "escape hatch", and cooperating
drivers should do whatever they want when using that flow. The address
is *fully private*, so the co-operating drivers can do whatever they
want. iommu_map in exporter and pass an IOVA? Fine! pass a PFN and
iommu_map in the importer? Also fine! Private is private.
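
In code terms - every name here is made up, this is not a proposed
dma-buf interface - the generic flavour is ordinary DMA API output,
while the private flavour is an opaque value whose meaning only the
exporter/importer pair defines:

#include <linux/types.h>

enum attach_flavour {
        FLAVOUR_GENERIC_DMA,    /* dma_addr_t ranges produced by the DMA API */
        FLAVOUR_PRIVATE,        /* contract known only to the two drivers */
};

struct attach_mapping {
        enum attach_flavour flavour;
        union {
                struct {
                        dma_addr_t addr;        /* standard DMA API output */
                        size_t len;
                } generic;
                u64 private_token;      /* an IOVA the exporter already
                                         * iommu_map()ed, a raw PFN, a handle -
                                         * whatever the pair agreed on */
        };
};
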
>> But in theory it should be possible to use phyr everywhere eventually, as
>> long as there's no obviously api-rules-breaking way to go from a phyr back
>> to a struct page even when that exists.
>
> I would rather say we should stick to DMA addresses as much as possible.

I remain skeptical of this.. Aside from all the technical reasons I
already outlined..

I think it is too much work to have the exporters conditionally build
all sorts of different representations of the same thing depending on
the importer. Having a lot of DRM drivers generate both a PFN list and
a DMA mapped list in their export code doesn't sound very appealing to
me at all.

It makes sense that a driver would be able to conditionally generate
private and generic based on negotiation, but IMHO, not more than one
flavour of generic..
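
Purely as a hypothetical sketch of that negotiation (none of these caps
exist), the attach-time choice could be a capability intersection with
exactly one generic flavour on offer:

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>

#define ATTACH_CAP_GENERIC              BIT(0)  /* the single generic flavour */
#define ATTACH_CAP_PRIVATE_VENDORX      BIT(1)  /* a driver-pair specific flow */

static int negotiate_flavour(u32 exporter_caps, u32 importer_caps)
{
        u32 common = exporter_caps & importer_caps;

        /* A cooperating pair picks its private flow first ... */
        if (common & ATTACH_CAP_PRIVATE_VENDORX)
                return ATTACH_CAP_PRIVATE_VENDORX;
        /* ... everyone else gets the one generic representation. */
        if (common & ATTACH_CAP_GENERIC)
                return ATTACH_CAP_GENERIC;
        return -EOPNOTSUPP;
}
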
Jason