Dongwon Kim wrote:
> On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:
> > On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:
> > > Hi,
> > >
> > > > > I fail to see any common ground for xen-zcopy and udmabuf ...
> > > > Does the above mean you can assume that xen-zcopy and udmabuf
> > > > can co-exist as two different solutions?
> > > Well, udmabuf route isn't fully clear yet, but yes.
> > >
> > > See also gvt (intel vgpu), where the hypervisor interface is
> > > abstracted away into a separate kernel module even though most of
> > > the actual vgpu emulation code is common.
> > Thank you for your input, I'm just trying to figure out
> > which of the three z-copy solutions intersect and how much.
> >
> > Dongwon Kim wrote:
> > > xen z-copy solution is pretty similar fundamentally to hyper_dmabuf
> > > in terms of these core sharing features:
> > > 1. the sharing process - import a prime/dmabuf from the producer ->
> > > extract the underlying pages and get those shared -> return
> > > references for the shared pages
> > >
> > > 2. the page sharing mechanism - it uses the Xen grant table.
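
To make items 1. and 2. concrete, here is a minimal sketch of that flow
against the standard kernel dma-buf and Xen grant-table APIs (the
function name is made up, error unwinding is omitted, and this is not
the actual xen-zcopy or hyper_dmabuf code):

#include <linux/dma-buf.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Walk the pages backing an attached dma-buf and grant each one to the
 * peer domain, collecting one grant reference per shared page.  The
 * resulting array of refs is what gets handed over to the importing VM.
 */
static int share_dmabuf_pages(struct dma_buf_attachment *attach,
                              domid_t peer, grant_ref_t *refs)
{
        struct sg_table *sgt;
        struct sg_page_iter iter;
        int i = 0, ref;

        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt))
                return PTR_ERR(sgt);

        for_each_sg_page(sgt->sgl, &iter, sgt->nents, 0) {
                /* grant the peer domain read/write access to this page */
                ref = gnttab_grant_foreign_access(peer,
                                xen_page_to_gfn(sg_page_iter_page(&iter)),
                                0 /* not read-only */);
                if (ref < 0)
                        return ref;
                refs[i++] = ref;
        }

        return i; /* number of references to pass to the importer */
}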
> > > And to give you a quick summary of the differences, as far as I
> > > understand them, between the two implementations (please correct
> > > me if I am wrong, Oleksandr):
> > >
> > > 1. xen-zcopy is DRM specific - it can import only a DRM prime
> > > buffer, while hyper_dmabuf can export any dmabuf regardless of
> > > originator
> > This is true. Again, this is because of the use-cases it covers.
> > > 2. xen-zcopy doesn't seem to have dma-buf synchronization between
> > > two VMs, while (as danvet called it, remote dmabuf api sharing)
> > > hyper_dmabuf sends out a synchronization message to the exporting
> > > VM for synchronization.
> > > 3. 1-level references - when using the grant table for sharing
> > > pages, there will be the same # of refs (each 8 bytes) as the # of
> > > shared pages, which is passed to the userspace to be shared with
> > > the importing VM in case of xen-zcopy. Compared to this,
> > > hyper_dmabuf does multiple level addressing to generate only one
> > > reference id that represents all shared pages.
> > To be precise, a grant ref is 4 bytes.
> > The reason for that is that xen-zcopy is a helper driver, e.g. [...]
> > In the protocol [2] only one reference to the gref directory is
> > passed between VMs.
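
The gref directory is straightforward to sketch: pack the per-page
grant references into a page, grant that page as well, and pass only
the resulting single reference between the VMs. A minimal single-page
version (names are made up, multi-page directories and error unwinding
omitted, not the actual protocol code):

#include <linux/mm.h>
#include <linux/string.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* a grant_ref_t is a uint32_t, so one 4 KiB page holds 1024 refs */
#define REFS_PER_PAGE   (PAGE_SIZE / sizeof(grant_ref_t))

/*
 * Copy up to one page worth of per-page grant references into a
 * directory page and grant that page to the peer domain: the single
 * reference returned is all that has to travel between the VMs.
 */
static int make_gref_directory(domid_t peer, const grant_ref_t *refs,
                               unsigned int count)
{
        struct page *dir;

        if (count > REFS_PER_PAGE)
                return -EINVAL; /* chain multiple pages in real code */

        dir = alloc_page(GFP_KERNEL);
        if (!dir)
                return -ENOMEM;

        memcpy(page_address(dir), refs, count * sizeof(*refs));

        return gnttab_grant_foreign_access(peer, xen_page_to_gfn(dir),
                                           1 /* read-only */);
}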
> > > 4. inter VM messaging (hyper_dmabuf only) - hyper_dmabuf has
> > > inter-vm msg communication defined for dmabuf synchronization and
> > > private data (meta info that Matt Roper mentioned) exchange.
> > This is true, xen-zcopy has no means for inter VM sync and
> > meta-data; again, xen-zcopy is decoupled from inter VM communication.
> > > 5. driver-to-driver notification (hyper_dmabuf only) - the
> > > importing VM gets notified when a new dmabuf is exported from the
> > > other VM - a uevent can optionally be generated when this happens.
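
The optional uevent part of 5. could look roughly like this (a
hypothetical helper, not the actual hyper_dmabuf code; the HDMABUF_ID
key is made up):

#include <linux/device.h>
#include <linux/kobject.h>

/* let userspace know that a new remote dmabuf is ready for import */
static void notify_new_dmabuf(struct device *dev, int hyper_dmabuf_id)
{
        char id[32];
        char *envp[] = { id, NULL };

        snprintf(id, sizeof(id), "HDMABUF_ID=%d", hyper_dmabuf_id);
        kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);
}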
> > > 6. structure - hyper_dmabuf is targeting to provide a generic
> > > solution for inter-domain dmabuf sharing for most hypervisors,
> > > which is why it has two layers, as mattrope mentioned: a front-end
> > > that contains the standard API and a backend that is specific to
> > > the hypervisor.
> > > > And what about hyper-dmabuf?
> > > No idea, didn't look at it in detail.
> > >
> > > Looks pretty complex from a distant view. Maybe because it tries
> > > to build a communication framework using dma-bufs instead of a
> > > simple dma-buf passing mechanism.
> we started with simple dma-buf sharing but realized there are many
> things we need to consider in real use-cases, so we added
> communication, notification and dma-buf synchronization, then
> re-structured it into front-end and back-end (this made things more
> complicated..) since Xen was not our only target. Also, we thought
> passing the reference for the buffer (hyper_dmabuf_id) is not secure,
> so we added the uevent mechanism later.
> > Yes, I am looking at it now, trying to figure out the full story
> > and its implementation. BTW, the Intel guys were about to share some
> > test application for hyper-dmabuf, maybe I have missed one.
> > It could probably better explain the use-cases and the complexity
> > they have in hyper-dmabuf.
> One example is actually on github. If you want to take a look at it,
> please visit:
> https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export

Thank you, I'll have a look.

> > > Like xen-zcopy it seems to depend on the idea that, because the
> > > hypervisor manages all memory, it is easy for guests to share
> > > pages with the help of the hypervisor. Which simply isn't the
> > > case on kvm.
> > So, for xen-zcopy we were not trying to make it generic,
> > it just solves display (dumb) zero-copying use-cases for Xen.
> > We implemented it as a DRM helper driver because we can't see any
> > other use-cases as of now.
> > For example, we also have a Xen para-virtualized sound driver, but
> > its buffer memory usage is not comparable to what display wants,
> > and it works somewhat differently (e.g. there is no "frame done"
> > event, so one can't tell when the sound buffer can be "flipped").
> > At the same time, we do not use virtio-gpu, so this could probably
> > be one more candidate for shared dma-bufs some day.
> > > hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf
> > > build on top of xen-zcopy.
> > >
> > > cheers,
> > >   Gerd
> > Hm, I can imagine that: xen-zcopy could be the library code for
> > hyper-dmabuf in terms of implementing all that page sharing fun in
> > multiple directions, e.g. Host->Guest, Guest->Host, Guest<->Guest.
> > But I'll let Matt and Dongwon comment on that.
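
For reference, the importing direction of that page sharing fun can be
sketched with the standard grant mapping API as well (illustrative
only: per-page status checks are skipped, and real code must keep the
grant handles around for later unmapping):

#include <linux/mm.h>
#include <linux/slab.h>
#include <xen/grant_table.h>

/* map `count` grant refs from the peer domain into local pages */
static int map_peer_refs(domid_t peer, const grant_ref_t *refs,
                         struct page **pages, unsigned int count)
{
        struct gnttab_map_grant_ref *ops;
        unsigned int i;
        int err;

        err = gnttab_alloc_pages(count, pages);
        if (err)
                return err;

        ops = kcalloc(count, sizeof(*ops), GFP_KERNEL);
        if (!ops) {
                gnttab_free_pages(count, pages);
                return -ENOMEM;
        }

        for (i = 0; i < count; i++)
                gnttab_set_map_op(&ops[i],
                                  (unsigned long)page_address(pages[i]),
                                  GNTMAP_host_map, refs[i], peer);

        err = gnttab_map_refs(ops, NULL, pages, count);
        kfree(ops);
        return err;
}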
> I think we can definitely collaborate. Especially, maybe we are using
> some outdated sharing mechanism/grant-table mechanism in our Xen
> backend (thanks for bringing that up, Oleksandr). However, the
> question is, once we collaborate somehow, can xen-zcopy's use-case
> use the standard API that hyper_dmabuf provides? I don't think we
> need different IOCTLs that do the same in the final solution.

Well, this is true. And at the same time this is just a matter [...]
If you think of xen-zcopy as a library (which implements Xen [...]
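
Just to illustrate what such a split could look like, a purely
hypothetical library interface (none of these functions exist today;
names and signatures are made up):

#include <xen/grant_table.h>

struct page;
struct zshare_buf; /* opaque per-buffer bookkeeping */

/* producer: grant all pages of a buffer to the peer domain and return
 * the single gref-directory reference to be passed between the VMs */
int zshare_export(domid_t peer, struct page **pages, unsigned int count,
                  grant_ref_t *dir_ref, struct zshare_buf **buf);

/* consumer: map a peer's buffer, given the directory reference */
int zshare_import(domid_t peer, grant_ref_t dir_ref,
                  struct zshare_buf **buf);

/* both sides: end foreign access / unmap and free the bookkeeping */
void zshare_release(struct zshare_buf *buf);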
[1] https://github.com/xen-troops/displ_be

Thank you,
Oleksandr
P.S. Sorry for making your original mail thread discuss things much
broader than your RFC...