On Wed, Apr 25, 2018 at 08:34:55AM +0200, Daniel Vetter wrote:
> On Wed, Apr 25, 2018 at 09:07:07AM +0300, Oleksandr Andrushchenko wrote:
> > On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> > > Had a meeting with Daniel and talked about bringing out the generic
> > > part of hyper-dmabuf to userspace, which means we will most likely
> > > reuse the IOCTLs defined in xen-zcopy for our use-case if we follow
> > > his suggestion.
> > I will still have a kernel-side API, so backends/frontends implemented
> > in the kernel can access that functionality as well.
> > > So, assuming we use these IOCTLs as they are, there are several
> > > things I would like you to double-check:
> > > 1. Returning a gref as is to userspace is still unsafe, because it is
> > > a constant, easy to guess, and any process that hijacks it can easily
> > > exploit the buffer. So I am wondering if it's possible to keep
> > > dmabuf-to-gref or gref-to-dmabuf in kernel space and add other layers
> > > on top of those in the actual IOCTLs to add some safety. We introduced
> > > a flink-like hyper_dmabuf_id including a random number, but many say
> > > even that is still not safe.
> > Yes, it is generally unsafe. But even if we have implemented the
> > approach you have in hyper-dmabuf, or a similar one, what stops
> > malicious software from doing the same with the existing gntdev UAPI?
> > No need to brute-force the new UAPI if there is a simpler one.
> > That being said, I'll put security aside at the first stage, but of
> > course we can start investigating ways to improve it (I assume you
> > already have use-cases where security issues must be considered, so
> > you can probably tell more about what has been investigated so far).
Yeah, although we think we lowered the chance of guessing the right id
by adding a random number to it, the security hole is still there as long
as we use a constant id across VMs. We understood this from the beginning
but couldn't find a better way. So what we proposed is to make sure our
customers understand this and prepare a very secure way to handle this id
in userspace (mattrope, however, recently proposed a "hyper-pipe" through
which an FD-type id can be converted and exchanged safely, so we are
looking into this now).

And another approach we have proposed is to use event polling, which lets
the privileged userspace application in the importing guest know about a
newly exported DMA-BUF, so that it can retrieve it from the queue and then
redistribute it to other applications. This method is not very flexible;
however, it is one way to hide the id from userspace completely.

Anyway, yes, we can continue to investigate possible ways to make it
more secure.
Great, if you come up with something then you'll be able
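As an aside, the property being debated here (a random id only makes guessing harder, while anyone who learns the constant id can still import the buffer) can be sketched in a few lines of userspace Python. This is purely illustrative; the registry model and all names are invented, not hyper_dmabuf code:

```python
import itertools
import secrets

# Toy model of flink/gref-style sharing: a buffer is registered under an
# id, and any process that can name the id can import the buffer.
registry = {}
_next_id = itertools.count(1)

def export_with_constant_id(buf):
    # Small, sequential ids are trivially guessable.
    gid = next(_next_id)
    registry[gid] = buf
    return gid

def export_with_random_id(buf):
    # 64 random bits defeat brute-force scanning, but the id is still a
    # bearer token: whoever learns it can import the buffer.
    hid = secrets.randbits(64)
    registry[hid] = buf
    return hid

def try_import(some_id):
    return registry.get(some_id)

# A scan over small integers finds the sequential id immediately...
cid = export_with_constant_id(b"exported framebuffer")
found = next(g for g in range(1, 100) if try_import(g) is not None)

# ...while the same scan (practically) never hits the random 64-bit id.
rid = export_with_random_id(b"other buffer")
hits = [g for g in range(1, 100)
        if try_import(g) not in (None, b"exported framebuffer")]
```

This is exactly why the discussion keeps returning to fd passing: a file descriptor is a capability handed to one process, not a name anyone can enumerate.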
> > > 2. Maybe we could take the hypervisor-independent processing (e.g.
> > > SGT<->page) out of xen-zcopy and put it into a new helper library.
> > I am not quite sure what those helpers would be and whether they are
> > really needed. I believe this can be done, but at the first stage I
> > would go without that helper library, so it is clearly seen what can
> > be moved into it later (I know that you want to run ACRN as well, but
> > can I run it on ARM? ;)
> Maybe a bit more context here:
> So in graphics we have this old flink approach for buffer sharing with
> processes, and it's unsafe because the buffer handles are way too easy
> to guess. And anyone with access to the graphics driver can then import
> that buffer object. We switched to file-descriptor passing to make sure
> only the intended recipient can import a buffer.
> So at the vm->vm level it sounds like grefs are safe, because they're
> only valid for a specific other guest (or for sets of guests, I am not
> sure about that). That means security is only within the OS. For that
> you need to make sure that unprivileged userspace simply can't ever
> access a gref. If that doesn't work out, then I guess we should improve
> the xen gref stuff to have a more secure cookie.
> There's already helpers for walking sgtables and adding
> pages/enumerating pages. I don't think we need more.
Ok, where would those helpers be located? If we consider that we will use
these with other hypervisor drivers, maybe it's better to place them in
some common area?
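For reference, the file-descriptor passing Daniel describes (the same mechanism DRM PRIME relies on for dma-buf fds) can be demonstrated entirely from userspace with SCM_RIGHTS over a Unix socket. A minimal sketch, assuming Linux and Python 3.9+, with a pipe standing in for the dma-buf:

```python
import os
import socket

# Only the process holding the other end of this socket can receive the
# descriptor -- there is no global, guessable name involved.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# A pipe stands in for the dma-buf: the "exporter" writes into it.
r, w = os.pipe()
os.write(w, b"hello from exporter")
os.close(w)

# Pass the descriptor itself (SCM_RIGHTS under the hood).
socket.send_fds(parent, [b"x"], [r])
os.close(r)  # the sender can drop its copy; the receiver's stays valid

msg, fds, flags, addr = socket.recv_fds(child, 1, 1)
data = os.read(fds[0], 64)

os.close(fds[0])
parent.close()
child.close()
```

The receiver ends up with its own reference to the same underlying object, which is the property that makes fd passing safe where flink-style names are not.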
> > > 3. Please consider the case where the original DMA-BUF's first offset
> > > and last length are not 0 and PAGE_SIZE, respectively. I assume the
> > > current xen-zcopy only supports page-aligned buffers, n x PAGE_SIZE
> > > in size.
> > Hm, what is the use-case for that? IMO, yes, see above. I won't be
> > considering this use-case during prototyping as
Just in general use-case.. I was just considering the case (might be a
corner case..) where sg->offset != 0 or sg->length != PAGE_SIZE.
Hyper-dmabuf sends this information (first offset and last length)
together with the references for the pages. So I was wondering if we
should do a similar thing in zcopy, since your goal is now to cover
general dma-buf use-cases (however, danvet mentioned the hard constraint
of dma-buf below.. so if this can't happen according to the spec, then we
can ignore it..)
> dma-buf is always page-aligned. That's a hard constraint of the linux
> dma-buf interface spec.
> -Daniel
Hmm.. I am a little bit confused.. So does it mean dmabuf->size is always
n * PAGE_SIZE? What if the sgt behind the dmabuf has an offset other than
0 for the first sgl, or the length of the last sgl is not PAGE_SIZE? Are
you saying this case is not acceptable for dma-buf?
> > Thank you,
> > Oleksandr
--
thanks,
DW
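For what it's worth, the metadata in question (first offset and last length of a buffer that is not page-aligned) is plain arithmetic over the page grid. A sketch, assuming 4 KiB pages; `grant_layout` is an invented helper for illustration, not xen-zcopy or hyper-dmabuf code:

```python
PAGE_SIZE = 4096

def grant_layout(offset, length):
    """Return (nr_pages, first_offset, last_len) for the pages that must
    be shared to cover the byte range [offset, offset + length)."""
    first_page = offset // PAGE_SIZE
    last_byte = offset + length - 1
    nr_pages = last_byte // PAGE_SIZE - first_page + 1
    first_offset = offset % PAGE_SIZE
    last_len = last_byte % PAGE_SIZE + 1
    return nr_pages, first_offset, last_len

# Page-aligned buffer (what xen-zcopy assumes): trivial metadata.
aligned = grant_layout(0, 3 * PAGE_SIZE)      # (3, 0, PAGE_SIZE)

# Unaligned sgl, e.g. sg->offset = 100 with one page of payload: the
# range spills into a second page, so the offsets must travel with the
# page references for the importer to reconstruct the buffer exactly.
unaligned = grant_layout(100, PAGE_SIZE)      # (2, 100, 100)
```

If the dma-buf spec really guarantees page alignment, the second case never arises and the two extra fields are unnecessary, which is the crux of the exchange above.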
On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> > On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/23/2018 02:52 PM, Wei Liu wrote:
> > > > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > > So, we have to decide: either we introduce a new driver
> > > > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > > > gntdev/balloon to support dma-buf use-cases.
> > > > > Can anybody from the Xen community express their preference here?
> > > > Oleksandr talked to me on IRC about this, he said a few IOCTLs need
> > > > to be added to either existing drivers or a new driver.
> > > > I went through this thread twice and skimmed through the relevant
> > > > documents, but I couldn't see any obvious pros and cons for either
> > > > approach. So I don't really have an opinion on this.
> > > > But, assuming that if implemented in existing drivers those IOCTLs
> > > > need to be added to different drivers, which means a userspace
> > > > program needs to write more code and get more handles, it would be
> > > > slightly better to implement a new driver from that perspective.
> > > > I think this is generic enough that it could be implemented by a
> > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > something similar to this.
> > > > Wei.
> > > If gntdev/balloon extension is still considered:
> > > All the IOCTLs will be in the gntdev driver (in current xen-zcopy
> > > terminology):
> > > - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> > > - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> > > - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> > s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do
> > with the dumb scanout buffer support in the drm/gfx subsystem. This
> > here can be used for any zcopy sharing among guests (as long as your
> > endpoints understand dma-buf, which most relevant drivers do).
Of course, please see above: I was lazy to change dumb to dma-buf, so put
this notice ;)
> > > The balloon driver extension, which is needed for contiguous/DMA
> > > buffers, will be to provide a new *kernel API*; no UAPI is needed.
> > > Thank you,
> > > Oleksandr
> > You can't just wrap random userspace memory into a dma-buf. We've just
> > had this discussion with kvm/qemu folks, who proposed just that, and
> > after a bit of discussion they'll now try to have a driver which just
> > wraps a memfd into a dma-buf.
> > -Daniel
the gntdev.
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch