Re: [PATCH v3 1/2] drm/virtio: Add window server support
From: Tomeu Vizoso
Date: Thu Feb 15 2018 - 13:09:22 EST
On 02/12/2018 12:45 PM, Gerd Hoffmann wrote:
>>>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for
>>>> each resource, sends data + FDs to the compositor with SCM_RIGHTS
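
(For reference, the FD passing in step 4 is plain SCM_RIGHTS ancillary
data over the compositor's UNIX socket; below is a minimal sketch for a
single FD, with the function name made up and error handling mostly
omitted.)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Sketch: send one message plus one FD over a UNIX socket with
 * SCM_RIGHTS. */
static int send_msg_with_fd(int sock, const void *buf, size_t len, int fd)
{
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;           /* keeps the buffer aligned */
    } u;
    struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg;

    memset(&u, 0, sizeof(u));
    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}
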
>>>
>>> BTW: Is there a 1:1 relationship between buffers and shmem blocks? Or
>>> does the wayland protocol allow for offsets in buffer meta data, so you
>>> can place multiple buffers in a single shmem block?
>>
>> The latter:
>>
https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
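
That is, a client creates one pool from a single FD and carves several
wl_buffers out of it at different offsets. Untested sketch, assuming
'shm' has already been bound from the registry and using memfd_create
just to get some shmem FD (any shmem FD would do):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

/* Sketch: two wl_buffers carved out of one shmem block at different
 * offsets within the same wl_shm_pool. Error handling omitted. */
static void create_two_buffers(struct wl_shm *shm)
{
    const int width = 256, height = 256, stride = width * 4;
    const int size = stride * height * 2;     /* room for two buffers */
    int fd = memfd_create("wl-pool", 0);      /* needs a recent glibc */

    ftruncate(fd, size);

    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buf0 = wl_shm_pool_create_buffer(pool, 0,
                width, height, stride, WL_SHM_FORMAT_ARGB8888);
    struct wl_buffer *buf1 = wl_shm_pool_create_buffer(pool, stride * height,
                width, height, stride, WL_SHM_FORMAT_ARGB8888);
    (void)buf0; (void)buf1;
}
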
>
> Ah, good, that makes it a lot easier.
>
> So, yes, using ivshmem would be one option. Tricky part here is the
> buffer management though. It's just a raw piece of memory. The guest
> proxy could mmap the pci bar and manage it. But then it is again either
> unmodified guest + copying the data, or modified client (which requests
> buffers from guest proxy) for zero-copy.
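
Mapping the BAR from the guest proxy is the easy part at least: ivshmem
exposes the shared memory as BAR 2, so it would be roughly the sketch
below (the PCI address is a placeholder, error handling trimmed):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: guest proxy mapping the ivshmem shared-memory BAR (BAR 2)
 * through sysfs. */
static void *map_ivshmem_bar(size_t *len)
{
    const char *path = "/sys/bus/pci/devices/0000:00:05.0/resource2";
    int fd = open(path, O_RDWR);
    struct stat st;
    void *p;

    if (fd < 0 || fstat(fd, &st) < 0)
        return NULL;

    *len = st.st_size;
    p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                              /* mapping stays valid */
    return p == MAP_FAILED ? NULL : p;
}
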
What if at VIRTIO_GPU_CMD_RESOURCE_CREATE_2D time we created an ivshmem
device to back that resource? The ivshmem device would in turn be backed
by a hostmem device that wraps a shmem FD.
The guest client can then export that resource/BO and pass the FD to the
guest proxy. The guest proxy would import it and put the resource_id in
the equivalent message in our protocol extension.
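
On the guest side that should all be existing machinery: the client
exports the BO as a dma-buf (drmPrimeHandleToFD), and the proxy imports
it and asks virtio-gpu for the resource id behind it, roughly as below
(error handling omitted, include paths depend on the libdrm setup):

#include <stdint.h>
#include <xf86drm.h>
#include <virtgpu_drm.h>

/* Sketch: guest proxy importing a dma-buf FD received from the client
 * and looking up the virtio-gpu resource id behind it. */
static int prime_fd_to_resource_id(int drm_fd, int prime_fd, uint32_t *res_id)
{
    struct drm_virtgpu_resource_info info = { 0 };
    uint32_t handle;

    if (drmPrimeFDToHandle(drm_fd, prime_fd, &handle) < 0)
        return -1;

    info.bo_handle = handle;
    if (drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_INFO, &info) < 0)
        return -1;

    *res_id = info.res_handle;
    return 0;
}
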
QEMU would get that resource id from vsock, look up which hostmem device
is associated with that resource, and pass its FD to the compositor.
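
The QEMU side doesn't exist yet, obviously; what I have in mind is just
a table keyed on the resource id, something like the following, where
the table and how it gets populated are completely made up, and
send_msg_with_fd() is the SCM_RIGHTS sketch earlier in this mail:

#include <glib.h>
#include <stdint.h>

extern int send_msg_with_fd(int sock, const void *buf, size_t len, int fd);

/* Hypothetical lookup: resource id -> shmem FD of the backing hostmem,
 * then pass the FD on to the compositor with SCM_RIGHTS. */
static GHashTable *res_to_fd;   /* uint32_t resource id -> int fd */

static int forward_resource(int compositor_sock, uint32_t res_id,
                            const void *msg, size_t len)
{
    gpointer val;

    if (!g_hash_table_lookup_extended(res_to_fd, GUINT_TO_POINTER(res_id),
                                      NULL, &val))
        return -1;

    return send_msg_with_fd(compositor_sock, msg, len, GPOINTER_TO_INT(val));
}
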
> We also need a solution for the keymap shmem block. I guess the keymap
> doesn't change all that often, so maybe it is easiest to just copy it
> over (host proxy -> guest proxy) instead of trying to map the host shmem
> into the guest?
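
The copy itself would be simple enough: the keymap arrives as an FD plus
size in wl_keyboard.keymap, so the host proxy would do something like the
sketch below and stream the bytes over the transport (short writes not
handled):

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: host proxy copying the keymap contents out of the FD that
 * arrives with wl_keyboard.keymap, so the bytes can be forwarded to the
 * guest proxy. */
static int copy_keymap(int transport_fd, int keymap_fd, uint32_t size)
{
    void *map = mmap(NULL, size, PROT_READ, MAP_PRIVATE, keymap_fd, 0);
    ssize_t ret;

    if (map == MAP_FAILED)
        return -1;

    ret = write(transport_fd, map, size);
    munmap(map, size);
    return ret == (ssize_t)size ? 0 : -1;
}
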
Not sure if that would be much simpler than creating an ivshmem+hostmem
combo that wraps the incoming shmem FD and then having virtio-gpu create
a BO that imports it.
Regards,
Tomeu