Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

From: Chia-I Wu
Date: Wed Apr 17 2019 - 14:06:32 EST


On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discussion below, semantically resource_create_v2 creates a host
> > resource object _without_ any storage; memory_create creates a host memory
> > object which provides the storage. Is that correct?
>
> Right now all resource_create_* variants create a resource object with
> host storage. memory_create creates guest storage, and
> resource_attach_memory binds things together. Then you have to transfer
> the data.
In Gurchetan's Vulkan example, the host storage allocation happens in
(some variant of) memory_create, not in resource_create_v2. Maybe
that's what got me confused.
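
To make sure I follow, the sequence in the current model would be
roughly this (pseudo-code; the names loosely follow the commands
mentioned in this thread and the arguments are elided):

  /* current model, as I understand it */
  resource_create_v2(res_id, ...);           /* allocates host storage */
  memory_create(mem_id, guest_pages, ...);   /* allocates guest storage */
  resource_attach_memory(res_id, mem_id);    /* binds the two together */
  transfer_to_host(res_id, ...);             /* copies guest -> host */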

>
> Hmm, maybe we need a flag indicating that host storage is not needed,
> for resources where we want to establish some kind of shared mapping later
> on.
This makes sense, to support both Vulkan and non-Vulkan models.

This differs from what this patch does, but I think a full-fledged
resource should logically have three components:

- a RESOURCE component that has no storage
- a MEMORY component that provides the storage
- a BACKING component that is used for transfers

resource_attach_backing sets the BACKING component. BACKING always
uses guest pages and supports only transfers into or out of MEMORY.

resource_attach_memory sets the MEMORY component. MEMORY can use host
or guest pages, and must always support GPU operations. When a MEMORY
is mappable in the guest, we can skip BACKING and achieve zero-copy.

resource_create_* could then take a flag indicating whether only
RESOURCE or RESOURCE+MEMORY is created.
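
A rough sketch of what that flag could look like (the flag name and
field layout are made up here, just to illustrate the RESOURCE-only
case; only struct virtio_gpu_ctrl_hdr is the existing header):

  /* hypothetical flag: create only the RESOURCE component, without
   * storage; MEMORY is attached later via resource_attach_memory */
  #define VIRTIO_GPU_RESOURCE_CREATE_STORAGE_NONE (1 << 0)

  struct virtio_gpu_resource_create_v2 {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 resource_id;
          __le32 flags;     /* e.g. STORAGE_NONE for the Vulkan model */
          /* format, size, etc. */
  };

With the flag set we get the Vulkan model (resource_create_v2 +
memory_create + resource_attach_memory); without it we keep the 1:1
resource/storage relationship that OpenGL wants.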


>
> > Do we expect these new commands to be supported by OpenGL, which does not
> > separate resources and memories?
>
> Well, for OpenGL you need a 1:1 relationship between memory region and
> resource.
>
> > > Yes, even though it is not clear yet how we are going to handle
> > > host-allocated buffers in the vhost-user case ...
> >
> > This might be another dumb question, but is this only an issue for the
> > vhost-user(-gpu) case? What mechanisms are used to map a host dma-buf
> > into the guest address space?
>
> qemu can change the address space, that includes mmap()ing stuff there.
> An external vhost-user process can't do this, it can only read the
> address space layout, and read/write from/to guest memory.
I thought the vhost-user process can work with the host-allocated dmabuf
directly. That is,

qemu: injects dmabuf pages into the guest address space
vhost-user: works with the dmabuf
guest: can read/write those pages
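
In qemu terms I was imagining something like this (a rough fragment
using qemu's MemoryRegion API; g would be the device state, error
handling is omitted, and whether the region hangs off a PCI BAR or
lives elsewhere is of course still open):

  /* sketch: expose a host-allocated dma-buf to the guest by mapping
   * it and adding it as a subregion of, say, a PCI BAR */
  void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   dmabuf_fd, 0);
  memory_region_init_ram_ptr(&g->hostmem_mr, OBJECT(g),
                             "virtio-gpu-hostmem", size, ptr);
  memory_region_add_subregion(&g->hostmem_bar, offset, &g->hostmem_mr);

qemu does the mmap(); the vhost-user process would then see those pages
like any other guest memory, provided the new region shows up in the
memory table it receives.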

>
> > But one needs to create the resource first to know which memory types can
> > be attached to it. I think the metadata needs to be returned with
> > resource_create_v2.
>
> There is a resource_info reply for that.
>
> > That should be good enough. But by returning alignments, we can minimize
> > the gaps when attaching multiple resources, especially when the resources
> > are only used by the GPU.
>
> We can add alignments to the resource_info reply.
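
Sounds good. Something along these lines is what I had in mind (all
field names below are hypothetical):

  /* hypothetical resource_info reply, with alignment added */
  struct virtio_gpu_resp_resource_info {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 stride[4];        /* per-plane byte strides */
          __le32 size;             /* required MEMORY size */
          __le32 alignment;        /* required offset alignment */
          __le32 memory_type_mask; /* memory types that can be attached */
  };
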
>
> cheers,
> Gerd
>