Hi,
> Is this only going to support accelerated driver output, not basic VGA
> modes for BIOS interaction?

Right now there is no vgabios or uefi support for the vgpu.

But even with that in place there still is the problem that the display
device initialization happens before the guest runs and therefore there
isn't a plane yet ...
> > plane exists (yet).  Maybe we should have a "bool is_enabled" field
> > in the plane_info struct, so drivers can use that to signal whether
> > the guest has programmed a valid video mode or not (likewise for the
> > cursor, which doesn't exist with fbcon, only when running xorg).
> > With that in place using the QUERY_PLANE ioctl also for probing
> > looks reasonable.
>
> Sure, or -ENOTTY for ioctl not implemented vs -EINVAL for no plane.
> Right now the experimental intel patches throw errors in case no plane
> is available, but then that might not help the user know how a plane
> would be made available if one were available.

So maybe an "enum plane_state" (instead of "bool is_enabled")?  So we
can clearly distinguish the ENABLED, DISABLED and NOT_SUPPORTED cases?
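To make that concrete, here is a minimal sketch of what such an enum
could look like -- the names below are made up for illustration, not a
settled uapi:

    /* hypothetical sketch, not the final uapi */
    enum vfio_plane_state {
        VFIO_PLANE_STATE_NOT_SUPPORTED = 0, /* device never exposes this plane type */
        VFIO_PLANE_STATE_DISABLED      = 1, /* supported, but no valid mode programmed yet */
        VFIO_PLANE_STATE_ENABLED       = 2, /* guest has programmed a valid video mode */
    };
    /* would be reported in a plane_state field of the plane_info struct */

With something like that userspace can tell "no plane yet" apart from
"this device will never have one" without overloading errno values.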
> > Yes, I'd leave that to userspace.  So, when the generation changes
> > userspace knows the guest changed the plane.  It could be a
> > configuration the guest has used before (and where userspace could
> > have a cached dma-buf handle for), or it could be something new.
>
> But userspace also doesn't know that a dmabuf generation will ever be
> visited again, so they're bound to have some stale descriptors.

The kernel wouldn't know either, only the guest knows ...

> Are we thinking userspace would have some LRU list of dmabufs so that
> they don't collect too many?  Each uses some resources, do we just
> rely on open file handles to set an upper limit?

Yep, this is exactly what my qemu patches are doing, keep a LRU list.
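Roughly the idea is something like the sketch below.  This is not the
actual qemu code, just an illustration under the assumption that the
kernel reports some generation/id per guest framebuffer; all names here
are made up:

    /* rough sketch only, not the actual qemu implementation */
    #include <stdint.h>
    #include <unistd.h>

    #define FB_CACHE_SIZE 8

    struct fb_cache_entry {
        uint32_t generation;  /* framebuffer id/generation from the kernel */
        int      dmabuf_fd;   /* -1 == slot unused */
        uint64_t last_used;   /* monotonic counter for LRU ordering */
    };

    static struct fb_cache_entry fb_cache[FB_CACHE_SIZE];
    static uint64_t fb_cache_clock;

    void fb_cache_init(void)
    {
        for (int i = 0; i < FB_CACHE_SIZE; i++)
            fb_cache[i].dmabuf_fd = -1;
    }

    /* Return a cached fd for this generation, or -1 so the caller knows
     * it has to ask the kernel for a new dmabuf and insert it below. */
    int fb_cache_lookup(uint32_t generation)
    {
        for (int i = 0; i < FB_CACHE_SIZE; i++) {
            if (fb_cache[i].dmabuf_fd != -1 &&
                fb_cache[i].generation == generation) {
                fb_cache[i].last_used = ++fb_cache_clock;
                return fb_cache[i].dmabuf_fd;
            }
        }
        return -1;
    }

    /* Insert a new fd, evicting (and closing) the least recently used
     * entry.  Closing the evicted fd is what keeps the number of open
     * dmabuf handles bounded. */
    void fb_cache_insert(uint32_t generation, int dmabuf_fd)
    {
        struct fb_cache_entry *victim = &fb_cache[0];

        for (int i = 1; i < FB_CACHE_SIZE; i++) {
            if (fb_cache[i].last_used < victim->last_used)
                victim = &fb_cache[i];
        }
        if (victim->dmabuf_fd != -1)
            close(victim->dmabuf_fd);
        victim->generation = generation;
        victim->dmabuf_fd  = dmabuf_fd;
        victim->last_used  = ++fb_cache_clock;
    }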
> What happens to existing dmabuf fds when the generation updates, do
> they stop refreshing?

Depends on what the guest is doing ;)

The dma-buf is just a host-side handle for the piece of video memory
where the guest stored the framebuffer.

> So the resources the user is holding if they don't release their
> dmabuf are potentially non-trivial.

Not really.  Host IGD has a certain amount of memory, some of it is
assigned to the guest, the guest stores the framebuffer there, and the
dma-buf is a host handle (drm object, usable for rendering ops) for the
guest framebuffer.  So it doesn't use many resources.  Some memory is
needed for management structs, but not for the actual data as that is
in the video memory dedicated to the guest.
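As an illustration of "usable for rendering ops": a host process could
import such an fd with the standard EGL_EXT_image_dma_buf_import path
and bind it to a GL texture for display.  This is generic EGL usage,
not the actual qemu code, and it assumes an already initialized
EGLDisplay/GL context plus the size, stride and fourcc reported for the
plane:

    /* sketch only; error handling omitted */
    #include <stdint.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint import_dmabuf_texture(EGLDisplay dpy, int fd,
                                 int width, int height,
                                 int stride, uint32_t fourcc)
    {
        PFNEGLCREATEIMAGEKHRPROC create_image =
            (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

        const EGLint attrs[] = {
            EGL_WIDTH,                     width,
            EGL_HEIGHT,                    height,
            EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)fourcc,
            EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
            EGL_NONE
        };

        /* Wrap the dma-buf in an EGLImage; no copy of the guest
         * framebuffer is made, the image references the guest's video
         * memory directly. */
        EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                         EGL_LINUX_DMA_BUF_EXT,
                                         NULL, attrs);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        image_target_texture(GL_TEXTURE_2D, image);
        return tex;
    }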
> > Ok, if we want to support multiple regions.  Do we?  Using the
> > offset we can place multiple planes in a single region.  And I'm not
> > sure nvidia plans to use multiple planes in the first place ...
>
> I don't want to take a driver ioctl interface as a throw away, one
> time use exercise.  If we can think of such questions now, let's
> define how they work.  A device could have multiple graphics regions
> with multiple planes within each region.

I'd suggest to settle for one of these two: either one region with
multiple planes inside (using offset), or one region per plane.  I'd
prefer the former.  When going for the latter then yes, we have to
specify the region.  I'd name the field region_id then, to make clear
what it is.

What would be the use case for multiple planes?

cursor support?  We already have plane_type for that.

multihead support?  We'll need (at minimum) a head_id field for that
(for both dma-buf and region).

pageflipping support?  Nothing needed, query_plane will simply return
the currently visible plane.  The region only needs to be big enough to
fit the framebuffer twice, then the driver can flip between two buffers
and point to the one qemu should display using "offset".  (A rough
sketch of how these fields could fit together follows below.)
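Purely to make the options above easier to compare, here is one
possible way those bits could sit in the plane info struct.  Only
plane_type, head_id, region_id, offset and the plane_state idea come
from this thread; the geometry/format fields and all names are
assumptions, not a proposal for the final layout:

    /* hypothetical layout, for discussion only */
    #include <linux/types.h>

    struct vfio_plane_info {
        __u32 argsz;
        __u32 flags;
        __u32 plane_type;   /* primary vs cursor */
        __u32 plane_state;  /* see the enum sketched earlier */
        __u32 head_id;      /* which virtual head, for multihead setups */
        __u32 width, height;
        __u32 stride;
        __u32 drm_format;   /* fourcc */
        __u32 region_id;    /* only needed with one region per plane */
        __u64 offset;       /* plane offset inside the region; this is
                             * also what moves on a pageflip inside a
                             * region sized for two framebuffers */
    };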
> Do we also want to exclude that a device needs to be strictly region
> or dmabuf?  Maybe it does both.

Very unlikely IMHO.

> Or maybe it supports dmabuf-ng (i.e. whatever comes next).

Possibly happens some day, but who knows what interfaces we'll need to
support that ...
> We don't have an infinite number of ioctls.
>
>     vfio_device_query {
>         u32 argsz;
>         u32 flags;
>         enum query_type; /* or use flags for that */

The limited ioctl number space is a good reason indeed.  Ok, let's take
this route then.
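For completeness, a rough sketch of what such a multiplexed query could
look like -- the names and layout here are guesses to illustrate the
idea of one ioctl with a type selector rather than one ioctl per query,
not an agreed-upon ABI:

    /* illustration only */
    #include <linux/types.h>

    #define VFIO_DEVICE_QUERY_TYPE_GFX_PLANE  1
    #define VFIO_DEVICE_QUERY_TYPE_GFX_DMABUF 2
    /* further query types can be added without burning new ioctl numbers */

    struct vfio_device_query {
        __u32 argsz;
        __u32 flags;
        __u32 query_type;  /* selects which payload follows */
        __u32 pad;
        __u8  data[];      /* type-specific payload, e.g. plane info */
    };

Userspace would set query_type, size argsz to cover the payload it
expects, and the kernel fills in the type-specific data or returns an
error for unknown types.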
cheers,
Gerd
_______________________________________________
intel-gvt-dev mailing list
intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev