Re: Support for 2D engines/blitters in V4L2 and DRM
From: Nicolas Dufresne
Date: Wed Apr 24 2019 - 11:44:42 EST
On Wednesday, April 24 2019 at 17:06 +0200, Daniel Vetter wrote:
> On Wed, Apr 24, 2019 at 4:41 PM Paul Kocialkowski
> <paul.kocialkowski@xxxxxxxxxxx> wrote:
> > Hi,
> >
> > On Wed, 2019-04-24 at 16:39 +0200, Michel Dänzer wrote:
> > > On 2019-04-24 2:01 p.m., Nicolas Dufresne wrote:
> > > > On Wednesday, April 24 2019 at 10:31 +0200, Michel Dänzer wrote:
> > > > > On 2019-04-19 10:38 a.m., Paul Kocialkowski wrote:
> > > > > > On Thu, 2019-04-18 at 20:30 -0400, Nicolas Dufresne wrote:
> > > > > > > On Thursday, April 18 2019 at 10:18 +0200, Daniel Vetter wrote:
> > > > > > > In the first, we'd need a mechanism where we can schedule a render at a
> > > > > > > specific time or vblank. We can of course already implement this in
> > > > > > > software, but with fences, the scheduling would need to be done in the
> > > > > > > driver. Then if the fence is signalled earlier, the driver should hold
> > > > > > > on until the delay is met. If the fence got signalled late, we also
> > > > > > > need to think of a workflow. As we can't schedule more than one render
> > > > > > > in DRM at one time, I don't really see yet how to make that work.
> > > > > >
> > > > > > Indeed, that's also one of the main issues I've spotted. Before using
> > > > > > an implicit fence, we basically have to make sure the frame is due for
> > > > > > display at the next vblank. Otherwise, we need to refrain from using
> > > > > > the fence and schedule the flip later, which is kind of counter-
> > > > > > productive.
> > > > >
> > > > > Fences are about signalling that the contents of a frame are "done" and
> > > > > ready to be presented. They're not about specifying which frame is to be
> > > > > presented when.
> > > > >
> > > > >
> > > > > > I feel like specifying a target vblank would be a good unit for that,
> > > > >
> > > > > The mechanism described above works for that.
> > > > >
> > > > > > since it's our native granularity after all (while a timestamp is not).
> > > > >
> > > > > Note that variable refresh rate (Adaptive Sync / FreeSync / G-Sync)
> > > > > changes things in this regard. It makes the vblank length variable, and
> > > > > if you wait for multiple vblanks between flips, you get the maximum
> > > > > vblank length corresponding to the minimum refresh rate / timing
> > > > > granularity. Thus, it would be useful to allow userspace to specify a
> > > > > timestamp corresponding to the earliest time when the flip is to
> > > > > complete. The kernel could then try to hit that as closely as possible.
> > > >
> > > > Rendering a video stream is more complex than what you describe here.
> > > > Whenever there is an unexpected delay (late delivery of a frame, for
> > > > example) you may end up in a situation where one frame is ready after the
> > > > targeted vblank. If there is another frame that targets the following
> > > > vblank that gets ready on-time, the previous frame should be replaced
> > > > by the most recent one.
> > > >
> > > > With fences, what happens is that even if you received the next frame
> > > > on time, naively replacing it is not possible, because we don't know
> > > > when the fence for the next frame will be signalled. If you simply
> > > > always replace the current frame, you may end up skipping a lot more
> > > > vblanks than what you expect, and that results in jumpy playback.
> > >
> > > So you want to be able to replace a queued flip with another one then.
> > > That doesn't necessarily require allowing more than one flip to be
> > > queued ahead of time.
> >
> > There might be other ways to do it, but this one has plenty of
> > advantages.
>
> The point of kms (well one of the reasons) was to separate the
> implementation of modesetting for specific hw from policy decisions
> like which frames to drop and how to schedule them. Kernel gives
> tools, userspace implements the actual protocols.
>
> There's definitely a bit of a gap around scheduling flips for a specific
> frame or allowing to cancel/overwrite an already scheduled flip, but
> no one yet has come up with a clear proposal for new uapi + example
> implementation + userspace implementation + big enough support from
> other compositors that this is what they want too.
>
> And yes writing a really good compositor is really hard, and I think a
> lot of people underestimate that and just create something useful for
> their niche. If userspace can't come up with a shared library of
> helpers, I don't think baking it in as kernel uapi with 10+ years
> regression free api guarantees is going to make it any better.
>
> > > Note that this can also be done in userspace with explicit fencing (by
> > > only selecting a frame and submitting it to the kernel after all
> > > corresponding fences have signalled), at least to some degree, but the
> > > kernel should be able to do it up to a later point in time and more
> > > reliably, with less risk of missing a flip for a frame which becomes
> > > ready just in time.
> >
> > Indeed, but it would be great if we could do that with implicit fencing
> > as well.
>
> 1. extract implicit fences from dma-buf. This part is just an idea,
> but easy to implement once we have someone who actually wants this.
> All we need is a new ioctl on the dma-buf to export the fences from
> the reservation_object as a sync_file (either the exclusive or the
> shared ones, selected with a flag).
> 2. do the exact same frame scheduling as with explicit fencing
> 3. supply explicit fences in your atomic ioctl calls - these should
> overrule any implicit fences (assuming correct kernel drivers, but we
> have helpers so you can assume they all work correctly).
>
> By design this is possible, it's just that no one yet bothered enough
> to make it happen.
> -Daniel
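If I read step 3 correctly, on the KMS side it would look roughly like
the sketch below, using libdrm's atomic API. This is only a sketch: the
property ids are assumed to have been looked up beforehand, fence_fd is
assumed to be a sync_file fd obtained from the producer, and error
handling is omitted.

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Queue a non-blocking flip of fb_id on plane_id, gated on fence_fd.
 * fb_id_prop and in_fence_fd_prop are the ids of the plane's "FB_ID"
 * and "IN_FENCE_FD" properties, looked up beforehand. */
static int queue_flip_with_fence(int drm_fd, uint32_t plane_id,
				 uint32_t fb_id_prop, uint32_t fb_id,
				 uint32_t in_fence_fd_prop, int fence_fd)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	/* New framebuffer for the plane. */
	drmModeAtomicAddProperty(req, plane_id, fb_id_prop, fb_id);

	/* Explicit fence: the kernel waits for this sync_file before
	 * performing the flip, overriding the implicit fences attached
	 * to the framebuffer's dma-bufs. */
	drmModeAtomicAddProperty(req, plane_id, in_fence_fd_prop, fence_fd);

	ret = drmModeAtomicCommit(drm_fd, req,
				  DRM_MODE_ATOMIC_NONBLOCK |
				  DRM_MODE_PAGE_FLIP_EVENT, NULL);
	drmModeAtomicFree(req);
	return ret;
}
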
I'm not sure I understand the workflow of this one. I'm all in favour
of leaving the hard work to userspace. Note that I have assumed explicit
fences from the start; I don't think implicit fences will ever exist in
V4L2, but I might be wrong. What I understood is that there was a
previous attempt in the past, but it raised more issues than it actually
solved. So that being said, how do we handle exactly the following use
cases:
- A frame was lost by the capture driver, but it was scheduled as the
next buffer to render (normally the previous frame should remain).
- The scheduled frame is late for the next vblank (it didn't signal on
time); a new one may be better for the next vblank, but we will only
know when its fence is signalled.
Better in this context means that the presentation time of this frame is
closer to the next vblank time. Keep in mind that the idea is to
schedule the frames before they are signalled, in order to make the
fence useful in lowering the latency. Of course, as Michel said, we
could just always wait on the fence and then schedule. But if you do
that, why would you care about implementing fences in V4L2 to start
with? DQBUF does just that already.
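To make that concrete, the decision I'd like to keep in userspace (or
would have to delegate to the kernel) is roughly the following. This is
only a sketch: struct frame, pts_ns and next_vblank_ns are made up for
the example, and fence_fd is a sync_file fd, which becomes readable
(POLLIN) once it has signalled.

#include <poll.h>
#include <stdint.h>

struct frame {
	int fence_fd;		/* sync_file from the producer */
	uint64_t pts_ns;	/* intended presentation time */
};

/* Non-blocking test of whether a sync_file has signalled. */
static int fence_signalled(int fence_fd)
{
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	return poll(&pfd, 1, 0) > 0;
}

/* Pick the newest frame that is both due for the next vblank and whose
 * contents are ready; return -1 to keep the currently displayed frame. */
static int pick_frame(const struct frame *queue, int count,
		      uint64_t next_vblank_ns)
{
	int best = -1;

	for (int i = 0; i < count; i++) {
		if (queue[i].pts_ns > next_vblank_ns)
			break;
		if (fence_signalled(queue[i].fence_fd))
			best = i;
	}

	return best;
}
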
Note that this has nothing to do with the valid use case where you
would want to apply various transformations (m2m or gpu) on the capture
buffer. You still gain from the fence in that context, even if you wait
in userspace on the fence before display. This alone is likely enough
to justify using fences.
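For completeness, "waiting in userspace on the fence" really is just
this (a sketch; a real player would use a timeout and fold the wait into
its event loop before queuing the buffer for display):

#include <poll.h>

/* Block until the producer's sync_file fence has signalled. */
static void wait_for_fence(int fence_fd)
{
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	poll(&pfd, 1, -1);
}
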
>
> > > > Render queues with timestamp are used to smooth rendering and handle
> > > > rendering collision so that the latency is kept low (like when you have
> > > > a 100fps video over a 60Hz display). This is normally done in
> > > > userspace, but with fences, you ask the kernel to render something in
> > > > an unpredictable future, so we lose the ability to make the final
> > > > decision.
> > >
> > > That's just not what fences are intended to be used for with the current
> > > KMS UAPI.
> >
> > Yes, and I think we're discussing towards changing that in the future.
> >
> > Cheers,
> >
> > Paul
> >
> > --
> > Paul Kocialkowski, Bootlin
> > Embedded Linux and kernel engineering
> > https://bootlin.com
> >
>
>