Re: [PATCHv4 0/2] Document memory-to-memory video codec interfaces
From: Nicolas Dufresne
Date: Wed Jul 03 2019 - 13:07:40 EST
On Wednesday, July 3rd, 2019 at 18:04 +0900, Tomasz Figa wrote:
> On Wed, Jun 5, 2019 at 12:19 AM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> > On Monday, June 3rd, 2019 at 13:28 +0200, Hans Verkuil wrote:
> > > Since Tomasz was very busy with other things, I've taken over this
> > > patch series. This v4 includes his draft changes and additional changes
> > > from me.
> > >
> > > This series attempts to add the documentation of what was discussed
> > > during Media Workshops at LinuxCon Europe 2012 in Barcelona and then
> > > later Embedded Linux Conference Europe 2014 in Düsseldorf and then
> > > eventually written down by Pawel Osciak and tweaked a bit by Chrome OS
> > > video team (but mostly in a cosmetic way or making the document more
> > > precise), during the several years of Chrome OS using the APIs in
> > > production.
> > >
> > > Note that most, if not all, of the API is already implemented in
> > > existing mainline drivers, such as s5p-mfc or mtk-vcodec. The intention of
> > > this series is just to formalize what we already have.
> > >
> > > Thanks everyone for the huge amount of useful comments to previous
> > > versions of this series. Much of the credit should go to Pawel Osciak
> > > too, for writing most of the original text of the initial RFC.
> > >
> > > This v4 incorporates all known comments (let me know if I missed
> > > something!) and should be complete for the decoder.
> > >
> > > For the encoder there are two remaining TODOs for the API:
> > >
> > > 1) Setting the frame rate so that bitrate control can make sense,
> > > since the driver's rate control needs to know this information.
> > >
> > > Suggested solution: require support for ENUM_FRAMEINTERVALS for the
> > > coded pixelformats and S_PARM(OUTPUT). Open question: some drivers
> > > (mediatek, hva, coda) require S_PARM(OUTPUT), some (venus) allow both
> > > S_PARM(CAPTURE) and S_PARM(OUTPUT). I am inclined to allow both since
> > > this is not a CAPTURE vs OUTPUT thing, it is global to both queues.
> >
> > I agree, as long as it's documented. I can imagine how this could be
> > confusing for new users.
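
Just to make the expected flow concrete, here is a minimal, untested
sketch of setting the frame interval through S_PARM on the OUTPUT queue
(the function name and the 30 fps value are only illustrative):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Tell the encoder the nominal frame rate (here 30 fps) via
 * S_PARM(OUTPUT) so that bitrate control has the information it needs.
 * Whether S_PARM(CAPTURE) should also be accepted is the open question
 * above. */
static int set_frame_interval(int fd)
{
        struct v4l2_streamparm parm;

        memset(&parm, 0, sizeof(parm));
        parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        parm.parm.output.timeperframe.numerator = 1;
        parm.parm.output.timeperframe.denominator = 30;

        return ioctl(fd, VIDIOC_S_PARM, &parm);
}
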
> >
> > > 2) Interactions between OUTPUT and CAPTURE formats.
> > >
> > > The main problem is what to do if the capture sizeimage is too small
> > > for the OUTPUT resolution when streaming starts.
> > >
> > > Proposal: width and height of S_FMT(OUTPUT) are used to
> > > calculate a minimum sizeimage (app may request more). This is
> > > driver-specific.
> > >
> > > V4L2_FMT_FLAG_FIXED_RESOLUTION is always set for codec formats
> > > for the encoder (i.e. we don't support mid-stream resolution
> > > changes for now) and V4L2_EVENT_SOURCE_CHANGE is not
> > > supported. See https://patchwork.linuxtv.org/patch/56478/ for
> > > the patch adding this flag.
> > >
> > > Of course, if we start to support mid-stream resolution
> > > changes (or other changes that require a source change event),
> > > then this flag should be dropped by the encoder driver, and how to
> > > handle the source change event should be documented in the encoder
> > > spec. I prefer to postpone this until we have an encoder that can
> > > actually do mid-stream resolution changes.
> > >
> > > If the sizeimage of the CAPTURE is too small for the OUTPUT
> > > resolution and V4L2_EVENT_SOURCE_CHANGE is not supported,
> > > then the second STREAMON (either CAPTURE or OUTPUT) will
> > > return -ENOMEM since there is not enough memory to do the
> > > encode.
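
For reference, a rough sketch of the negotiation proposed above, using
the multi-planar API; the pixel format, error handling and function
name are only placeholders:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Set the raw resolution on the OUTPUT queue, then read back the coded
 * sizeimage the driver derived for the CAPTURE queue.  The application
 * may bump that value with S_FMT(CAPTURE) before allocating buffers,
 * but not shrink it below the driver minimum. */
static int negotiate_encoder_formats(int fd, unsigned int width,
                                     unsigned int height)
{
        struct v4l2_format out, cap;

        memset(&out, 0, sizeof(out));
        out.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        out.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
        out.fmt.pix_mp.width = width;
        out.fmt.pix_mp.height = height;
        if (ioctl(fd, VIDIOC_S_FMT, &out))
                return -1;

        memset(&cap, 0, sizeof(cap));
        cap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(fd, VIDIOC_G_FMT, &cap))
                return -1;

        /* cap.fmt.pix_mp.plane_fmt[0].sizeimage now holds the driver's
         * minimum coded buffer size for this resolution. */
        return 0;
}
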
> >
> > You seem confident that we will know immediately if it's too small. But
> > what I remember is that the HW has an interrupt for this, allowing
> > userspace to allocate a larger buffer and resume.
> >
> > Should we make the capture queue independent of the streaming state, so
> > that we can streamoff/reqbufs/.../streamon to resume from an ENOMEM
> > error? And shouldn't ENOMEM be returned by the following capture DQBUF
> > when such an interrupt is raised?
> >
>
> The idea was that stopping the CAPTURE queue would reset the encoder,
> i.e. start encoding a new, independent stream after the streaming
> starts again. Still, given that one would normally only need to
> reallocate the buffers on some significant stream parameter change,
> and such a change would require emitting all the relevant headers
> anyway, it probably doesn't break anything?
The capture buffer size is a prediction, so even without any parameter
changes, the size could become insufficient. On the other hand, we have
managed to predict it quite well so far in many applications.

Note that I didn't remember that streamoff on the encoder CAPTURE queue
was the one triggering the reset. In GStreamer, I always streamoff both
queues, so I never had to think about this. One thing is clear though:
it will be really hard to extend this later with such a hard
relationship between allocation, streaming state and encoder state. I'm
sure we can survive it; there are probably plenty of workarounds,
including spreading encoded data across multiple buffers as Hans
suggested.
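
To illustrate the kind of resume sequence I have in mind, here is a
hypothetical sketch; the ioctls are the standard ones, but whether the
encoder state survives the CAPTURE streamoff is exactly the open
question:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* On a "coded buffer too small" error, stop the CAPTURE queue, free the
 * old buffers, reallocate larger ones and restart the queue.  With the
 * current proposal this also resets the encoder state, which is the
 * coupling between allocation, streaming state and encoder state
 * mentioned above. */
static int grow_capture_buffers(int fd, unsigned int count)
{
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        struct v4l2_requestbuffers req;

        if (ioctl(fd, VIDIOC_STREAMOFF, &type))
                return -1;

        memset(&req, 0, sizeof(req));
        req.type = type;
        req.memory = V4L2_MEMORY_MMAP;
        req.count = 0;                  /* free the old buffers */
        if (ioctl(fd, VIDIOC_REQBUFS, &req))
                return -1;

        /* ... raise sizeimage with S_FMT(CAPTURE) here ... */

        req.count = count;              /* allocate larger buffers */
        if (ioctl(fd, VIDIOC_REQBUFS, &req))
                return -1;

        /* ... mmap and queue the new buffers ... */

        return ioctl(fd, VIDIOC_STREAMON, &type);
}
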
>
> Best regards,
> Tomasz