Re: [PATCH 10/10] venus: dec: make decoder compliant with stateful codec API

From: Tomasz Figa
Date: Tue Feb 05 2019 - 04:31:27 EST

On Tue, Feb 5, 2019 at 6:00 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> On 2/5/19 7:26 AM, Tomasz Figa wrote:
> > On Fri, Feb 1, 2019 at 12:18 AM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> >>
> >> On Thursday, January 31, 2019 at 22:34 +0900, Tomasz Figa wrote:
> >>> On Thu, Jan 31, 2019 at 9:42 PM Philipp Zabel <p.zabel@xxxxxxxxxxxxxx> wrote:
> >>>> Hi Nicolas,
> >>>>
> >>>> On Wed, 2019-01-30 at 10:32 -0500, Nicolas Dufresne wrote:
> >>>>> On Wednesday, January 30, 2019 at 15:17 +0900, Tomasz Figa wrote:
> >>>>>>> I don't remember saying that, maybe I meant to say there might be a
> >>>>>>> workaround ?
> >>>>>>>
> >>>>>>> For the fact, here we queue the headers (or first frame):
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Then few line below this helper does G_FMT internally:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> And just plainly fails if G_FMT returns an error of any type. This was
> >>>>>>> how Kamil designed it initially for MFC driver. There was no other
> >>>>>>> alternative back then (no EAGAIN yet either).
> >>>>>>
> >>>>>> Hmm, was that ffmpeg then?
> >>>>>>
> >>>>>> So would it just set the OUTPUT width and height to 0? Does it mean
> >>>>>> that gstreamer doesn't work with coda and mtk-vcodec, which don't have
> >>>>>> such wait in their g_fmt implementations?
> >>>>>
> >>>>> I don't know about MTK; I don't have the hardware and didn't
> >>>>> integrate their vendor pixel format. For CODA, I know it works; if
> >>>>> there is no wait in G_FMT, then I suppose we are being really lucky
> >>>>> with the timing (it would mean that the driver processes the SPS/PPS
> >>>>> synchronously, and a simple lock in the G_FMT call is enough to
> >>>>> wait). Adding Philipp in CC; he can explain how this works. I know
> >>>>> they use GStreamer in production, and he would have fixed GStreamer
> >>>>> already if this was causing an important issue.
> >>>>
> >>>> CODA predates the width/height=0 rule on the coded/OUTPUT queue.
> >>>> It currently behaves more like a traditional mem2mem device.
> >>>
> >>> The rule in the latest spec is that if width/height is 0 then CAPTURE
> >>> format is determined only after the stream is parsed. Otherwise it's
> >>> instantly deduced from the OUTPUT resolution.
> >>>
> >>>> When width/height is set via S_FMT(OUT) or output crop selection, the
> >>>> driver will believe it and set the same (rounded up to macroblock
> >>>> alignment) on the capture queue without ever having seen the SPS.
> >>>
> >>> That's why I asked whether gstreamer sets width and height of OUTPUT
> >>> to non-zero values. If so, there is no regression, as the specs mimic
> >>> the coda behavior.
> >>
> >> I see; Philipp's answer explains why it works. Note that GStreamer
> >> sets the display size on the OUTPUT format (in fact we pass as much
> >> information as we have, because a) it's generic code and b) it will
> >> be needed someday when we enable pre-allocation: REQBUFS before the
> >> SPS/PPS is passed, to avoid the setup delay introduced by allocation,
> >> mostly seen with CMA-based decoders). In any case, the driver-reported
> >> display size should always be ignored in GStreamer; the only
> >> information we look at is G_SELECTION, for the case where the x/y
> >> offset or the cropping rectangle is non-zero.
> >>
> >> Note this can only work if the capture queue is not affected by the
> >> coded size, or if the round-up made by the driver is greater than or
> >> equal to that coded size. I believe CODA falls into the first
> >> category, since decoding happens into a separate set of buffers,
> >> which are then de-tiled into the capture buffers (if I understood
> >> correctly).
> >
> > Sounds like it would work only if coded size is equal to the visible
> > size (that GStreamer sets) rounded up to full macroblocks. Non-zero x
> > or y in the crop could be problematic too.
> >
> > Hans, what's your view on this? Should we require G_FMT(CAPTURE) to
> > wait until a format becomes available or the OUTPUT queue runs out of
> You mean CAPTURE queue? If not, then I don't understand that part.

No, I exactly meant the OUTPUT queue. The behavior of s5p-mfc in case
of the format not being detected yet is to wait for any pending
bitstream buffers to be processed by the decoder before returning an
error.

> > buffers?
> First see my comment here regarding G_FMT returning an error:
> In my view that is a bad idea.

I don't like it either; it seemed to be the most consistent and
compatible behavior, but I'm not sure anymore.

> What G_FMT should return between the time a resolution change was
> detected and the CAPTURE queue being drained (i.e. the old or the new
> resolution?) is something I am not sure about.

Note that we're talking here about the initial stream information
detection, when the driver doesn't have any information needed to
determine the CAPTURE format yet.

> On the one hand it is desirable to have the new resolution asap, on
> the other hand, returning the new resolution would mean that the
> returned format is inconsistent with the capture buffer sizes.
> I'm leaning towards either returning the new resolution.

Is the "or ..." part of the sentence missing?

One of the major concerns was that we needed to completely stall the
pipeline in case of a resolution change, which made it hard to deliver
a seamless transition to the users. An idea that comes to my mind
would be extending the source change event to actually include the
v4l2_format struct describing the new format. Then the CAPTURE queue
could keep the old format until it is drained, which should work fine
for existing applications, while the new ones could use the new event
data to determine if the buffers need to be reallocated.

<pipe dream>Ideally we would have all the metadata, including formats,
unified into a single property (or control) -like interface and tied
to buffers using Request API...</pipe dream>

Best regards,
Tomasz