Re: [PATCH] media: docs: dev-decoder: add restrictions about CAPTURE buffers
From: Nicolas Dufresne
Date: Mon Nov 01 2021 - 10:45:44 EST
On Friday, October 29, 2021 at 12:04 +0900, Tomasz Figa wrote:
> On Tue, Oct 26, 2021 at 11:12 PM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> >
> > On Monday, October 18, 2021 at 18:14 +0900, Alexandre Courbot wrote:
> > > CAPTURE buffers might be read by the hardware after they are dequeued,
> > > which goes against the general idea that userspace has full control over
> > > dequeued buffers. Explain why and document the restrictions that this
> > > implies for userspace.
> > >
> > > Signed-off-by: Alexandre Courbot <acourbot@xxxxxxxxxxxx>
> > > ---
> > > .../userspace-api/media/v4l/dev-decoder.rst | 17 +++++++++++++++++
> > > 1 file changed, 17 insertions(+)
> > >
> > > diff --git a/Documentation/userspace-api/media/v4l/dev-decoder.rst b/Documentation/userspace-api/media/v4l/dev-decoder.rst
> > > index 5b9b83feeceb..3cf2b496f2d0 100644
> > > --- a/Documentation/userspace-api/media/v4l/dev-decoder.rst
> > > +++ b/Documentation/userspace-api/media/v4l/dev-decoder.rst
> > > @@ -752,6 +752,23 @@ available to dequeue. Specifically:
> > > buffers are out-of-order compared to the ``OUTPUT`` buffers): ``CAPTURE``
> > > timestamps will not retain the order of ``OUTPUT`` timestamps.
> > >
> > > +.. note::
> > > +
> > > + The backing memory of ``CAPTURE`` buffers that are used as reference frames
> > > + by the stream may be read by the hardware even after they are dequeued.
> > > + Consequently, the client should avoid writing into this memory while the
> > > + ``CAPTURE`` queue is streaming. Failure to observe this may result in
> > > + corruption of decoded frames.
> > > +
> > > + Similarly, when using a memory type other than ``V4L2_MEMORY_MMAP``, the
> > > + client should make sure that each ``CAPTURE`` buffer is always queued with
> > > + the same backing memory for as long as the ``CAPTURE`` queue is streaming.
> > > + The reason for this is that V4L2 buffer indices can be used by drivers to
> > > + identify frames. Thus, if the backing memory of a reference frame is
> > > + submitted under a different buffer ID, the driver may misidentify it and
> > > + decode a new frame into it while it is still in use, resulting in corruption
> > > + of the following frames.
> > > +
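To illustrate the second restriction for V4L2_MEMORY_DMABUF users, here is a
minimal sketch of a compliant CAPTURE re-queue. The helper name, video_fd and
capture_dmabuf_fd[] are illustrative only (and a single-plane format is
assumed); the point is simply that a given buffer index is always queued with
the fd it was first associated with:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Requeue CAPTURE buffer 'index' on a stateful decoder using DMABUF memory.
 * capture_dmabuf_fd[] is fixed at allocation time, so a given index is always
 * queued with the same backing memory while the CAPTURE queue is streaming. */
static int requeue_capture(int video_fd, unsigned int index,
			   const int *capture_dmabuf_fd)
{
	struct v4l2_plane plane;
	struct v4l2_buffer buf;

	memset(&plane, 0, sizeof(plane));
	memset(&buf, 0, sizeof(buf));

	/* Same fd that was bound to this index when buffers were allocated. */
	plane.m.fd = capture_dmabuf_fd[index];

	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	buf.memory = V4L2_MEMORY_DMABUF;
	buf.index = index;
	buf.m.planes = &plane;
	buf.length = 1;	/* single plane assumed */

	return ioctl(video_fd, VIDIOC_QBUF, &buf);
}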
> >
> > I think this is a nice addition, but insufficient. We should extend the API
> > with a flag that lets applications know whether the buffers are reference or
> > secondary. For context, we have a mix of codecs whose drivers output usable
> > reference frames that need careful manipulation, and many other drivers where
> > the buffers *may be* secondary, meaning they may have been post-processed and
> > modifying them in place may have no impact.
> >
> > The problem is the "may": it depends on the chosen CAPTURE format. I believe
> > we should flag this, and the flag should be set by the driver on the CAPTURE
> > queue. The information is known after S_FMT, so a format flag, a REQBUFS
> > capability or QUERYBUF flags are candidates. I think buffer flags are the
> > best fit, even though we don't expect this to differ per buffer; userspace
> > needs to call QUERYBUF for every buffer anyway in order to export or map them.
> >
> > What userspace can do with this is to export the DMABuf as read-only, and
> > flag this internally in its own context. This helps avoid the unwanted side
> > effects described here.
>
> I agree with the idea of having a way for the kernel to tell userspace the
> exact state of the buffer, but right now the unstated expectation of the
> kernel is exactly what this patch documents. If one wants their userspace to
> be portable across different decoders, they need to keep that assumption. So
> the natural way to go here is to stay safe by default and have a flag that
> tells userspace that the buffer can be freely reused.
On the V4L2 side, this is what I am asking for: a flag to signal that the
buffer can be freely reused (i.e. is secondary). The last part was an example
of what a userland that cares about robustness can do with it.
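
To make the robustness part concrete, this is roughly what I have in mind on
the userspace side, using the existing VIDIOC_EXPBUF flags (the helper name
and the single-plane assumption are mine, not part of any proposal):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Export plane 0 of CAPTURE buffer 'index' as a read-only dmabuf, so that
 * importers cannot write over memory the decoder may still be using as a
 * reference frame. Returns the dmabuf fd, or -1 on error. */
static int export_capture_readonly(int video_fd, unsigned int index)
{
	struct v4l2_exportbuffer expbuf;

	memset(&expbuf, 0, sizeof(expbuf));
	expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	expbuf.index = index;
	expbuf.plane = 0;
	expbuf.flags = O_RDONLY | O_CLOEXEC;

	if (ioctl(video_fd, VIDIOC_EXPBUF, &expbuf) < 0)
		return -1;

	return expbuf.fd;
}

With the flag I am proposing, a client could skip this and export read-write
when the driver reports that the buffers are secondary.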
>
> Best regards,
> Tomasz