Re: [PATCH 1/2] [RESEND] media: v4l2-mem2mem: allow device run without buf

From: Sebastian Fricke
Date: Wed Jul 12 2023 - 05:44:38 EST


Hey Tomasz,

On 12.07.2023 09:31, Tomasz Figa wrote:
On Fri, Jul 07, 2023 at 03:14:23PM -0400, Nicolas Dufresne wrote:
Hi Randy,

On Tuesday, July 4, 2023 at 12:00 +0800, Hsia-Jun Li wrote:
> From: Randy Li <ayaka@xxxxxxxxxxx>
>
> For decoders that support Dynamic Resolution Change,
> we don't need to allocate any CAPTURE or graphics buffers
> for them at the initial CAPTURE setup step.
>
> We need to make the device run, or we can't get that
> metadata.
>
> Signed-off-by: Randy Li <ayaka@xxxxxxxxxxx>
> ---
> drivers/media/v4l2-core/v4l2-mem2mem.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
> index 0cc30397fbad..c771aba42015 100644
> --- a/drivers/media/v4l2-core/v4l2-mem2mem.c
> +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
> @@ -301,8 +301,9 @@ static void __v4l2_m2m_try_queue(struct v4l2_m2m_dev *m2m_dev,
>
> dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx);
>
> - if (!m2m_ctx->out_q_ctx.q.streaming
> - || !m2m_ctx->cap_q_ctx.q.streaming) {
> + if (!(m2m_ctx->out_q_ctx.q.streaming || m2m_ctx->out_q_ctx.buffered)
> + || !(m2m_ctx->cap_q_ctx.q.streaming
> + || m2m_ctx->cap_q_ctx.buffered)) {

I have two patches with similar goals in my wave5 tree. It will be easier to
upstream with an actual user, though; I'm probably a month or two away from
submitting this driver again.

https://gitlab.collabora.com/chipsnmedia/kernel/-/commit/ac59eafd5076c4deb3bfe1fb85b3b776586ef3eb
https://gitlab.collabora.com/chipsnmedia/kernel/-/commit/5de4fbe0abb20b8e8d862b654f93e3efeb1ef251


While I'm not going to NAK this series or those 2 patches if you send
them, I'm not really convinced that adding more and more complexity to
the mem2mem helpers is a good idea, especially since all of those seem
to be needed only by stateful video decoders.

The mem2mem framework started as a set of helpers to eliminate boilerplate
from simple drivers that always get 1 CAPTURE and 1 OUTPUT buffer, run 1
processing job on them and then return both of them to userspace, and I
think it should stay like this.
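
(As an aside, a minimal sketch of that 1 OUTPUT + 1 CAPTURE job model;
the my_* names and the context layout below are made up for illustration,
only the v4l2_m2m_* helpers are real API:)

static void my_device_run(void *priv)
{
        struct my_ctx *ctx = priv;
        struct vb2_v4l2_buffer *src, *dst;

        /* one OUTPUT and one CAPTURE buffer per job */
        src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
        dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);

        my_hw_start(ctx, src, dst);     /* kick the hardware */
}

/* completion path, e.g. from the IRQ handler */
static void my_job_done(struct my_ctx *ctx)
{
        struct vb2_v4l2_buffer *src, *dst;

        src = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
        dst = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);

        /* both buffers go straight back to userspace */
        v4l2_m2m_buf_done(src, VB2_BUF_STATE_DONE);
        v4l2_m2m_buf_done(dst, VB2_BUF_STATE_DONE);
        v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
}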

I think we're strongly in need of a stateful video decoder framework that
would actually address the exact problems that those have rather than
bending something that wasn't designed with them in mind to work around the
differences.

Thanks for the feedback.

I have recently discussed with Hans Verkuil and Nicolas Dufresne how we
could approach creating a framework for the codec side.

The first step would be to come up with a list of requirements for that
framework and expected future needs; maybe we can start a public
discussion on the mailing list to generate such a list.
But all in all, this endeavor will probably require quite a bit of time
and effort. Do you think we could modify M2M a bit for our use case, and
then, once we are in the process of creating the new framework, think
about simplifying the M2M framework again?


Best regards,
Tomasz

Greetings,
Sebastian


Sebastien and I authored this without giving it much thought, but we believe
it massively simplifies our handling of DRC (dynamic resolution change).

The main difference is that we added ignore_streaming to the ctx, so that
drivers can opt in to this mode of operation, thinking it would avoid any
potential side effects in drivers that aren't prepared for it. We didn't
want to tie it to buffered; this is open to discussion of course. We do use
buffered on both queues, with a slightly more advanced job_ready function
that takes our driver state into account.
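
(A rough sketch of the shape of that opt-in; the real code is in the
gitlab links above, so the exact naming and placement here are only a
guess:)

        /* in __v4l2_m2m_try_queue() */
        if (!m2m_ctx->ignore_streaming &&
            (!m2m_ctx->out_q_ctx.q.streaming
             || !m2m_ctx->cap_q_ctx.q.streaming)) {
                dprintk("Streaming needs to be on for both queues\n");
                return;
        }

        /* and in the decoder driver, once the m2m ctx is set up */
        ctx->fh.m2m_ctx->ignore_streaming = true;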

In short, Sebastien and I agree this small change is the right direction;
we simply have a different implementation. I can send it as an RFC if
anyone believes it would be useful now (even without a user).

> dprintk("Streaming needs to be on for both queues\n");
> return;
> }
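
(For completeness: with the hunk above, a decoder opts in through the
existing buffered flags. A sketch of how a driver might do that from its
context init path; this is illustrative, not code from the patch:)

        /* let jobs be scheduled before the CAPTURE queue is streaming */
        v4l2_m2m_set_dst_buffered(ctx->fh.m2m_ctx, true);

        /*
         * The driver's job_ready() callback then has to hold jobs back
         * until enough bitstream metadata (e.g. the sequence header) has
         * been parsed, since scheduling is no longer gated on the CAPTURE
         * queue streaming.
         */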