Re: [PATCH 2/2] media: v4l2-mem2mem: add a list for buf used by hw

From: Randy Li
Date: Thu Aug 03 2023 - 12:17:12 EST



On 2023/7/29 00:19, Nicolas Dufresne wrote:
> On Friday, 28 July 2023 at 15:37 +0800, Hsia-Jun Li wrote:
> > > I think this is one reason to migrate to the stateless decoder design.
> >
> > I didn't know about such a plan here. I don't think the current stateless
> > API could export the reconstruction buffers for an encoder, or the
> > post-processing buffers for a decoder, to us.
> Someone suggested introducing auxiliary queues in our meeting in Lyon a while
> ago, but I bet everyone got too busy with finalizing APIs, fixing fluster
> tests, etc. The suggestion felt like it would be possible to add it after the
> fact. This was also being discussed in the context of supporting multi-scalers
> (standalone or inline with the codec, like VC8000D+). It could also cover for
> primary and secondary buffers, along with encoder primary and reconstruction
> buffers, but also auxiliary reference data. This would also be needed to
> properly support Vulkan Video fwiw, and could also help with a transition to
> DMABuf Heaps and memory accounting.
>
> I've also had corridor discussions around having multi-instance media
> controller devices. It wasn't clear how to bind the media instance to the
> video node instances, but assuming there is a way, it would be a tad more
> flexible (but massively more complex).

I think we should answer these questions before we decide what we want:

A. Should a queue only hold buffers of the same format and size?

B. How does an application handle drivers that request an additional queue?

C. How do we sync multiple buffers in a v4l2 job?

I asked the same question as A when I discussed this in the "media: v4l2: Add DELETE_BUF ioctl" thread.

If we don't add an extra queue here, how does the driver pick the most suitable buffer for the current hardware output (CAPTURE) buffer?

If we have multiple queues in one direction, how do we make the driver select between them?
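
For context, the current mem2mem helpers give the driver no choice at all:
device_run() simply takes the head of each ready list. A rough sketch of
today's pattern (the "foo" names are hypothetical):

#include <media/v4l2-mem2mem.h>
#include <media/v4l2-fh.h>

struct foo_ctx {                /* hypothetical per-instance state */
        struct v4l2_fh fh;      /* fh.m2m_ctx holds the m2m context */
};

static void foo_device_run(void *priv)
{
        struct foo_ctx *ctx = priv;
        struct vb2_v4l2_buffer *src, *dst;

        /* The helpers return the oldest queued buffer in each direction;
         * the driver cannot filter the CAPTURE side by size or format. */
        src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
        dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);

        /* ... program the hardware with src/dst here ... */
}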


Question B is the debt we have created: some applications have gotten used to the situation where they can't control the lifetime of the reconstruction buffers in encoding, or can't turn post-processing off even when the display pipeline could support tiled format output.

We now allow userspace to decide where those buffers are allocated, but could userspace decide not to handle their lifetime?
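
For comparison, the only lifetime control userspace has today is the whole-set
VIDIOC_REQBUFS cycle; there is no way to free a single buffer, which is what
the DELETE_BUF proposal adds. A minimal userspace sketch:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int free_all_capture_buffers(int fd)
{
        struct v4l2_requestbuffers reqbufs;

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.count = 0;      /* count == 0 releases the whole set */
        reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        reqbufs.memory = V4L2_MEMORY_MMAP;

        return ioctl(fd, VIDIOC_REQBUFS, &reqbufs);
}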


Question C may be more related to complex cases like camera sensors and ISPs. With this auxiliary queue, multiple video nodes would not be necessary anymore.

But an ISP may not require all the data to finish its path; for example, if the ISP is not satisfied with the focus point its sensor detected, or with the light level, it may just drop the image data and shoot again.

Also, the poll event can only tell us which direction could do the dequeue/enqueue work; it can't tell us which queue is ready. Should we introduce something like a sync point (fence fd) here?
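
To illustrate the limitation: poll() on a video fd reports only a direction,
never a specific queue. A sketch of what userspace can learn today:

#include <poll.h>

static void wait_for_buffers(int video_fd)
{
        struct pollfd pfd = { .fd = video_fd, .events = POLLIN | POLLOUT };

        if (poll(&pfd, 1, -1) > 0) {
                if (pfd.revents & POLLIN) {
                        /* some CAPTURE buffer is ready to dequeue,
                         * but we cannot tell which queue it belongs to */
                }
                if (pfd.revents & POLLOUT) {
                        /* some OUTPUT buffer slot is free to queue */
                }
        }
}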


We may be paving the way to V4L3, as Tomasz suggested, although I don't want to be the one to take that risk. If we are going to make a V4L3-like thing, we had better get the design right so that it can handle any future problems.


> Nicolas