Re: [PATCH v2 0/3] DCMI bridge support
From: Hugues FRUCHET
Date: Mon Jun 24 2019 - 06:10:42 EST
Hi Sakari,
> - Where's the sub-device representing the bridge itself?
The bridge sub-device is the one pointed to by [1]: drivers/media/i2c/st-mipid02.c
> - As the driver becomes MC-centric, crop configuration takes place through
>   V4L2 sub-device interface, not through the video device node.
> - Same goes for accessing sensor configuration: it does not take place
> through video node but through the sub-device nodes.
Our objective is to be able to support either a simple parallel sensor
or a CSI-2 sensor connected through a bridge without any change on the
userspace side, because no additional processing or conversion is
involved, only deserialisation.
With the proposed set of patches we succeeded in doing so: the same
non-regression test campaign passes with the OV5640 parallel sensor
(STM32MP1 evaluation board) or the OV5640 CSI-2 sensor (Avenger96 board
with D3 mezzanine board).
We don't want the driver to be MC-centric; media controller support was
required only to get access to the set of functions needed to link and
walk through sub-devices: media_create_pad_link(),
media_entity_remote_pad(), etc...
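As an illustration of what we mean, here is a minimal sketch (not the
actual DCMI patch code; entity and pad indices are illustrative) of the
only kind of MC usage we need, creating the link to the source
sub-device and walking back through the connected pad:

/* Minimal sketch, not the actual DCMI code: link the source sub-device
 * (bridge or sensor) to the capture entity and check what feeds it,
 * using only in-kernel MC helpers. */
static int dcmi_link_source(struct media_entity *source,
			    struct media_entity *sink)
{
	struct media_pad *remote;
	int ret;

	/* source pad 0 -> sink pad 0, enabled and immutable */
	ret = media_create_pad_link(source, 0, sink, 0,
				    MEDIA_LNK_FL_ENABLED |
				    MEDIA_LNK_FL_IMMUTABLE);
	if (ret)
		return ret;

	/* walk back from the sink pad to the sub-device feeding it */
	remote = media_entity_remote_pad(&sink->pads[0]);
	if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
		return -ENODEV;

	return 0;
}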
We tried, with the v1 version of this patchset, delegating sub-device
handling to userspace by using media-ctl, but this requires configuring
the whole pipeline for every single change of resolution or format
before making any capture with v4l2-ctl or GStreamer, which is quite
heavy in practice.
Benjamin made another attempt using the new libcamera codebase, but even
for a basic capture use case the negotiation code is quite tricky in
order to match the right sub-device bus format to the required V4L2
format. Moreover, it was not clear how to call the libcamera library
prior to any v4l2-ctl or GStreamer calls.
Adding around 100 lines of code into DCMI to properly configure
resolutions and formats fixes the point and allows us to keep backward
compatibility as per our objective, so it seems far more reasonable to
us to do so, even if DCMI then controls more than the sub-device it is
directly connected to.
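To give a rough idea of what those ~100 lines do, here is a simplified
sketch (not the exact patch code; per-sub-device pad index handling is
omitted) of propagating the format requested on the video node to each
sub-device of the pipeline:

/* Simplified sketch of format propagation: apply the media bus format
 * matching the VIDIOC_S_FMT request to the source sub-device, then to
 * each sub-device found upstream, until the sensor is reached. */
static int dcmi_pipeline_s_fmt(struct v4l2_subdev *source_sd,
			       struct v4l2_subdev_format *fmt)
{
	struct v4l2_subdev *sd = source_sd;
	struct media_pad *sink_pad, *remote;
	unsigned int i;
	int ret;

	fmt->which = V4L2_SUBDEV_FORMAT_ACTIVE;

	while (sd) {
		ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, fmt);
		if (ret < 0 && ret != -ENOIOCTLCMD)
			return ret;

		/* look for a sink pad: none means we reached the sensor */
		sink_pad = NULL;
		for (i = 0; i < sd->entity.num_pads; i++) {
			if (sd->entity.pads[i].flags & MEDIA_PAD_FL_SINK) {
				sink_pad = &sd->entity.pads[i];
				break;
			}
		}
		if (!sink_pad)
			break;

		/* move on to the sub-device connected upstream */
		remote = media_entity_remote_pad(sink_pad);
		if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
			break;
		sd = media_entity_to_v4l2_subdev(remote->entity);
	}

	return 0;
}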
Moreover, we found similar code in other video interface drivers such as
qcom/camss/camss.c and xilinx/xilinx-dma.c, which control the whole
pipeline, so it seems quite natural to us to go this way.
To summarize, if we cannot do the negotiation within the kernel,
delegating it to userspace implies far more complexity and breaks
compatibility with existing applications without adding any new
functionality.
Having all that in mind, what should be reconsidered in your opinion,
Sakari? Do you have any alternatives?
Best regards,
Hugues.
On 6/20/19 6:17 PM, Sakari Ailus wrote:
> Hi Hugues,
>
> On Tue, Jun 11, 2019 at 10:48:29AM +0200, Hugues Fruchet wrote:
>> This patch series allows connecting a non-parallel camera sensor to
>> DCMI thanks to a bridge connected in between such as STMIPID02 [1].
>>
>> Media controller support is introduced first, then support of
>> several sub-devices within pipeline with dynamic linking
>> between them.
>> In order to keep backward compatibility with applications
>> relying on V4L2 interface only, format set on video node
>> is propagated to all sub-devices connected to camera interface.
>>
>> [1] https://www.spinics.net/lists/devicetree/msg278002.html
>
> General notes on the set, not related to any single patch:
>
> - Where's the sub-device representing the bridge itself?
>
> - As the driver becomes MC-centric, crop configuration takes place through
> V4L2 sub-device interface, not through the video device node.
>
> - Same goes for accessing sensor configuration: it does not take place
> through video node but through the sub-device nodes.
>