Subject:
Re: [RFC]: shmem fd for non-DMA buffer sharing cross drivers
From:
Pekka Paalanen <ppaalanen@xxxxxxxxx>
Date:
8/25/23, 15:40
To:
Hsia-Jun Li <Randy.Li@xxxxxxxxxxxxx>
CC:
Tomasz Figa <tfiga@xxxxxxxxxxxx>, linux-mm@xxxxxxxxx, dri-devel@xxxxxxxxxxxxxxxxxxxxx, Linux Media Mailing List <linux-media@xxxxxxxxxxxxxxx>, hughd@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, Simon Ser <contact@xxxxxxxxxxx>, Hans Verkuil <hverkuil-cisco@xxxxxxxxx>, daniels@xxxxxxxxxxxxx, ayaka <ayaka@xxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, Nicolas Dufresne <nicolas@xxxxxxxxxxxx>

On Wed, 23 Aug 2023 15:11:23 +0800
Hsia-Jun Li <Randy.Li@xxxxxxxxxxxxx> wrote:

> There may be some misunderstanding.
>
> On 8/23/23 12:46, Tomasz Figa wrote:
> > Hi Hsia-Jun,
> >
> > On Tue, Aug 22, 2023 at 8:14 PM Hsia-Jun Li <Randy.Li@xxxxxxxxxxxxx> wrote:
> > > Hello
> > >
> > > I would like to introduce a usage of SHMEM similar to DMA-buf, the
> > > major purpose of which is sharing metadata or just a pure container
> > > across drivers.
> > >
> > > We need to exchange some sort of metadata between drivers, like
> > > dynamic HDR data between video4linux2 and DRM.
> >
> > If the metadata isn't too big, would it be enough to just have the
> > kernel copy_from_user() to a kernel buffer in the ioctl code?
> >
> > > Or the graphics frame buffer is
> > > too complex to be described with a plain per-plane DMA-buf fd.
> > > An issue between DRM and V4L2 is that DRM can only support 4 planes
> > > while V4L2 supports 8. It would be pretty hard for DRM to extend its
> > > interface to cover those 4 extra planes, since that would mean
> > > revising many standards such as Vulkan and EGL.
> >
> > Could you explain how a shmem buffer could be used to support frame
> > buffers with more than 4 planes?
>
> If you are asking why we need this:
> 1. metadata like dynamic HDR tone data
> 2. DRM also struggles with this problem; let me quote what sima said:
> "another trick that we iirc used for afbc is that sometimes the planes
> have a fixed layout
> like nv12
> and so logically it's multiple planes, but you only need one plane slot
> to describe the buffer
> since I think afbc had the "we need more than 4 planes" issue too"
> Unfortunately, there are vendor pixel formats that are not fixed layout.
> 3. Secure (REE, trusted video pipeline) info.
>
> As for how to assign such metadata:
> in the case of a drm fb_id it is simple, we just add a drm plane
> property for it. The V4L2 interface is not as flexible; we can only pass
> it with the CAPTURE request_fd as a control.
> > > Also, there is no reason to consume a device's memory for content
> > > that the device can't read, or to waste an IOMMU entry for such
> > > data.
> >
> > That's right, but DMA-buf doesn't really imply any of those. DMA-buf
> > is just a kernel object with some backing memory. It's up to the
> > allocator to decide how the backing memory is allocated and up to the
> > importer on whether it would be mapped into an IOMMU.
>
> I just want to say it can't be allocated at the same place as those
> DMA-bufs (graphics or compressed bitstream).
> This could also answer your first question: if we place this kind of
> buffer in a plane for DMABUF (importing) in V4L2, the V4L2 core would
> try to prepare it, which could map it into an IOMMU.
> > > Usually, such metadata would be the values to be written to a piece
> > > of hardware's registers; a 4KiB page would hold 1024 32-bit
> > > register items.
> > > Still, I have some problems with SHMEM:
> > > 1. I don't want the userspace to modify the contents of the SHMEM
> > > allocated by the kernel, is there a way to do so?
> >
> > This is generally impossible without doing any of the two:
> > 1) copying the contents to an internal buffer not accessible to the
> > userspace, OR
> > 2) modifying any of the buffer mappings to read-only
> >
> > 2) can actually be more costly than 1) (depending on the
> > architecture, data size, etc.), so we shouldn't just discard the
> > option of a simple copy_from_user() in the ioctl.
>
> I don't want the userspace to access it at all. So that won't be a
> problem.
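For the userspace-visible memfd case, write-sealing is an existing mechanism along the lines of option 2): once F_SEAL_WRITE is applied, no one can modify the contents through the fd anymore, only read them. A minimal sketch (the function name is illustrative; this covers a userspace-created memfd, not the kernel-allocated buffer discussed above):

```c
/* Sketch: create a memfd, fill it with metadata, then seal it so all
 * further writes and writable mappings fail with EPERM. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int make_sealed_metadata_fd(const void *data, size_t size)
{
	int fd = memfd_create("metadata", MFD_ALLOW_SEALING);

	if (fd < 0)
		return -1;
	if (ftruncate(fd, size) < 0 ||
	    pwrite(fd, data, size, 0) != (ssize_t)size)
		goto err;
	/* After this, write() and mmap(PROT_WRITE, MAP_SHARED) fail;
	 * F_SEAL_SEAL prevents the seals themselves from being removed. */
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE | F_SEAL_SHRINK |
				   F_SEAL_GROW | F_SEAL_SEAL) < 0)
		goto err;
	return fd;
err:
	close(fd);
	return -1;
}
```

Note that F_SEAL_WRITE fails with EBUSY if writable mappings still exist, so the kernel would have to seal before handing the fd out.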
If userspace cannot access things like an image's HDR metadata, then it
will be impossible for userspace to program KMS with the correct color
pipeline, or to send the intended HDR metadata to a video sink.
You cannot leave userspace out of HDR metadata handling, because quite
probably the V4L2 buffer is not the only thing on screen. That means
there must be composition of multiple sources with different image
properties and metadata, which means it is no longer obvious what HDR
metadata should be sent to the video sink.
Even if it is a TV-like application rather than a windowed desktop, you
will still have other content to composite: OSD (volume indicators,
channel indicators, program guide, ...), subtitles, channel logos,
notifications... These components ideally should not change their
appearance arbitrarily as the main program content and metadata change.
Either the metadata sent to the video sink is kept static and
the main program adapted on the fly, or main program metadata is sent
to the video sink and the additional content is adapted on the fly.
There is only one set of HDR metadata and one composited image that can
be sent to a video sink, so both must be chosen and produced correctly
at the source side. This cannot be done automatically inside KMS kernel
drivers.
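To make the "one set of HDR metadata" point concrete: whatever policy the compositor picks, it ends up building a single static-metadata blob for the connector's HDR_OUTPUT_METADATA property. A sketch of that blob, with the struct layout mirrored from the kernel uapi for illustration (production code should include the real header, and would pass the result to drmModeCreatePropertyBlob()):

```c
/* Sketch: build the static HDR metadata blob userspace attaches to the
 * KMS connector's HDR_OUTPUT_METADATA property. Struct layout mirrored
 * from the drm uapi here for self-containment. */
#include <stdint.h>
#include <string.h>

struct hdr_metadata_infoframe {
	uint8_t eotf;
	uint8_t metadata_type;
	struct { uint16_t x, y; } display_primaries[3];
	struct { uint16_t x, y; } white_point;
	uint16_t max_display_mastering_luminance;
	uint16_t min_display_mastering_luminance;
	uint16_t max_cll;	/* maximum content light level, cd/m2 */
	uint16_t max_fall;	/* maximum frame-average light level, cd/m2 */
};

struct hdr_output_metadata {
	uint32_t metadata_type;
	struct hdr_metadata_infoframe hdmi_metadata_type1;
};

/* Fill a PQ (SMPTE ST 2084) blob; the compositor chooses max_cll and
 * max_fall once for the whole composited image. */
struct hdr_output_metadata make_pq_metadata(uint16_t max_cll,
					    uint16_t max_fall)
{
	struct hdr_output_metadata m;

	memset(&m, 0, sizeof(m));
	m.metadata_type = 0;		/* Static Metadata Type 1 */
	m.hdmi_metadata_type1.eotf = 2;	/* SMPTE ST 2084 (PQ) */
	m.hdmi_metadata_type1.metadata_type = 0;
	m.hdmi_metadata_type1.max_cll = max_cll;
	m.hdmi_metadata_type1.max_fall = max_fall;
	return m;
}
```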
Thanks,
pq
> > > 2. Should I create a helper function for installing the SHMEM file
> > > as a fd?
> >
> > We already have the udmabuf device [1] to turn a memfd into a
> > DMA-buf, so maybe that would be enough?
> >
> > [1] https://elixir.bootlin.com/linux/v6.5-rc7/source/drivers/dma-buf/udmabuf.c
> >
> > Best,
> > Tomasz
>
> It is the kernel driver that allocates this buffer. For example, a v4l2
> CAPTURE queue allocates a buffer for metadata at VIDIOC_REQBUFS time.
> Or GBM gives you an fd which is assigned to a surface.
> So we need a kernel interface.
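The udmabuf path Tomasz mentions can be sketched roughly as below: seal a memfd against shrinking (udmabuf requires that), then ask /dev/udmabuf for a DMA-buf fd backing the same pages. The function name is illustrative, and the call fails cleanly where /dev/udmabuf is absent or inaccessible:

```c
/* Sketch: turn a freshly created memfd into a DMA-buf via /dev/udmabuf.
 * Returns the DMA-buf fd, or -1 on error. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/udmabuf.h>

int memfd_to_dmabuf(size_t size)
{
	struct udmabuf_create create;
	int memfd, dev, buf = -1;

	/* udmabuf insists the memfd is sealed against shrinking. */
	memfd = memfd_create("metadata", MFD_ALLOW_SEALING);
	if (memfd < 0)
		return -1;
	if (ftruncate(memfd, size) < 0 ||
	    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK) < 0)
		goto out_memfd;

	dev = open("/dev/udmabuf", O_RDWR);
	if (dev < 0)
		goto out_memfd;

	memset(&create, 0, sizeof(create));
	create.memfd = memfd;
	create.offset = 0;	/* must be page-aligned */
	create.size = size;	/* must be a multiple of the page size */
	buf = ioctl(dev, UDMABUF_CREATE, &create); /* returns a dmabuf fd */
	close(dev);
out_memfd:
	close(memfd);		/* the DMA-buf holds its own reference */
	return buf;
}
```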
> --
> Hsia-Jun(Randy) Li