Re: [PATCH 00/20] drm: Split out the formats API and move it to a common place
From: Laurent Pinchart
Date: Tue Apr 23 2019 - 11:54:32 EST
Hi Daniel,
On Tue, Apr 23, 2019 at 09:59:37AM +0100, Daniel Stone wrote:
> On Tue, 23 Apr 2019 at 08:26, Daniel Vetter <daniel@xxxxxxxx> wrote:
> > On Sun, Apr 21, 2019 at 01:59:04AM +0300, Laurent Pinchart wrote:
> >>>>> - drm fourcc code doesn't actually define the drm_format_info
> >>>>> uniquely, drivers can override that (that's an explicit design
> >>>>> intent of modifiers, to allow drivers to add another plane for
> >>>>> e.g. compression information). You'd need to pull that driver
> >>>>> knowledge into your format library.
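For anyone following along in the archives: the override Daniel refers to
here is the optional .get_format_info hook in struct drm_mode_config_funcs,
which lets a driver return its own drm_format_info for a given fourcc and
modifier combination. A rough driver-side sketch, with a made-up "foo"
driver and plane geometry that should be treated as illustrative only:

  /* Driver-side sketch; the geometry of plane 1 (the CCS plane) is
   * illustrative, not a statement about the real layout. */
  static const struct drm_format_info foo_ccs_formats[] = {
          { .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2,
            .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
  };

  static const struct drm_format_info *
  foo_get_format_info(const struct drm_mode_fb_cmd2 *cmd)
  {
          unsigned int i;

          if (cmd->modifier[0] != I915_FORMAT_MOD_Y_TILED_CCS)
                  return NULL; /* fall back to the core's single-plane info */

          for (i = 0; i < ARRAY_SIZE(foo_ccs_formats); i++) {
                  if (foo_ccs_formats[i].format == cmd->pixel_format)
                          return &foo_ccs_formats[i];
          }

          return NULL;
  }

  static const struct drm_mode_config_funcs foo_mode_config_funcs = {
          .get_format_info = foo_get_format_info,
          /* .fb_create, .atomic_check, ... */
  };

A format library shared with V4L2 would indeed have to account for that
hook.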
> >>
> >> That's a mistake in my opinion. We tried that in V4L2 to store metadata
> >> in a separate plane, and had to go another route eventually as it
> >> created a very bad mess.
> >
> > Just a quick clarification in the middle here: this is how the hw works.
> > It's not metadata that sw ever touches (in general; testcases that make
> > sure we display these correctly are the exception).
> >
> > There has been some talk of maybe adding a bit more mixed metadata, for
> > fast-clear colors (which aren't used by any display engine yet, afaik). That
> > would generally be written by the cpu (in the gl stack), but again read by
> > the hw (loaded as an indirect state packet most likely, or something like
> > that). So again a hw-specific layout, because the hw needs to read it.
> >
> > Pure metadata that is only of interest to the cpu/sw stack has been shot
> > down completely on the drm side too.
>
> Totally. Let's take DRM_FORMAT_XRGB8888 + I915_FORMAT_MOD_Y_TILED as
> an example. Here, there is one colour plane which is laid out in a
> documented tiled format, containing normal XRGB8888 pixels once you do
> the maths to get the correct pixel location. So that's fine.
>
> I915_FORMAT_MOD_Y_TILED_CCS has a base colour plane as above, but adds
> an auxiliary plane which has a few bits describing the state of every
> (differently-sized) tile. Before reading the tile from the colour
> plane, you look at the corresponding location in the auxiliary plane:
> if you read 0x55 from the auxiliary plane, then the entire cacheline
> is the value of the first pixel, i.e. a solid fill. Hardware takes
> advantage of this to only write out the first pixel: if you try to
> read the colour plane as Y_TILED then for solid-filled regions, only
> the first pixel of every tile will show correctly, and the rest will
> be garbage.
>
> The auxiliary plane has its own layout and placement requirements, so
> we need to carry around an offset and a stride for the auxiliary data.
> We already have this for multiple planes; stuffing it into the base
> plane would require us to reinvent the same for auxiliary data within
> a single plane.
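For readers of the archives, the way this surfaces in the UAPI is that a
CCS buffer is submitted as an ordinary two-plane framebuffer, with the
auxiliary plane getting its own pitch and offset. A minimal userspace
sketch (handles, pitches and offsets would come from the allocator, and
error handling is omitted):

  #include <stdint.h>
  #include <drm_fourcc.h>
  #include <xf86drmMode.h>

  static int add_ccs_fb(int fd, uint32_t width, uint32_t height,
                        uint32_t bo_handle, uint32_t main_pitch,
                        uint32_t aux_pitch, uint32_t aux_offset,
                        uint32_t *fb_id)
  {
          /* Both planes live in the same BO; plane 1 carries the CCS data. */
          uint32_t handles[4] = { bo_handle, bo_handle };
          uint32_t pitches[4] = { main_pitch, aux_pitch };
          uint32_t offsets[4] = { 0, aux_offset };
          uint64_t modifiers[4] = { I915_FORMAT_MOD_Y_TILED_CCS,
                                    I915_FORMAT_MOD_Y_TILED_CCS };

          return drmModeAddFB2WithModifiers(fd, width, height,
                                            DRM_FORMAT_XRGB8888,
                                            handles, pitches, offsets,
                                            modifiers, fb_id,
                                            DRM_MODE_FB_MODIFIERS);
  }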
Looks like we have different kinds of metadata to consider. On the V4L2
side, metadata usually refers to the context in which a frame was
captured; it doesn't tell you how to interpret the pixel contents.
In the case you just described, the metadata is part of the frame
contents. I agree that this is a proper use case for storing such
metadata in a plane. What I wouldn't like to see stored in a plane is,
for instance, gamma tables or similar data that configures the
processing pipeline in the display engine (I know we already have an API
for gamma tables; this is just an example).
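To spell out what I mean by "an API for gamma tables": that kind of
pipeline configuration already travels out of band in KMS, typically as a
property blob attached to the CRTC rather than as an extra plane of the
framebuffer. A rough sketch (the property id and LUT size would come from
property enumeration, and I'm skipping the atomic commit plumbing):

  #include <stdint.h>
  #include <xf86drmMode.h>

  static int set_gamma_lut(int fd, uint32_t crtc_id,
                           uint32_t gamma_lut_prop_id,
                           const struct drm_color_lut *lut,
                           uint32_t lut_size)
  {
          uint32_t blob_id;
          int ret;

          ret = drmModeCreatePropertyBlob(fd, lut, lut_size * sizeof(*lut),
                                          &blob_id);
          if (ret < 0)
                  return ret;

          return drmModeObjectSetProperty(fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                          gamma_lut_prop_id, blob_id);
  }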
> I understand at least one of the Tegra colour-compression layouts (for
> Tegra 1xx?) is similar to this.
>
> It would be good to understand what you had in mind when you said that
> using multiple planes created a mess. I haven't touched media
> encode/decode units at a low level for quite a while (hooray for
> gst-v4l2!), but I remember that they often used padding areas around
> the buffer for scratch space - maybe motion vectors or similar? That
> case is quite different to something like CCS, since the data is only
> meaningful to the media engine and must be ignored (but preserved) by
> everyone else. Using multiple planes in that case isn't appropriate,
> since it's very specific to how that hardware unit deals with that
> buffer, instead of something that every consumer needs to understand
> in order to use it.
With metadata unrelated to the pixel content, using a separate plane in
the same buffer resulted in an explosion in the number of combinations we
would have needed to support, and ultimately led to a very ill-defined
API. We decided to convey metadata related to the frame capture context
(e.g. what exposure time was used for the frame) and processing pipeline
configuration data in buffers separate from the frame itself.
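For the record, this is roughly what that ended up looking like on the
V4L2 side: the metadata is exposed on its own video node with the
metadata buffer types, queued and dequeued independently of the image
buffers. A minimal sketch (the data format fourcc is device specific, and
the usual REQBUFS/QBUF/DQBUF cycle is left out):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* "fd" is the metadata video node, separate from the image node. */
  static int setup_meta_capture(int fd)
  {
          struct v4l2_format fmt;

          memset(&fmt, 0, sizeof(fmt));
          fmt.type = V4L2_BUF_TYPE_META_CAPTURE;

          if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
                  return -1;

          /* fmt.fmt.meta.dataformat is a fourcc describing the metadata
           * layout, fmt.fmt.meta.buffersize the per-buffer size. */
          return 0;
  }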
--
Regards,
Laurent Pinchart