Re: [RFC][PATCH 1/4] devicetree: bindings: Add linux,cma-heap tag for reserved memory

From: John Stultz
Date: Wed May 06 2020 - 14:34:55 EST


On Wed, May 6, 2020 at 10:35 AM Andrew F. Davis <afd@xxxxxx> wrote:
> On 5/6/20 12:30 PM, John Stultz wrote:
> > On Wed, May 6, 2020 at 9:04 AM Andrew F. Davis <afd@xxxxxx> wrote:
> >> On 5/4/20 4:50 AM, Brian Starkey wrote:
> >>> On Fri, May 01, 2020 at 11:40:16AM -0700, John Stultz wrote:
> >>>> So the name we expose is the CMA name itself. So with dt it will be
> >>>> the name of the reserved memory node that the flag property is added
> >>>> to.
> >>>>
> >>>
> >>> Yeah I'm just wondering if that's "stable" so we can say "the heap
> >>> will use the node name", or if saying that would cause us a headache
> >>> in the future.
> >>
> >>
> >> The issue is going to be this causes the node name in DT to become a
> >> kind of ABI. Right now until we have some userspace lib that enumerates
> >> the heaps in a stable way programs will hard-code the full heap name,
> >> which right now would look like:
> >>
> >> char *heap = "/dev/dma_heap/dma_heap_mem@89000000";
> >>
> >
> > If that's what the device chose to export.
> >
>
>
> Well no "device" exported it, we did mostly automatically using only DT

Sorry. By "device" I meant the board/phone/system.

> information. When making a DT I don't want to be thinking about how
> names will break userspace, for instance if node naming guidance is
> updated do apps suddenly stop working? That worries me a bit.

So when folks change an existing board/system's DT, that can cause
userland breakage, be it firmware paths or things moving under an
soc{ } node. But at the same time, just like each system has a
different partition layout, each system may have different heaps,
and it's up to a system-level config in userland to provide the
policy of what is used where.

> > Eh. I don't see this as such an issue. On different systems we have
> > different device nodes. Some boards have more or fewer NICs, or
> > various partitions, etc. There has to be some device specific userland
> > config that determines which partitions are mounted where (this is my
> > "gralloc is fstab" thesis :)
> >
>
> Oh I agree here, net interface names and /dev/<hd> names have a history
> of changing, but those did both break a lot of apps. It could be
> argued they were abusing the API by making assumptions about the
> names, but we still have old scripts floating around assuming "eth0"
> is going to just work..
>
> So the sooner we get this fstab scheme in place and in practice, the
> fewer apps in the wild will hard-code names.

Gralloc already exists on Android devices: you ask to allocate for a
use case, and it picks the heap. It could be *much* simpler (rather
than per-device implementations, I'm hoping to get to a single
implementation with an fstab-like config file), but it's already
widely used.


> > I think with the heaps, qualities other than name are going to be
> > poorly specified or unenumerable, so any generic query interface is
> > going to fall down there (and be awful to use).
>
> Sure, so this "fstab" style config will have to be a mapping of names
> (which we will have to make static per heap in kernel) to properties

I'm not sure I'm following this static per-heap requirement bit. Can
you clarify?

> that interest the current users of a system. For now I can only think of
> cached/uncached, contiguous/sg, and secure/mappable. Then maybe a list
> of devices that can consume buffers of that variety, should allow for
> simple constraint matching. I'll need to think on this a bit more as the
> use-cases show up..

There's a lot of other cases that are common on Android. One CMA heap
might be sized and reserved for camera usage, so it doesn't have to
compete with other CMA users to quickly get a bunch of frames, whereas
another CMA heap might be set aside for a HWFB that has to be
contiguous. Again, it's less about the specific attributes
(contiguous/secure/etc - though those are important considerations
when creating the mapping for it to work properly), and more about a
higher-level "for this use case or this pipeline, we use this heap"
mapping.

Just like an application might store data to /home/, which maps to a
specific partition configured on a specific system, instead of
looking for things like "what partition has the most space".

thanks
-john