Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

From: Andrew F. Davis
Date: Thu Jul 25 2019 - 14:06:57 EST


On 7/25/19 10:04 AM, Christoph Hellwig wrote:
> On Thu, Jul 25, 2019 at 09:31:50AM -0400, Andrew F. Davis wrote:
>> But that's just it: dma-buf does not assume buffers are backed by normal
>> kernel-managed memory; it is up to the buffer exporter where and when to
>> allocate the memory. The memory backing this SRAM heap does not have the
>> normal struct page backing. So moving the map, sync, etc. functions to
>> common code would fail for this and many other heap types. This was a
>> major problem with Ion and is what prompted this new design.
>
> The code clearly shows it has page backing, e.g. this:
>
> + sg_set_page(table->sgl, pfn_to_page(PFN_DOWN(buffer->paddr)), buffer->len, 0);
>
> and the fact that it (and the dma-buf API) uses scatterlists, which
> require pages.
>

Pages yes, but not "normal" pages from the kernel-managed area.
page_to_pfn() will return bad values on the pages returned by this
allocator, and so will any of the kernel sync/map functions. Therefore
those operations cannot be made common and need per-heap handling.
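To make that concrete, a per-heap map callback for a buffer like this ends
up looking roughly like the sketch below (illustration only, with made-up
names such as sram_heap_buffer / sram_heap_map_dma_buf, not the actual code
from this patch). The exporter knows the backing is one contiguous carveout,
so it builds the sg_table itself from the physical address instead of going
through any generic struct-page based helper:

/* Hypothetical per-heap map callback for a physically contiguous carveout. */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/pfn.h>
#include <linux/slab.h>
#include <linux/err.h>

struct sram_heap_buffer {
	phys_addr_t paddr;	/* start of the carveout backing this buffer */
	size_t len;
};

static struct sg_table *sram_heap_map_dma_buf(struct dma_buf_attachment *attach,
					       enum dma_data_direction dir)
{
	struct sram_heap_buffer *buffer = attach->dmabuf->priv;
	struct sg_table *table;
	int ret;

	table = kzalloc(sizeof(*table), GFP_KERNEL);
	if (!table)
		return ERR_PTR(-ENOMEM);

	ret = sg_alloc_table(table, 1, GFP_KERNEL);
	if (ret) {
		kfree(table);
		return ERR_PTR(ret);
	}

	/* One contiguous entry covering the whole carveout. */
	sg_set_page(table->sgl, pfn_to_page(PFN_DOWN(buffer->paddr)),
		    buffer->len, 0);

	if (!dma_map_sg(attach->dev, table->sgl, table->nents, dir)) {
		sg_free_table(table);
		kfree(table);
		return ERR_PTR(-ENOMEM);
	}

	return table;
}

The matching unmap/end_cpu_access callbacks would need the same per-heap
knowledge to undo this, which is exactly why these operations can't simply
be hoisted into common code for every heap type.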

Andrew