Re: [PATCH 1/2] drm: add cache support for arm64
From: Christoph Hellwig
Date: Thu Aug 08 2019 - 05:55:14 EST
On Wed, Aug 07, 2019 at 10:48:56AM +0200, Daniel Vetter wrote:
> > other drm drivers how do they guarantee addressability without an
> > iommu?)
>
> We use shmem to get at swappable pages. We generally just assume that
> the gpu can get at those pages, but things fall apart in fun ways:
> - some setups somehow inject bounce buffers. Some drivers just give
> up, others try to allocate a pool of pages with dma_alloc_coherent.
> - some devices are misdesigned and can't address as much memory as the
> cpu. We allocate using GFP_DMA32 to work around that.
Well, for shmem you can't really call allocators directly, right?
One thing I have in my pipeline is a dma_alloc_pages API that allocates
pages that are guaranteed to be addressable by the device, or fails
otherwise. But that doesn't really help with the shmem fs.
> Also modern gpu apis pretty much assume you can malloc() and then use
> that directly with the gpu.
Which is fine as long as the GPU itself supports full 64-bit addressing
(or always sits behind an iommu), and the platform doesn't impose
addressing limits of its own, which unfortunately some platforms shipping
right now still do :(
But userspace malloc really means dma_map_* anyway, so it's not really
relevant to the memory allocation question.