Re: [PATCH 1/2] drm: add cache support for arm64
From: Christoph Hellwig
Date: Tue Aug 06 2019 - 11:50:51 EST
On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> Agreed that drm_cflush_* isn't a great API. In this particular case
> (IIUC), I need wb+inv so that there aren't dirty cache lines that drop
> out to memory later, and so that I don't get a cache hit on
> uncached/wc mmap'ing.
So what is the use case here? Allocate pages using the page allocator
(or CMA for that matter), and then mmapping them to userspace and never
touching them again from the kernel?
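Roughly like this? (Purely a hypothetical sketch of that pattern, not code
from your patch; bouncing through the streaming DMA API is just one
existing way to get the writeback+invalidate, which is part of what we
need to sort out here.)

#include <linux/mm.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical helper: contiguous pages from alloc_pages()/CMA get one
 * pass of cache maintenance, then are handed to userspace write-combined
 * and never touched again through the kernel linear mapping.
 */
static int example_mmap_wc(struct device *dev, struct page *page,
			   size_t size, struct vm_area_struct *vma)
{
	dma_addr_t dma;

	/*
	 * Map/unmap purely for the cache maintenance side effects: on
	 * arm64's non-coherent path the map writes back dirty lines and
	 * the unmap invalidates them.
	 */
	dma = dma_map_page(dev, page, 0, size, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;
	dma_unmap_page(dev, dma, size, DMA_BIDIRECTIONAL);

	/* Userspace only ever sees a write-combined mapping. */
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
			       size, vma->vm_page_prot);
}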
> Tying it in w/ iommu seems a bit weird to me.. but maybe that is just
> me, I'm certainly willing to consider proposals or to try things and
> see how they work out.
This was just my thought, as the fit seemed easy. But maybe you'll
need to explain your use case(s) a bit more so that we can figure out
what a good high-level API is.
> Exposing the arch_sync_* API and using that directly (bypassing
> drm_cflush_*) actually seems pretty reasonable and pragmatic. I did
> have one doubt, as phys_to_virt() is only valid for kernel direct
> mapped memory (AFAIU), what happens for pages that are not in kernel
> linear map? Maybe it is ok to ignore those pages, since they won't
> have an aliased mapping?
They could have an aliased mapping in vmalloc/vmap space for example,
if you created one. We have the flush_kernel_vmap_range /
invalidate_kernel_vmap_range APIs for those, which are implemented
on architectures with VIVT caches.
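To make that concrete, a rough illustration (names made up) of how those
two helpers are meant to be used around access through a vmap() alias:

#include <linux/highmem.h>

/*
 * Illustrative only: 'vaddr' is a vmap() alias of pages that are also
 * accessed through another mapping (userspace or a device).
 */
static void example_sync_vmap_alias(void *vaddr, int size)
{
	/*
	 * After writing through the alias: push dirty lines out so the
	 * other mapping sees the data.
	 */
	flush_kernel_vmap_range(vaddr, size);

	/*
	 * Before reading data written through the other mapping: drop
	 * any stale lines held under the alias.
	 */
	invalidate_kernel_vmap_range(vaddr, size);
}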