Re: [PATCH 1/2] drm: add cache support for arm64
From: Christoph Hellwig
Date: Thu Aug 08 2019 - 03:58:33 EST
On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> I'm fairly confident that the linear/direct map cacheable alias is not
> torn down when pages are allocated. The generic page allocation code
> doesn't do so, and I see nothing in the shmem code to do so.
It is not torn down anywhere.
> For arm64, we can tear down portions of the linear map, but that has to
> be done explicitly, and this is only possible when using rodata_full. If
> not using rodata_full, it is not possible to dynamically tear down the
> cacheable alias.
Interesting. For this or the next merge window I plan to add support to the
generic DMA code to remap pages as uncacheable in place, based on the
openrisc code. As far as I can tell, the requirement for that is
basically just that the kernel direct mapping doesn't use PMD or bigger
mappings, so that it supports changing protection bits on a per-PTE basis.
Is that the case with arm64 + rodata_full?
> > My understanding is that a cacheable alias is "ok", with some
> > caveats, i.e. that the cacheable alias is not accessed.
>
> Unfortunately, that is not true. You'll often get away with it in
> practice, but that's a matter of probability rather than a guarantee.
>
> You cannot prevent a CPU from accessing a VA arbitrarily (e.g. as the
> result of wild speculation). The ARM ARM (ARM DDI 0487E.a) points this
> out explicitly:
Well, if we want to fix this properly we'll have to remap in place
for dma_alloc_coherent and friends.