Re: [PATCH 06/21] powerpc: dma-mapping: minimize for_cpu flushing
From: Christophe Leroy
Date: Mon Mar 27 2023 - 08:57:41 EST
On 27/03/2023 14:13, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@xxxxxxxx>
>
> The powerpc dma_sync_*_for_cpu() variants do more flushes than on other
> architectures. Reduce it to what everyone else does:
>
> - No flush is needed after data has been sent to a device
>
> - When data has been received from a device, the cache only needs to
> be invalidated to clear out cache lines that were speculatively
> prefetched.
>
> In particular, the second flushing of partial cache lines of bidirectional
> buffers is actively harmful -- if a single cache line is written by both
> the CPU and the device, flushing it again does not maintain coherency
> but instead overwrites the data that was just received from the device.
Hum..... Who is right?

That behaviour was introduced by commit 03d70617b8a7 ("powerpc: Prevent
memory corruption due to cache invalidation of unaligned DMA buffer").

I think your commit log should explain why that commit was wrong, and
maybe say that your patch is a revert of that commit?
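
For reference, the situation both commits seem to care about is a
DMA_FROM_DEVICE (or bidirectional) buffer whose start or end is not
cache-line aligned, so the buffer shares a cache line with unrelated
data that the CPU may write while the transfer is in flight. A minimal
sketch of such a layout (hypothetical names, not taken from any
in-tree driver):

	/* Hypothetical layout, for illustration only: foo_priv, rx_buf
	 * and cpu_state are made-up names.
	 */
	struct foo_priv {
		u8	rx_buf[100];	/* mapped DMA_FROM_DEVICE	*/
		u32	cpu_state;	/* written by the CPU meanwhile	*/
	};

If the cache line size is 32 bytes, the last line covering rx_buf also
holds cpu_state. On arch_sync_dma_for_cpu(), invalidating that line can
discard a dirty CPU store to cpu_state (the corruption 03d70617b8a7
wanted to avoid), while flushing it writes the stale CPU copy back and
can overwrite the bytes the device just placed in rx_buf (the corruption
your patch describes). So whichever cache operation is picked, a buffer
that shares a cache line with live CPU data cannot be made coherent; if
that is the argument for dropping the alignment check, it would be good
to have it spelled out in the commit log.
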
Christophe
>
> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
> ---
> arch/powerpc/mm/dma-noncoherent.c | 18 ++++--------------
> 1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
> index f10869d27de5..e108cacf877f 100644
> --- a/arch/powerpc/mm/dma-noncoherent.c
> +++ b/arch/powerpc/mm/dma-noncoherent.c
> @@ -132,21 +132,11 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
> switch (direction) {
> case DMA_NONE:
> BUG();
> - case DMA_FROM_DEVICE:
> - /*
> - * invalidate only when cache-line aligned otherwise there is
> - * the potential for discarding uncommitted data from the cache
> - */
> - if ((start | end) & (L1_CACHE_BYTES - 1))
> - __dma_phys_op(start, end, DMA_CACHE_FLUSH);
> - else
> - __dma_phys_op(start, end, DMA_CACHE_INVAL);
> - break;
> - case DMA_TO_DEVICE: /* writeback only */
> - __dma_phys_op(start, end, DMA_CACHE_CLEAN);
> + case DMA_TO_DEVICE:
> break;
> - case DMA_BIDIRECTIONAL: /* writeback and invalidate */
> - __dma_phys_op(start, end, DMA_CACHE_FLUSH);
> + case DMA_FROM_DEVICE:
> + case DMA_BIDIRECTIONAL:
> + __dma_phys_op(start, end, DMA_CACHE_INVAL);
> break;
> }
> }