On Mon, Mar 7, 2022 at 1:14 PM Robin Murphy <robin.murphy@xxxxxxx> wrote:
The "overlap" is in the sense of having more than one mapping within the
same cacheline:
[ 142.458120] DMA-API: add_dma_entry start P=ba79f200 N=ba79f D=ba79f200 L=10 DMA_FROM_DEVICE attrs=0
[ 142.458156] DMA-API: add_dma_entry start P=445dc010 N=445dc D=445dc010 L=10 DMA_TO_DEVICE attrs=0
[ 142.458178] sun8i-ss 1c15000.crypto: SRC 0/1/1 445dc000 len=16 bi=0
[ 142.458215] sun8i-ss 1c15000.crypto: DST 0/1/1 ba79f200 len=16 bi=0
[ 142.458234] DMA-API: add_dma_entry start P=ba79f210 N=ba79f D=ba79f210 L=10 DMA_FROM_DEVICE attrs=0
This actually illustrates exactly the reason why this is unsupportable.
ba79f200 is mapped for DMA_FROM_DEVICE, therefore subsequently mapping
ba79f210 for DMA_TO_DEVICE may cause the cacheline covering the range
ba79f200-ba79f23f to be written back over the top of data that the
device has already started to write to memory. Hello data corruption.
Separate DMA mappings should be from separate memory allocations,
respecting ARCH_DMA_MINALIGN.
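A minimal sketch of that pattern, for illustration only (this is not the sun8i-ss driver's actual code: "dev" and "len" are placeholders, the error handling is simplified, and it assumes kmalloc() returns buffers at least ARCH_DMA_MINALIGN-aligned on non-coherent architectures, so the two mappings can never land in the same cacheline):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

/* Hypothetical helper: map source and destination from *separate*
 * allocations so their mappings cannot share a cacheline. */
static int example_map_buffers(struct device *dev, size_t len)
{
        void *src, *dst;
        dma_addr_t src_dma, dst_dma;

        src = kmalloc(len, GFP_KERNEL);         /* own allocation */
        dst = kmalloc(len, GFP_KERNEL);         /* own allocation */
        if (!src || !dst)
                goto err_free;

        src_dma = dma_map_single(dev, src, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, src_dma))
                goto err_free;

        dst_dma = dma_map_single(dev, dst, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, dst_dma))
                goto err_unmap_src;

        /* ... program src_dma/dst_dma into the hardware ... */
        return 0;

err_unmap_src:
        dma_unmap_single(dev, src_dma, len, DMA_TO_DEVICE);
err_free:
        kfree(dst);
        kfree(src);
        return -ENOMEM;
}

The only point of the sketch is the allocation strategy: each mapping starts in its own kmalloc() object, so a writeback of one buffer's cacheline can never clobber data the device is writing into the other.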
Hmm... I know I'm missing something here, but how does this align with
the following from active_cacheline_insert() in kernel/dma/debug.c?
/* If the device is not writing memory then we don't have any
 * concerns about the cpu consuming stale data. This mitigates
 * legitimate usages of overlapping mappings.
 */
if (entry->direction == DMA_TO_DEVICE)
        return 0;
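To spell out how I'm reading that check, here is a toy user-space model of it (grossly simplified: the real code in kernel/dma/debug.c tracks active cachelines in a radix tree and counts overlaps rather than using a flat array, and the cacheline-number math here is only approximate):

#include <stdio.h>
#include <stdbool.h>

#define CACHELINE_SHIFT 6               /* assume 64-byte cachelines */
#define NUM_CLN         1024

enum dir { DMA_TO_DEVICE, DMA_FROM_DEVICE };

static bool active[NUM_CLN];            /* "cacheline already has a tracked mapping" */

static int insert(unsigned long paddr, enum dir d)
{
        unsigned long cln = (paddr >> CACHELINE_SHIFT) % NUM_CLN;

        if (d == DMA_TO_DEVICE)         /* mirrors the early return above */
                return 0;
        if (active[cln]) {
                printf("overlap in cacheline bucket %#lx\n", cln);
                return -1;              /* -EEXIST in the real code -> warning */
        }
        active[cln] = true;
        return 0;
}

int main(void)
{
        insert(0xba79f200UL, DMA_FROM_DEVICE);  /* tracked */
        insert(0x445dc010UL, DMA_TO_DEVICE);    /* skipped by the early return */
        insert(0xba79f210UL, DMA_FROM_DEVICE);  /* same cacheline -> flagged */
        return 0;
}

So, as I read it, only mappings the device may write to are tracked at all, which is why only the two DMA_FROM_DEVICE entries in the log above can collide.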