On 2020/06/09 0:44 Robin Murphy <robin.murphy@xxxxxxx> wrote:
On 2020-06-08 16:31, Mark Brown wrote:
On Mon, Jun 08, 2020 at 03:08:45PM +0000, Robin Gong wrote:
+        if (transfer->rx_sg.sgl) {
+                struct device *rx_dev = spi->controller->dma_rx->device->dev;
+
+                dma_sync_sg_for_device(rx_dev, transfer->rx_sg.sgl,
+                                       transfer->rx_sg.nents, DMA_TO_DEVICE);
+        }
+
This is confusing - why are we DMA mapping to the device after doing
a PIO transfer?
The 'transfer->rx_sg.sgl' condition checks for the case of falling back to
PIO after a DMA transfer has failed. But the SPI core still thinks the
buffer belongs to the 'device' while the SPI driver touches it by PIO (CPU),
so sync it back to the device to ensure all received data is flushed to DDR.
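
To make that concrete, here is a rough sketch of the fallback path being
described; spi_imx_dma_transfer()/spi_imx_pio_transfer() and the error
handling are assumptions for illustration, not the actual spi-imx code:

#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/spi/spi.h>

/* Assumed helpers standing in for the driver's real DMA and PIO paths */
int spi_imx_dma_transfer(struct spi_controller *ctlr, struct spi_transfer *t);
int spi_imx_pio_transfer(struct spi_device *spi, struct spi_transfer *t);

static int transfer_one_with_pio_fallback(struct spi_controller *ctlr,
                                          struct spi_device *spi,
                                          struct spi_transfer *transfer)
{
        int ret;

        /* rx/tx buffers were already DMA-mapped by the SPI core */
        ret = spi_imx_dma_transfer(ctlr, transfer);
        if (ret != -EAGAIN)
                return ret;

        /* DMA setup failed: redo the transfer by PIO on the CPU */
        ret = spi_imx_pio_transfer(spi, transfer);
        if (ret)
                return ret;

        /*
         * The SPI core still treats the rx buffer as device-owned, so
         * the patch hands it back with dma_sync_sg_for_device() before
         * the core unmaps it.
         */
        if (transfer->rx_sg.sgl)
                dma_sync_sg_for_device(ctlr->dma_rx->device->dev,
                                       transfer->rx_sg.sgl,
                                       transfer->rx_sg.nents,
                                       DMA_TO_DEVICE);

        return 0;
}

(This mirrors the hunk quoted above, including the DMA_TO_DEVICE direction
that gets questioned below.)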
So we sync it back to the device so that we can then do another sync
to CPU? TBH I'm a bit surprised that there's a requirement that we
explicitly undo a sync and that a redundant double sync in the same
direction might be an issue but I've not had a need to care so I'm
perfectly prepared to believe there is.
At the very least this needs a comment.
Yeah, something's off here - at the very least, syncing with DMA_TO_DEVICE on
the Rx buffer that was mapped with DMA_FROM_DEVICE is clearly wrong.
CONFIG_DMA_API_DEBUG should scream about that.
If the device has written to the buffer at all since dma_map_sg() was called
then you do need a dma_sync_sg_for_cpu() call before touching it from a CPU
fallback path, but if nobody's going to touch it from that point until it's
unmapped then there's no point syncing it again. The
my_card_interrupt_handler() example in DMA-API_HOWTO.txt demonstrates
this.
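
For reference, a minimal sketch of the ownership handoff being described,
loosely modeled on the my_card_interrupt_handler() example in
DMA-API_HOWTO.txt; the wrapper function and its arguments are made up for
illustration, only the dma_* calls are the real API:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * 'dev' is the device the sg_table was mapped for, 'sgt' an rx buffer
 * that was mapped with dma_map_sg(..., DMA_FROM_DEVICE).
 */
static void rx_pio_fallback_sync(struct device *dev, struct sg_table *sgt,
                                 bool device_touches_buffer_again)
{
        /* Hand the buffer to the CPU before touching it from the PIO path */
        dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, DMA_FROM_DEVICE);

        /* ... CPU reads or fills the rx buffer by PIO here ... */

        /*
         * Syncing back is only needed if the device will access the
         * buffer again before it is unmapped, and the direction must
         * match the one used at map time (DMA_FROM_DEVICE here), never
         * DMA_TO_DEVICE.
         */
        if (device_touches_buffer_again)
                dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents,
                                       DMA_FROM_DEVICE);
}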
Thanks for your post, but sorry, that's not the spi-imx case now, because
the rx data in device memory is not actually updated by the 'device'/DMA
but by PIO, so dma_sync_sg_for_cpu() with DMA_FROM_DEVICE can't be used;
otherwise the fresh data in the cache would be invalidated. But you're
right, a kernel warning does come out if CONFIG_DMA_API_DEBUG is enabled...