Re: using DMA-API on ARM

From: Catalin Marinas
Date: Mon Dec 08 2014 - 11:50:58 EST


On Mon, Dec 08, 2014 at 04:38:57PM +0000, Arnd Bergmann wrote:
> On Monday 08 December 2014 17:22:44 Arend van Spriel wrote:
> > >> The log: first the ring allocation info is printed. Starting at
> > >> 16.124847, rings 2, 3 and 4 are the rings used for device-to-host
> > >> transfers. In this log the failure is on a read of ring 3, which
> > >> has 1024 entries of 16 bytes each. The next thing printed is the
> > >> kernel page tables, then some OpenWrt info and the logging of part
> > >> of the connection setup. At 1780.130752 the logging of the failure
> > >> starts. The sequence number is modulo 253, which with a ring size
> > >> of 1024 matches an "old" entry (read 40, expected 52). Then the
> > >> different pointers are printed, followed by the kernel page table.
> > >> The code then does a cache invalidate on the dma_handle, and on
> > >> the next read the sequence number is correct.
> > >
> > > How do you invalidate the cache? A dma_handle is of type dma_addr_t
> > > and we don't define an operation for that, nor does it make sense
> > > on an allocation from dma_alloc_coherent(). What happens if you
> > > take out the invalidate?
> >
> > dma_sync_single_for_cpu(, DMA_FROM_DEVICE), which ends up invalidating
> > the cache (or that is our suspicion).
>
> I'm not sure about that:
>
> static void arm_dma_sync_single_for_cpu(struct device *dev,
> 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
> {
> 	unsigned int offset = handle & (PAGE_SIZE - 1);
> 	struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
> 	__dma_page_dev_to_cpu(page, offset, size, dir);
> }
>
> Assuming a noncoherent linear (no IOMMU, no swiotlb, no dmabounce) mapping,
> dma_to_pfn will return the correct pfn here, but pfn_to_page will return a
> page pointer into the kernel linear mapping,

Or a highmem page, both should be handled by dma_cache_maint_page().

> which is not the same
> as the pointer you get from __alloc_remap_buffer(). The pointer that
> was returned from dma_alloc_coherent is a) non-cacheable, and b) not the
> same one that you flush here.

Correct. But apart from the fact that you don't need to flush buffers
allocated with dma_alloc_coherent(), the above sync_single would work on
ARMv7 where the D-cache is PIPT, so the virtual address doesn't matter
much as long as it maps the same physical address.
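
To spell out the two usage models (a minimal sketch; the size and the
function names below are made up for illustration, not taken from the
driver in question):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

#define RING_BYTES	(1024 * 16)	/* e.g. 1024 entries of 16 bytes */

/*
 * Coherent allocation: the CPU pointer returned here is non-cacheable
 * (or the platform is coherent), so no dma_sync_*() calls are needed
 * before reading what the device wrote.
 */
static void *ring_alloc_coherent(struct device *dev, dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, RING_BYTES, dma, GFP_KERNEL);
}

/*
 * Streaming mapping: the buffer stays in normal cacheable memory, so
 * ownership has to be passed between device and CPU explicitly.
 */
static dma_addr_t ring_map_streaming(struct device *dev, void *ring_va)
{
	return dma_map_single(dev, ring_va, RING_BYTES, DMA_FROM_DEVICE);
}

static void ring_read_streaming(struct device *dev, void *ring_va,
				dma_addr_t ring_dma)
{
	/* give the buffer back to the CPU (invalidates stale cache lines) */
	dma_sync_single_for_cpu(dev, ring_dma, RING_BYTES, DMA_FROM_DEVICE);

	/* ... read the ring entries through ring_va here ... */

	/* hand the buffer back to the device before its next writes */
	dma_sync_single_for_device(dev, ring_dma, RING_BYTES, DMA_FROM_DEVICE);
}

Only the streaming model needs the sync calls; a buffer obtained from
dma_alloc_coherent() should be safe to read directly.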

--
Catalin