Re: using DMA-API on ARM

From: Russell King - ARM Linux
Date: Mon Dec 08 2014 - 11:47:45 EST


On Mon, Dec 08, 2014 at 05:38:57PM +0100, Arnd Bergmann wrote:
> On Monday 08 December 2014 17:22:44 Arend van Spriel wrote:
> > >> The log: first the ring allocation info is printed. Starting at
> > >> 16.124847, rings 2, 3 and 4 are the rings used for device-to-host
> > >> transfers. In this log the failure is on a read of ring 3. Ring 3
> > >> has 1024 entries of 16 bytes each. The next thing printed is the
> > >> kernel page tables, then some OpenWRT info and the logging of part
> > >> of the connection setup. At 1780.130752 the logging of the failure
> > >> starts. The sequence number (modulo 253, with a ring size of 1024)
> > >> matches an "old" entry (read 40, expected 52). Then the different
> > >> pointers are printed, followed by the kernel page table. The code
> > >> then does a cache invalidate on the dma_handle, and on the next
> > >> read the sequence number is correct.
> > >
> > > How do you invalidate the cache? A dma_handle is of type dma_addr_t
> > > and we don't define an operation for that, nor does it make sense
> > > on an allocation from dma_alloc_coherent(). What happens if you
> > > take out the invalidate?
> >
> > dma_sync_single_for_cpu(, DMA_FROM_DEVICE) which ends up invalidating
> > the cache (or that is our suspicion).
>
> I'm not sure about that:
>
> static void arm_dma_sync_single_for_cpu(struct device *dev,
> 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
> {
> 	unsigned int offset = handle & (PAGE_SIZE - 1);
> 	struct page *page = pfn_to_page(dma_to_pfn(dev, handle - offset));
> 	__dma_page_dev_to_cpu(page, offset, size, dir);
> }
>
> Assuming a noncoherent linear (no IOMMU, no swiotlb, no dmabounce) mapping,
> dma_to_pfn will return the correct pfn here, but pfn_to_page will return a
> page pointer into the kernel linear mapping, which is not the same
> as the pointer you get from __alloc_remap_buffer(). The pointer that
> was returned from dma_alloc_coherent is a) non-cacheable, and b) not
> the same one that you flush here.
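
To make this concrete, here is a minimal sketch (hypothetical driver
code, not taken from the thread) of the allocation and the workaround
being described above, assuming a linear mapping with no IOMMU, swiotlb
or dmabounce; "dev" and "size" are placeholders:

	void *ring;		/* remapped, non-cacheable CPU pointer */
	dma_addr_t handle;	/* bus address handed to the device */

	ring = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);

	/* ... device writes ring entries; driver reads them ... */

	/* The workaround: invalidate before reading.  On ARM this lands
	 * in arm_dma_sync_single_for_cpu() quoted above, which flushes
	 * through the lowmem alias, not through "ring" itself. */
	dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);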

Having looked up the details of the Cortex CPU TRMs:

1. The caches are PIPT.
2. A non-cacheable mapping will not hit L1 cache lines which may be
   allocated against the same physical address. (This is implementation
   specific.)

So, the problem can't be the L1 cache, it has to be the L2 cache.

The L2 cache only deals with physical addresses, so it doesn't really
matter which mapping gets flushed - the result will be the same as far
as the L2 cache is concerned.
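
As an illustration, the outer-cache interface already works purely in
physical addresses; a minimal sketch (hypothetical code, assuming the
dma_addr_t is a physical address, i.e. no IOMMU or dmabounce):

	#include <asm/outercache.h>

	/* L2 (outer) cache maintenance takes physical addresses, so
	 * which CPU mapping was used to reach the data is irrelevant
	 * at this level. */
	static void l2_invalidate_sketch(dma_addr_t handle, size_t size)
	{
		outer_inv_range(handle, handle + size);
	}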

If bit 22 is not set in the auxcr, then a non-cacheable access can hit
a line in the L2 cache, one which may have been allocated there by a
speculative prefetch through the cacheable mapping.
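
The bit can be checked from privileged code; a minimal sketch
(hypothetical, ARMv7 only, and note that the auxcr is implementation
defined and on many SoCs writable only from the secure side):

	#include <linux/types.h>

	/* Read the Auxiliary Control Register (auxcr/ACTLR) and test
	 * bit 22; what the bit means is defined by the CPU's TRM. */
	static bool auxcr_bit22_set(void)
	{
		unsigned int actlr;

		/* MRC p15, 0, <Rt>, c1, c0, 1 reads ACTLR on ARMv7 */
		asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));

		return actlr & (1 << 22);
	}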

In the case which has been supplied, the physical address does indeed
have two mappings: it has a lowmem mapping which is cacheable, and it
has the DMA mapping which is marked as non-cacheable. Accesses via
the non-cacheable mapping will not hit L1 (that's implementation-specific
behaviour). However, they may hit L2 if bit 22 is clear.
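
In code terms, the two aliases look roughly like this (hypothetical
sketch, reusing the dma_to_pfn() helper from the function quoted above;
"dev" and "size" are placeholders):

	void *cpu_addr;		/* non-cacheable remapped DMA mapping */
	void *lowmem_alias;	/* cacheable lowmem (linear) mapping */
	dma_addr_t handle;

	cpu_addr = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);
	lowmem_alias = page_address(pfn_to_page(dma_to_pfn(dev, handle)));

	/* A speculative prefetch through lowmem_alias can allocate L2
	 * lines; with auxcr bit 22 clear, reads via cpu_addr may then
	 * hit those lines. */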

--
FTTC broadband for 0.8mile line: currently at 9.5Mbps down 400kbps up
according to speedtest.net.