Re: For the problem when using swiotlb
From: Catalin Marinas
Date: Fri Nov 21 2014 - 06:36:32 EST
On Fri, Nov 21, 2014 at 11:26:45AM +0000, Arnd Bergmann wrote:
> On Friday 21 November 2014 11:06:10 Catalin Marinas wrote:
> > On Wed, Nov 19, 2014 at 03:56:42PM +0000, Arnd Bergmann wrote:
> > > On Wednesday 19 November 2014 15:46:35 Catalin Marinas wrote:
> > > > Going back to original topic, the dma_supported() function on arm64
> > > > calls swiotlb_dma_supported() which actually checks whether the swiotlb
> > > > bounce buffer is within the dma mask. This transparent bouncing (unlike
> > > > arm32 where it needs to be explicit) is not always optimal, though
> > > > required for 32-bit only devices on a 64-bit system. The problem is when
> > > > the driver is 64-bit capable but forgets to call
> > > > dma_set_mask_and_coherent() (and that's not the only question I've had
> > > > about running out of swiotlb buffers).
> > >
> > > I think it would be nice to warn once per device that starts using the
> > > swiotlb. Really all 32-bit DMA masters should have a proper IOMMU
> > > attached.
> > It would be nice to have a dev_warn_once().
> > I think it makes sense on arm64 to avoid swiotlb bounce buffers for
> > coherent allocations altogether. The __dma_alloc_coherent() function
> > already checks coherent_dma_mask and sets GFP_DMA accordingly. If we
> > have a device that cannot even cope with a 32-bit ZONE_DMA, we should
> > just not support DMA at all on it (without an IOMMU). The arm32
> > __dma_supported() has a similar check.
> If we ever encounter this case, we may have to add a smaller ZONE_DMA
> and use ZONE_DMA32 for the normal dma allocations.
Traditionally on x86, I think ZONE_DMA was for ISA and ZONE_DMA32 had to
cover the 32-bit physical address space. On arm64 we don't expect ISA,
so we only use ZONE_DMA (which is 4G, similar to IA-64 and sparc). We had
ZONE_DMA32 originally, but it broke swiotlb, which assumes ZONE_DMA for
its bounce buffer.
> > Swiotlb is still required for streaming DMA, since we get bouncing
> > for pages allocated outside the driver's control (e.g. by the VFS
> > layer, which doesn't care about GFP_DMA), in the hope that a 16M
> > bounce buffer is enough.
> > Ding seems to imply that CMA fixes the problem, which means that the
> > issue is indeed coherent allocations.
> I wonder what's going on here, since swiotlb_alloc_coherent() actually
> tries a regular __get_free_pages(flags, order) first, and when ZONE_DMA
> is set here, it just works without using the pool.
As long as coherent_dma_mask is sufficient for ZONE_DMA. I have no idea
what this mask is set to in Ding's case (but I've seen the problem
previously with an out-of-tree driver where coherent_dma_mask was some
random number; so better reporting here would help).
There could be another case where dma_pfn_offset is required, but let's
wait for some more info from Ding (ZONE_DMA covers 32 bits from the
start of RAM, and RAM could start at a 40-bit address as on Seattle, so
such devices would need to set dma_pfn_offset).