Re: [PATCH 2/6] swiotlb: do not panic on mapping failures

From: Konrad Rzeszutek Wilk
Date: Fri May 18 2018 - 15:26:00 EST


On Tue, May 15, 2018 at 08:05:19PM +0200, Christoph Hellwig wrote:
> We now have error handling in map_single/map_page callers (most of them

Which ones are missing? Shouldn't we first fix those before we rip this out?

> anyway). As swiotlb_tbl_map_single already prints a useful warning
> when running out of swiotlb pool swace we can also remove swiotlb_full

s/swace/space/

> entirely as it serves no purpose now.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
> lib/swiotlb.c | 33 +--------------------------------
> 1 file changed, 1 insertion(+), 32 deletions(-)
>
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> index 721f93677eee..4d36340bc4f9 100644
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -763,34 +763,6 @@ static bool swiotlb_free_buffer(struct device *dev, size_t size,
> return true;
> }
>
> -static void
> -swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
> - int do_panic)
> -{
> - if (swiotlb_force == SWIOTLB_NO_FORCE)
> - return;
> -
> - /*
> - * Ran out of IOMMU space for this operation. This is very bad.
> - * Unfortunately the drivers cannot handle this operation properly.
> - * unless they check for dma_mapping_error (most don't)
> - * When the mapping is small enough return a static buffer to limit
> - * the damage, or panic when the transfer is too big.
> - */
> - dev_err_ratelimited(dev, "DMA: Out of SW-IOMMU space for %zu bytes\n",
> - size);
> -
> - if (size <= io_tlb_overflow || !do_panic)
> - return;
> -
> - if (dir == DMA_BIDIRECTIONAL)
> - panic("DMA: Random memory could be DMA accessed\n");
> - if (dir == DMA_FROM_DEVICE)
> - panic("DMA: Random memory could be DMA written\n");
> - if (dir == DMA_TO_DEVICE)
> - panic("DMA: Random memory could be DMA read\n");
> -}
> -
> /*
> * Map a single buffer of the indicated size for DMA in streaming mode. The
> * physical address to use is returned.
> @@ -819,10 +791,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
>
> /* Oh well, have to allocate and map a bounce buffer. */
> map = map_single(dev, phys, size, dir, attrs);
> - if (map == SWIOTLB_MAP_ERROR) {
> - swiotlb_full(dev, size, dir, 1);
> + if (map == SWIOTLB_MAP_ERROR)
> return __phys_to_dma(dev, io_tlb_overflow_buffer);
> - }
>
> dev_addr = __phys_to_dma(dev, map);
>
> @@ -950,7 +920,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
> if (map == SWIOTLB_MAP_ERROR) {
> /* Don't panic here, we expect map_sg users
> to do proper error handling. */
> - swiotlb_full(hwdev, sg->length, dir, 0);
> attrs |= DMA_ATTR_SKIP_CPU_SYNC;
> swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
> attrs);
> --
> 2.17.0
>