Re: [PATCH v1 1/1] iommu/dma: Make SG mapping and syncing robust against empty tables

From: Robin Murphy
Date: Tue May 28 2024 - 14:31:39 EST


On 2024-05-28 12:26 am, Andy Shevchenko wrote:
> The DMA mapping and syncing API may be called for an empty SG table, where
> the number of original entries is 0 and the pointer to the SG list may not
> be initialised at all. This all worked until a change to the code started
> dereferencing the SG list without checking the number of original entries
> against 0. This can lead to a NULL pointer dereference if the caller does
> not perform a preliminary check first. Statistically, only a few callers
> in the kernel perform such a check.

Yes, the ones which play the rather fragile trick of keeping a potentially-invalid scatterlist around and relying on its orig_nents value to be able to tell whether they're supposed to be using it or not. It's not the DMA API's job to hide bugs in callers trying to be clever but failing.
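
For illustration, here is a minimal sketch of that trick, and of how the crash described in the commit message comes about once the guard is dropped (hypothetical caller code; my_xfer and its fields are made up for the example and taken from no real driver):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical driver state: the table may never have been populated. */
struct my_xfer {
	struct device *dev;
	struct sg_table sgt;	/* zero-filled: orig_nents == 0, sgl == NULL */
};

static void my_xfer_complete(struct my_xfer *x)
{
	/*
	 * The fragile trick: orig_nents doubles as a "was this table ever
	 * set up and mapped?" flag guarding every DMA API call.
	 */
	if (x->sgt.orig_nents)
		dma_sync_sgtable_for_cpu(x->dev, &x->sgt, DMA_FROM_DEVICE);

	/*
	 * Drop that guard and, on an IOMMU-backed device, the empty table
	 * reaches iommu_dma_sync_sg_for_cpu() with sgl == NULL and
	 * nelems == 0; the sg_dma_is_swiotlb(sgl) test at the top of the
	 * function then dereferences the NULL list before the (empty)
	 * loop could save it.
	 */
}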

> However, any attempt to align it with the remaining 99%+ of cases would be
> a regression because of the above-mentioned, relatively recent change.
> Instead of asking callers to perform the checks, restore the status quo in
> the SG mapping and syncing callbacks so that they do not crash on an
> uninitialised SG list.

> Reported-by: Nícolas F. R. A. Prado <nfraprado@xxxxxxxxxxxxx>
> Closes: https://lore.kernel.org/all/d3679496-2e4e-4a7c-97ed-f193bd53af1d@notapiano
> Fixes: 861370f49ce4 ("iommu/dma: force bouncing if the size is not cacheline-aligned")
> Fixes: 8cc3bad9d9d6 ("spi: Remove unneded check for orig_nents")
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@xxxxxxxxxxxxxxx>
> ---
>  drivers/iommu/dma-iommu.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f731e4b2a417..83c9013aa341 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1108,6 +1108,9 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> +	if (nelems < 1)
> +		return;
> +

This nicely illustrates how wrong said callers are, since even if it *were* legitimate to attempt to map a 0-length scatterlist, it still wouldn't be valid to sync it after the initial mapping didn't succeed.
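
For completeness, the robust shape of such a caller looks more like the sketch below (again hypothetical code, not from the patch): the sync and unmap only ever follow a dma_map_sgtable() call that actually succeeded, so an empty or unmapped table never reaches the sync paths at all.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int my_do_transfer(struct device *dev, struct sg_table *sgt)
{
	int ret;

	/* Returns 0 on success or a negative errno, mapping nothing on failure. */
	ret = dma_map_sgtable(dev, sgt, DMA_FROM_DEVICE, 0);
	if (ret)
		return ret;	/* nothing mapped, so nothing to sync or unmap */

	/* ... the device writes into the buffers here ... */

	dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
	dma_unmap_sgtable(dev, sgt, DMA_FROM_DEVICE, 0);
	return 0;
}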

Thanks,
Robin.

>  	if (sg_dma_is_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
> @@ -1124,6 +1127,9 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> +	if (nelems < 1)
> +		return;
> +
>  	if (sg_dma_is_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_device(dev,
> @@ -1324,6 +1330,9 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *s;
>  	int i;
>
> +	if (nents < 1)
> +		return nents;
> +
>  	sg_dma_mark_swiotlb(sg);
>  	for_each_sg(sg, s, nents, i) {