The default segment_boundary_mask was set to DMA_BIT_MASK(32)
a decade ago, following the SCSI/block subsystem, since a
32-bit mask was good enough for most devices at the time.
Nowadays more and more drivers set dma_masks above
DMA_BIT_MASK(32), while only a handful of them call
dma_set_seg_boundary(). This means that most drivers end up
with a 4GB segmentation boundary, because the DMA API returns
the 32-bit default value, even though the devices might not
actually have such a limit.
The default segment_boundary_mask should mean "no limit",
since the device did not explicitly set a mask. But a 32-bit
mask certainly limits devices capable of addressing beyond
32 bits.
This 32-bit boundary mask can also cause real breakage: when
dma-iommu maps a DMA buffer larger than 4GB, iommu_map_sg()
cuts the IOVA region into discontiguous pieces at each 4GB
boundary, creating a faulty IOVA mapping that overlaps
physical memory outside the scatterlist. This can lead to a
random kernel panic after the DMA overwrites that faulty
IOVA space.
So this patch sets the default segment_boundary_mask to ULONG_MAX.
Signed-off-by: Nicolin Chen <nicoleotsuka@xxxxxxxxx>
---
include/linux/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 330ad58fbf4d..ff8cefe85f30 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -736,7 +736,7 @@ static inline unsigned long dma_get_seg_boundary(struct device *dev)
{
if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
return dev->dma_parms->segment_boundary_mask;
- return DMA_BIT_MASK(32);
+ return ULONG_MAX;
}
static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)