[PATCH v5 6/6] swiotlb: Remove pointless stride adjustment for allocations >= PAGE_SIZE

From: Will Deacon
Date: Wed Feb 28 2024 - 08:41:39 EST


For swiotlb allocations >= PAGE_SIZE, the slab search historically
adjusted the stride to avoid checking unaligned slots. However, this is
no longer needed: the surrounding code has since been reworked so that
the stride is calculated directly from the required alignment.

Either 'alloc_align_mask' specifies the allocation alignment or the DMA
'min_align_mask' aligns the allocation with 'orig_addr'. At least one
of these masks is always non-zero.
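
To see why the removed floor is redundant, consider the page-aligned
case. The following standalone userspace sketch (illustrative only, not
part of this patch) assumes 4KiB pages and models get_max_slots() from
kernel/dma/swiotlb.c; the printf scaffolding is hypothetical:

	#include <stdio.h>

	#define IO_TLB_SHIFT	11	/* swiotlb slot size: 2KiB */
	#define PAGE_SHIFT	12	/* assumption: 4KiB pages */

	/* Modelled on get_max_slots() in kernel/dma/swiotlb.c. */
	static unsigned long get_max_slots(unsigned long boundary_mask)
	{
		return (boundary_mask >> IO_TLB_SHIFT) + 1;
	}

	int main(void)
	{
		/* A page-aligned allocation passes PAGE_SIZE - 1. */
		unsigned long alloc_align_mask = (1UL << PAGE_SHIFT) - 1;

		/* Stride derived from the alignment mask alone. */
		printf("stride from mask: %lu\n",
		       get_max_slots(alloc_align_mask));

		/* Floor imposed by the check being removed. */
		printf("removed floor:    %d\n",
		       PAGE_SHIFT - IO_TLB_SHIFT + 1);
		return 0;
	}

Both values come out as 2 with 4KiB pages, and with larger page sizes
the mask-derived stride grows faster than the removed floor, so the
umax() in the hunk below can never raise a stride already derived from
a non-zero alignment mask.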

In light of that, remove the redundant (and slightly confusing) check.

Link: https://lore.kernel.org/r/SN6PR02MB4157089980E6FC58D5557BCED4572@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Reported-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
---
kernel/dma/swiotlb.c | 7 -------
1 file changed, 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c381a7ed718f..0d8805569f5e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1006,13 +1006,6 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
 	 */
 	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
 
-	/*
-	 * For allocations of PAGE_SIZE or larger only look for page aligned
-	 * allocations.
-	 */
-	if (alloc_size >= PAGE_SIZE)
-		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
-
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > pool->area_nslabs - area->used))
 		goto not_found;
--
2.44.0.rc1.240.g4c46232300-goog