Re: [PATCH v1 2/2] swiotlb: Fix slot alignment checks

From: Dexuan-Linux Cui
Date: Tue Apr 04 2023 - 15:55:46 EST


On Tue, Mar 21, 2023 at 1:37 AM Petr Tesarik
<petrtesarik@xxxxxxxxxxxxxxx> wrote:
>
> From: Petr Tesarik <petr.tesarik.ext@xxxxxxxxxx>
>
> Explicit alignment and page alignment are used only to calculate
> the stride, not when checking actual slot physical address.
>
> Originally, only page alignment was implemented, and that worked
> because the whole SWIOTLB is allocated on a page boundary, so
> aligning the start index was sufficient to ensure a page-aligned
> slot.
>
> When Christoph Hellwig added support for min_align_mask, the index
> could be incremented in the search loop, potentially finding an
> unaligned slot if the minimum device alignment is between
> IO_TLB_SIZE and PAGE_SIZE. The bug could go unnoticed because the
> slot size is 2 KiB and the most common page size is 4 KiB, so there
> is no alignment value in between.
>
> IIUC the intention has been to find a slot that conforms to all
> alignment constraints: the device minimum alignment, an explicit
> alignment (given as a function parameter) and optionally page
> alignment (if the allocation size is >= PAGE_SIZE). The most
> restrictive mask can be trivially computed with logical AND. The
> rest can stay.
>
> Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
> Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
> Signed-off-by: Petr Tesarik <petr.tesarik.ext@xxxxxxxxxx>
> ---
> kernel/dma/swiotlb.c | 16 ++++++++++------
> 1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 3856e2b524b4..5b919ef832b6 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -634,22 +634,26 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
> BUG_ON(!nslots);
> BUG_ON(area_index >= mem->nareas);
>
> + /*
> + * For allocations of PAGE_SIZE or larger only look for page aligned
> + * allocations.
> + */
> + if (alloc_size >= PAGE_SIZE)
> + iotlb_align_mask &= PAGE_MASK;
> + iotlb_align_mask &= alloc_align_mask;
> +
> /*
> * For mappings with an alignment requirement don't bother looping to
> - * unaligned slots once we found an aligned one. For allocations of
> - * PAGE_SIZE or larger only look for page aligned allocations.
> + * unaligned slots once we found an aligned one.
> */
> stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
> - if (alloc_size >= PAGE_SIZE)
> - stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
> - stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
>
> spin_lock_irqsave(&area->lock, flags);
> if (unlikely(nslots > mem->area_nslabs - area->used))
> goto not_found;
>
> slot_base = area_index * mem->area_nslabs;
> - index = wrap_area_index(mem, ALIGN(area->index, stride));
> + index = area->index;
>
> for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
> slot_index = slot_base + index;
> --
> 2.39.2
>

Hi Petr, this patch has landed in mainline as:
0eee5ae10256 ("swiotlb: fix slot alignment checks")

Somehow it breaks Linux VMs on Hyper-V: a regular VM with
swiotlb=force, or a confidential VM (which uses swiotlb), fails to
boot. If I revert this patch, everything works fine.

Cc'd Tianyu/Michael and the Hyper-V list.

Thanks,
Dexuan