[PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous

From: Joe Jin
Date: Tue Oct 23 2018 - 23:09:16 EST


Commit 4855c92dbb7 ("xen-swiotlb: fix the check condition for
xen_swiotlb_free_coherent") only fixed the memory address check
condition in xen_swiotlb_free_coherent(); when the memory is not
physically contiguous and is still exchanged with Xen via
xen_destroy_contiguous_region(), it leads to a kernel panic.

The correct check condition should be that the memory is in the DMA
area and physically contiguous.
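
For clarity, the intended logic can be sketched as below. This is
illustrative only and not part of the patch; the helper name is made
up, while range_straddles_page_boundary() is the existing check in
swiotlb-xen.c:

    /*
     * Illustrative sketch, not part of the patch: the buffer may be
     * handed back to Xen via xen_destroy_contiguous_region() only when
     * both conditions hold; otherwise it is just freed locally.
     */
    static bool can_return_to_xen(dma_addr_t dev_addr, u64 dma_mask,
                                  phys_addr_t phys, size_t size)
    {
            /* The buffer lies entirely below the device's DMA mask ... */
            if (dev_addr + size - 1 > dma_mask)
                    return false;
            /* ... and is physically contiguous (no MFN discontinuity). */
            return !range_straddles_page_boundary(phys, size);
    }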

Thank you, Boris, for pointing this out.

Signed-off-by: Joe Jin <joe.jin@xxxxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
Cc: John Sobecki <john.sobecki@xxxxxxxxxx>
---
drivers/xen/swiotlb-xen.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index f5c1af4ce9ab..aed92fa019f9 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (((dev_addr + size - 1 <= dma_mask)) ||
-	    range_straddles_page_boundary(phys, size))
+	if ((dev_addr + size - 1 <= dma_mask) &&
+	    !range_straddles_page_boundary(phys, size))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
--
2.17.1 (Apple Git-112)