[PATCH v2] vfio/type1: handle case where IOMMU does not support PAGE_SIZE size

From: Eric Auger
Date: Thu Oct 29 2015 - 13:50:11 EST


The current vfio_pgsize_bitmap code hides the supported IOMMU page
sizes smaller than PAGE_SIZE. As a result, if the IOMMU does not
support the PAGE_SIZE page size, the alignment check on map/unmap
is done with larger page sizes, if any. This check can fail even
though the mapping could be done with pages smaller than PAGE_SIZE.

This patch modifies the vfio_pgsize_bitmap implementation so that,
if the IOMMU supports page sizes smaller than PAGE_SIZE, we pretend
PAGE_SIZE is supported and hide the sub-PAGE_SIZE sizes. That way the
user is able to map/unmap buffers whose size/start address is aligned
with PAGE_SIZE. The pinning code uses that granularity, while the
IOMMU driver can use the sub-PAGE_SIZE sizes to map the buffer.

Signed-off-by: Eric Auger <eric.auger@xxxxxxxxxx>
Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
Acked-by: Will Deacon <will.deacon@xxxxxxx>

---

This was tested on AMD Seattle with a 64kB page host. The ARM MMU-401
currently exposes 4kB, 2MB and 1GB page support. With a 64kB page host,
the map/unmap check is done against 2MB, so some alignment checks fail
and VFIO_IOMMU_MAP_DMA fails even though we could map using the 4kB
IOMMU page size.
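
Below is a minimal standalone sketch (not part of the patch) that
replays the bitmap arithmetic with the numbers from this test setup:
an MMU-401 pgsize_bitmap of 4kB|2MB|1GB and a 64kB host PAGE_SIZE.
The hard-coded PAGE_SIZE/PAGE_MASK definitions and the printed mask
derivation are illustrative only.

/*
 * Standalone userspace illustration of the pgsize bitmap adjustment;
 * PAGE_SIZE and the sample pgsize_bitmap mirror the Seattle test
 * above and are hard-coded for demonstration only.
 */
#include <stdio.h>

#define SZ_4K		(1UL << 12)
#define SZ_64K		(1UL << 16)
#define SZ_2M		(1UL << 21)
#define SZ_1G		(1UL << 30)

#define PAGE_SIZE	SZ_64K			/* 64kB page host */
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long pgsizes = SZ_4K | SZ_2M | SZ_1G;	/* MMU-401 */
	unsigned long before, after;

	/* Old code: start from PAGE_MASK, so the 4kB bit is dropped
	 * and 2MB becomes the smallest advertised size. */
	before = PAGE_MASK & pgsizes;

	/* New code: start from ULONG_MAX, then swap any sub-PAGE_SIZE
	 * bits for a single PAGE_SIZE bit. */
	after = ~0UL & pgsizes;
	if (after & ~PAGE_MASK) {
		after &= PAGE_MASK;
		after |= PAGE_SIZE;
	}

	/* The map/unmap paths derive their alignment mask from the
	 * lowest advertised size (lowest set bit). */
	printf("before: min granule 0x%lx, align mask 0x%lx\n",
	       before & -before, (before & -before) - 1);
	printf("after:  min granule 0x%lx, align mask 0x%lx\n",
	       after & -after, (after & -after) - 1);
	return 0;
}

With a 64kB host the alignment mask drops from 0x1fffff to 0xffff,
which is why the 64kB-aligned mappings in the test above now succeed.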

v1 -> v2:
- correct PAGE_HOST typo in comment and commit msg
- add Will's Acked-by

RFC -> PATCH v1:
- move all modifications into vfio_pgsize_bitmap, following Alex's
suggestion to expose fake PAGE_SIZE support
- restore WARN_ON's
---
drivers/vfio/vfio_iommu_type1.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..59d47cb 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -403,13 +403,26 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
{
struct vfio_domain *domain;
- unsigned long bitmap = PAGE_MASK;
+ unsigned long bitmap = ULONG_MAX;

mutex_lock(&iommu->lock);
list_for_each_entry(domain, &iommu->domain_list, next)
bitmap &= domain->domain->ops->pgsize_bitmap;
mutex_unlock(&iommu->lock);

+ /*
+ * In case the IOMMU supports page sizes smaller than PAGE_SIZE
+ * we pretend PAGE_SIZE is supported and hide sub-PAGE_SIZE sizes.
+ * That way the user will be able to map/unmap buffers whose size/
+ * start address is aligned with PAGE_SIZE. Pinning code uses that
+ * granularity while iommu driver can use the sub-PAGE_SIZE size
+ * to map the buffer.
+ */
+ if (bitmap & ~PAGE_MASK) {
+ bitmap &= PAGE_MASK;
+ bitmap |= PAGE_SIZE;
+ }
+
return bitmap;
}

--
1.9.1
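
For completeness, here is a hedged sketch of what the change looks
like from userspace on such a host: VFIO_IOMMU_GET_INFO now reports
PAGE_SIZE in iova_pgsizes, and a PAGE_SIZE-aligned/sized
VFIO_IOMMU_MAP_DMA that used to be rejected with -EINVAL goes
through. The container fd, the IOVA and the omitted error handling
are placeholders, not taken from the patch.

/* Hypothetical userspace usage sketch; container_fd and iova are
 * placeholders and error handling is omitted for brevity. */
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int map_one_page(int container_fd, unsigned long long iova)
{
	struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };
	struct vfio_iommu_type1_dma_map map = { .argsz = sizeof(map) };
	long page = sysconf(_SC_PAGESIZE);	/* 64kB on this host */
	void *buf;

	/* With the patch, iova_pgsizes advertises PAGE_SIZE even
	 * though the SMMU itself only supports 4kB/2MB/1GB. */
	ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info);
	printf("iova_pgsizes: 0x%llx\n",
	       (unsigned long long)info.iova_pgsizes);

	buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (unsigned long)buf;
	map.iova  = iova;	/* PAGE_SIZE aligned */
	map.size  = page;	/* PAGE_SIZE sized */

	/* Previously failed with -EINVAL (2MB alignment required);
	 * succeeds once the IOMMU driver maps it with 4kB pages. */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}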
