[PATCH] vfio: If an IOMMU backend fails, keep looking

From: Alex Williamson
Date: Mon Feb 01 2016 - 18:56:37 EST


Consider an IOMMU backend to be an API rather than an implementation;
we might have multiple implementations supporting the same API, so try
another if one fails. The expectation here is that we'll really only
have one implementation per device type. For instance, the existing
type1 driver works with any PCI device where the IOMMU API is
available. A vGPU vendor may have a virtual PCI device which provides
DMA isolation and mapping through other mechanisms, but which can
re-use userspace code that makes use of the type1 VFIO IOMMU API.
This change allows that to work.
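For illustration only, the fall-through behaviour this patch gives the
driver-selection loop can be modelled by the standalone sketch below
(not kernel code; the backend names, try-open/try-attach callbacks and
return values are made up, and the real function also handles the
extension check, locking and module reference counting):

/*
 * Standalone sketch modelling "if a backend fails, keep looking":
 * walk the list of registered backends, and if one fails to open or
 * attach, fall through to the next instead of bailing out.
 */
#include <stdio.h>
#include <errno.h>

struct backend {
	const char *name;
	int (*open)(void);	/* 0 on success, -errno on failure */
	int (*attach)(void);	/* 0 on success, -errno on failure */
};

static int open_fails(void) { return -ENODEV; }
static int open_ok(void)    { return 0; }
static int attach_ok(void)  { return 0; }

static struct backend backends[] = {
	{ "vendor-vgpu", open_fails, attach_ok }, /* not compatible here */
	{ "type1",       open_ok,    attach_ok }, /* compatible backend */
};

int main(void)
{
	int ret = -EINVAL;	/* default if no backend matches */
	size_t i;

	for (i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
		struct backend *b = &backends[i];

		ret = b->open();
		if (ret)
			continue;	/* open failed: try the next one */

		ret = b->attach();
		if (ret)
			continue;	/* attach failed: release + try next */

		printf("using backend %s\n", b->name);
		break;
	}

	return ret ? 1 : 0;
}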

Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
---
drivers/vfio/vfio.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index ecca316..4cc961b 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1080,30 +1080,26 @@ static long vfio_ioctl_set_iommu(struct vfio_container *container,
continue;
}

- /* module reference holds the driver we're working on */
- mutex_unlock(&vfio.iommu_drivers_lock);
-
data = driver->ops->open(arg);
if (IS_ERR(data)) {
ret = PTR_ERR(data);
module_put(driver->ops->owner);
- goto skip_drivers_unlock;
+ continue;
}

ret = __vfio_container_attach_groups(container, driver, data);
- if (!ret) {
- container->iommu_driver = driver;
- container->iommu_data = data;
- } else {
+ if (ret) {
driver->ops->release(data);
module_put(driver->ops->owner);
+ continue;
}

- goto skip_drivers_unlock;
+ container->iommu_driver = driver;
+ container->iommu_data = data;
+ break;
}

mutex_unlock(&vfio.iommu_drivers_lock);
-skip_drivers_unlock:
up_write(&container->group_lock);

return ret;