[RFC v3 8/8] vfio/type1: Add vSVA support for IOMMU-backed mdevs
From: Liu, Yi L
Date: Wed Jan 29 2020 - 07:06:48 EST
From: Liu Yi L <yi.l.liu@xxxxxxxxx>
In recent years, mediated device pass-through frameworks (e.g. vfio-mdev)
have been used to achieve flexible device sharing across domains (e.g. VMs).
There are also hardware-assisted mediated pass-through solutions from
platform vendors, e.g. Intel VT-d scalable mode, which supports the Intel
Scalable I/O Virtualization technology. Such mdevs are called IOMMU-
backed mdevs, as the IOMMU enforces DMA isolation for them.
In the kernel, IOMMU-backed mdevs are exposed to the IOMMU layer via the
aux-domain concept, which means each mdev is protected by an iommu domain
that is an aux-domain of its physical device. Here, IOMMU-backed is
equivalent to IOMMU-capable. Details can be found in Kevin Tian's KVM
Forum presentation:
https://events19.linuxfoundation.org/wp-content/uploads/2017/12/\
Hardware-Assisted-Mediated-Pass-Through-with-VFIO-Kevin-Tian-Intel.pdf
This patch supports the NESTING IOMMU case for IOMMU-backed mdevs by
figuring out the physical device behind an IOMMU-backed mdev and then
issuing the IOMMU requests to the IOMMU layer with that physical device
and the mdev's aux-domain info.
With this patch, vSVA (Virtual Shared Virtual Addressing) can be used
on IOMMU-backed mdevs.
Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
Cc: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
Cc: Jun Tian <jun.j.tian@xxxxxxxxx>
Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Eric Auger <eric.auger@xxxxxxxxxx>
Cc: Jean-Philippe Brucker <jean-philippe.brucker@xxxxxxx>
Signed-off-by: Liu Yi L <yi.l.liu@xxxxxxxxx>
---
drivers/vfio/vfio_iommu_type1.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2168318..5aea355 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -127,6 +127,7 @@ struct vfio_regions {
struct domain_capsule {
struct iommu_domain *domain;
+ struct vfio_group *group;
void *data;
};
@@ -143,6 +144,7 @@ static int vfio_iommu_for_each_dev(struct vfio_iommu *iommu,
list_for_each_entry(d, &iommu->domain_list, next) {
dc.domain = d->domain;
list_for_each_entry(g, &d->group_list, next) {
+ dc.group = g;
ret = iommu_group_for_each_dev(g->iommu_group,
&dc, fn);
if (ret)
@@ -2372,7 +2374,12 @@ static int vfio_bind_gpasid_fn(struct device *dev, void *data)
struct iommu_gpasid_bind_data *gbind_data =
(struct iommu_gpasid_bind_data *) dc->data;
- return iommu_sva_bind_gpasid(dc->domain, dev, gbind_data);
+ if (dc->group->mdev_group)
+ return iommu_sva_bind_gpasid(dc->domain,
+ vfio_mdev_get_iommu_device(dev), gbind_data);
+ else
+ return iommu_sva_bind_gpasid(dc->domain,
+ dev, gbind_data);
}
static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
@@ -2381,7 +2388,12 @@ static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
struct iommu_gpasid_bind_data *gbind_data =
(struct iommu_gpasid_bind_data *) dc->data;
- return iommu_sva_unbind_gpasid(dc->domain, dev,
+ if (dc->group->mdev_group)
+ return iommu_sva_unbind_gpasid(dc->domain,
+ vfio_mdev_get_iommu_device(dev),
+ gbind_data->hpasid);
+ else
+ return iommu_sva_unbind_gpasid(dc->domain, dev,
gbind_data->hpasid);
}
@@ -2454,7 +2466,12 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
struct iommu_cache_invalidate_info *cache_inv_info =
(struct iommu_cache_invalidate_info *) dc->data;
- return iommu_cache_invalidate(dc->domain, dev, cache_inv_info);
+ if (dc->group->mdev_group)
+ return iommu_cache_invalidate(dc->domain,
+ vfio_mdev_get_iommu_device(dev), cache_inv_info);
+ else
+ return iommu_cache_invalidate(dc->domain,
+ dev, cache_inv_info);
}
static long vfio_iommu_type1_ioctl(void *iommu_data,
--
2.7.4