diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7cae631c1baa..33449523afbe 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3174,3 +3174,24 @@ void iommu_detach_device_pasid(struct iommu_domain *domain,
 	iommu_group_put(group);
 }
+
+struct iommu_domain *iommu_get_domain_for_dev_pasid(struct device *dev,
+						    ioasid_t pasid)
+{
+	struct iommu_domain *domain;
+	struct iommu_group *group;
+
+	if (!pasid_valid(pasid))
+		return NULL;
+
+	group = iommu_group_get(dev);
+	if (!group)
+		return NULL;
+
+	mutex_lock(&group->mutex);

Unfortunately this still causes the deadlock when unbind() flushes the
IOPF queue while holding the group mutex.
If we make this function private to IOPF, then we can get rid of this
mutex_lock(). It's OK because:
* xarray protects its internal state with RCU, so we can call
xa_load() outside the lock.
* The domain obtained from xa_load() is finalized. Its content is valid
  because the xarray stores the domain using rcu_assign_pointer(), whose
  release memory barrier pairs with the data dependencies on the IOPF
  side (domain->sva_ioas etc).
We'll need to be careful about this when allowing other users to install
a fault handler. Should be fine as long as the handler and data are
installed before the domain is added to pasid_array.
* We know the domain is valid the whole time IOPF is using it, because
unbind() waits for pending faults.
We just need a comment explaining the last point, something like:
/*
* Safe to fetch outside the group mutex because:
* - xarray protects its internal state with RCU
* - the domain obtained is either NULL or fully formed
* - the IOPF work is the only caller and is flushed before the
* domain is freed.
*/
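
Putting the suggestion together, the IOPF-private variant with that
comment folded in might look like the following sketch; the function
name is an assumption, and this is not a drop-in replacement for the
hunk above:

```c
static struct iommu_domain *
iopf_get_domain_for_dev_pasid(struct device *dev, ioasid_t pasid)
{
	struct iommu_domain *domain;
	struct iommu_group *group;

	if (!pasid_valid(pasid))
		return NULL;

	group = iommu_group_get(dev);
	if (!group)
		return NULL;

	/*
	 * Safe to fetch outside the group mutex because:
	 * - xarray protects its internal state with RCU
	 * - the domain obtained is either NULL or fully formed
	 * - the IOPF work is the only caller and is flushed before the
	 *   domain is freed.
	 */
	domain = xa_load(&group->pasid_array, pasid);
	iommu_group_put(group);

	return domain;
}
```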
+	domain = xa_load(&group->pasid_array, pasid);
+	mutex_unlock(&group->mutex);
+	iommu_group_put(group);
+
+	return domain;
+}

Thanks,
Jean