On Tue, Sep 06, 2022 at 08:44:52PM +0800, Lu Baolu wrote:
+static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid)
+{
+	struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
+	struct iommu_domain *domain;
+
+	/* Domain type specific cleanup: */
+	domain = iommu_get_domain_for_dev_pasid(dev, pasid, 0);
+	if (domain) {
+		switch (domain->type) {
+		case IOMMU_DOMAIN_SVA:
+			intel_svm_remove_dev_pasid(dev, pasid);
+			break;
+		default:
+			/* should never reach here */
+			WARN_ON(1);
+			break;
This is eventually going to need a lot more cleaning up to split out
the PASID from the SVM stuff.
SVA should *only* be a set of predefined handlers (in the core code!)
for the generic PRI mechanism; it shouldn't be entangled deeply into
PASID or the drivers like this.
When we get done with this, the flow should have the core code attach
an SVA domain to a PASID with PRI enabled, and the core code should
supply a generic PRI implementation that does the mmu_notifier stuff.
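To make it concrete, something along these lines - purely a sketch,
not against any real tree: the core_sva_* names and the notifier ops
are made up for illustration, while iommu_sva_domain_alloc() and
iommu_attach_device_pasid() are the interfaces this series is adding:

#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/mmu_notifier.h>
#include <linux/slab.h>

/* Made-up core-side container: the mm and its notifier live in the
 * core, the driver never sees them. */
struct core_sva_binding {
	struct iommu_domain *domain;
	struct mmu_notifier notifier;
	struct mm_struct *mm;
};

static void core_sva_invalidate_range(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start, unsigned long end)
{
	/* Core pushes the invalidation down through the domain here,
	 * instead of every driver registering its own notifier. */
}

static const struct mmu_notifier_ops core_sva_mmu_notifier_ops = {
	.invalidate_range = core_sva_invalidate_range,
};

static struct core_sva_binding *
core_sva_attach(struct device *dev, struct mm_struct *mm, ioasid_t pasid)
{
	struct core_sva_binding *b;
	int ret;

	b = kzalloc(sizeof(*b), GFP_KERNEL);
	if (!b)
		return ERR_PTR(-ENOMEM);

	b->mm = mm;
	b->domain = iommu_sva_domain_alloc(dev, mm);
	if (!b->domain) {
		ret = -ENOMEM;
		goto err_free;
	}

	b->notifier.ops = &core_sva_mmu_notifier_ops;
	ret = mmu_notifier_register(&b->notifier, mm);
	if (ret)
		goto err_domain;

	/* The driver just sees an SVA-type domain attached to a PASID;
	 * generic PRI/IOPF handling stays in the core. */
	ret = iommu_attach_device_pasid(b->domain, dev, pasid);
	if (ret)
		goto err_notifier;

	return b;

err_notifier:
	mmu_notifier_unregister(&b->notifier, mm);
err_domain:
	iommu_domain_free(b->domain);
err_free:
	kfree(b);
	return ERR_PTR(ret);
}
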
Also, stuff like this:
	/* We mandate that no page faults may be outstanding
	 * for the PASID when intel_svm_unbind_mm() is called.
	 * If that is not obeyed, subtle errors will happen.
	 * Let's make them less subtle... */
are going to be problematic for VFIO as well. In a VFIO world the
entire RID and its entire PASID table have to be owned by VFIO and
never shared - so these sequencing issues should be solvable.
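For instance, with a single owner for the whole RID the teardown can be
ordered so the "no outstanding faults" rule is actually enforced rather
than just asserted. Again only a sketch - the sva_teardown_pasid()
helper is made up, and iommu_detach_device_pasid() is what this series
is adding:

#include <linux/iommu.h>
#include <linux/pci.h>

/* Made-up helper: stop fault production, drain what is already queued,
 * then tear down the PASID.  Assumes the caller owns the whole RID, as
 * in the VFIO case, so disabling PRI device-wide is fine. */
static void sva_teardown_pasid(struct pci_dev *pdev,
			       struct iommu_domain *domain, ioasid_t pasid)
{
	/* 1. Stop the device from generating new page requests */
	pci_disable_pri(pdev);

	/* 2. Drain page requests already queued for this device */
	iopf_queue_flush_dev(&pdev->dev);

	/* 3. Only now detach the PASID, with no faults left in flight */
	iommu_detach_device_pasid(domain, &pdev->dev, pasid);
}
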
But this is all for further series.
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>