Re: [PATCH v7 11/11] iommu/vt-d: Add svm/sva invalidate function

From: Lu Baolu
Date: Sat Oct 26 2019 - 03:06:09 EST


Hi again,

On 10/26/19 10:40 AM, Lu Baolu wrote:
Hi,

On 10/25/19 3:27 PM, Tian, Kevin wrote:
From: Jacob Pan [mailto:jacob.jun.pan@xxxxxxxxxxxxxxx]
Sent: Friday, October 25, 2019 3:55 AM

When Shared Virtual Address (SVA) is enabled for a guest OS via
vIOMMU, we need to provide invalidation support at the IOMMU API and
driver level. This patch adds an Intel VT-d specific function to
implement the IOMMU passdown invalidation API for shared virtual
address.

The use case is to support caching structure invalidation for
assigned SVM-capable devices. The emulated IOMMU exposes the queue
invalidation capability and passes all descriptors from the guest
down to the physical IOMMU.

Specifically, you may clarify that only invalidations related to
the first-level page table are passed down, because it is the guest
structure that is bound to the first level. Other descriptors are
emulated or translated into other necessary operations.


The assumption is that the guest-to-host device ID mapping should be
resolved prior to calling the IOMMU driver. Based on the device handle,
the host IOMMU driver can replace certain fields before submitting the
descriptor to the invalidation queue.
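
As a rough sketch of the field-replacement idea (the struct and helper
names below are illustrative only, not the actual intel-iommu code), the
host side would resolve the assigned device to its physical requester ID
and overwrite the source ID supplied by the guest before queueing the
descriptor:

#include <linux/pci.h>

/* Illustrative descriptor: only the fields relevant to the fixup. */
struct inv_desc_sketch {
	u16 sid;	/* requester (source) ID of the target device */
	u32 pasid;	/* PASID carried in the guest descriptor */
	u64 addr;	/* address/granularity payload */
};

/* Resolve the assigned device handle to its host requester ID. */
static u16 resolve_host_sid(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	return PCI_DEVID(pdev->bus->number, pdev->devfn);
}

/*
 * The guest fills in its own (virtual) source ID; the host driver
 * replaces it with the physical one before submitting the descriptor
 * to the hardware invalidation queue.
 */
static void fixup_guest_desc(struct device *dev, struct inv_desc_sketch *desc)
{
	desc->sid = resolve_host_sid(dev);
	/* ... submit to the invalidation queue ... */
}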

What is "device ID"? It is a bit of a confusing term here.


Signed-off-by: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
Signed-off-by: Ashok Raj <ashok.raj@xxxxxxxxx>
Signed-off-by: Liu, Yi L <yi.l.liu@xxxxxxxxxxxxxxx>
---
 drivers/iommu/intel-iommu.c | 170 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 5fab32fbc4b4..a73e76d6457a 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -5491,6 +5491,175 @@ static void intel_iommu_aux_detach_device(struct iommu_domain *domain,
	aux_domain_remove_dev(to_dmar_domain(domain), dev);
 }

+/*
+ * 2D array for converting and sanitizing IOMMU generic TLB granularity to
+ * VT-d granularity. Invalidation is typically included in the unmap
+ * operation as a result of a DMA or VFIO unmap. However, an assigned
+ * device may own its first-level page tables without being shadowed by
+ * QEMU; in that case there is no unmap passed down to the host IOMMU as
+ * a result of an unmap in the guest, and only invalidations are trapped
+ * and passed down.
+ * In all cases, only first-level TLB invalidation (request with PASID)
+ * can be passed down, therefore we do not include IOTLB granularity for
+ * requests without PASID (second level).
+ *
+ * For example, to find the VT-d granularity encoding for IOTLB type
+ * and page-selective granularity within PASID:
+ * X: indexed by iommu cache type
+ * Y: indexed by enum iommu_inv_granularity
+ * [IOMMU_CACHE_INV_TYPE_IOTLB][IOMMU_INV_GRANU_ADDR]
+ *
+ * The granu_map array indicates validity of the table. 1: valid, 0: invalid
+ */
+const static int inv_type_granu_map[IOMMU_CACHE_INV_TYPE_NR][IOMMU_INV_GRANU_NR] = {
+	/* PASID based IOTLB, support PASID selective and page selective */
+	{0, 1, 1},
+	/* PASID based dev TLBs, only support all PASIDs or single PASID */
+	{1, 1, 0},
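
A rough sketch of how such a validity map is typically consulted (the
helper and the inv_type_granu_table name below are assumptions for
illustration, not necessarily the code in this patch): reject a
(type, granularity) pair that is not marked valid before converting it
to the VT-d encoding.

static inline int to_vtd_granularity(int type, int granu, u64 *vtd_granu)
{
	/* Bounds check plus validity lookup in the 2D map above. */
	if (type >= IOMMU_CACHE_INV_TYPE_NR || granu >= IOMMU_INV_GRANU_NR ||
	    !inv_type_granu_map[type][granu])
		return -EINVAL;

	/* inv_type_granu_table is assumed to hold the VT-d encodings. */
	*vtd_granu = inv_type_granu_table[type][granu];

	return 0;
}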

I forgot the previous discussion. Is it necessary to pass down dev TLB
invalidation requests? Can they be handled by the host IOMMU driver
automatically?

For host SVA, when memory is unmapped, the driver callback will invalidate
the dev IOTLB explicitly. So I guess we need to pass it down for the guest
case. This is also required for guest IOVA over first-level usage, as far
as I can see.


Sorry, I confused guest vIOVA and guest vSVA. For guest vIOVA, no device
TLB invalidation is passed down. But currently for guest vSVA, device TLB
invalidation is passed down. Perhaps we can avoid passing down the dev TLB
flush, just like what we are doing for guest IOVA.

Best regards,
baolu