From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Sent: Thursday, February 29, 2024 5:46 PM
+
+/*
+ * Cache invalidation for changes to a scalable-mode context table
+ * entry.
+ *
+ * Section 6.5.3.3 of the VT-d spec:
+ * - Device-selective context-cache invalidation;
+ * - Domain-selective PASID-cache invalidation to affected domains
+ * (can be skipped if all PASID entries were not-present);
+ * - Domain-selective IOTLB invalidation to affected domains;
the spec talks about domain-selective invalidation, but the code
actually does global invalidation (see the sketch after the quoted
function below).
+ * - Global Device-TLB invalidation to affected functions.
+ *
+ * Note that RWBF (Required Write-Buffer Flushing) capability has
+ * been deprecated for scalable mode. Section 11.4.2 of the VT-d spec:
+ *
+ * HRWBF: Hardware implementations reporting Scalable Mode Translation
+ * Support (SMTS) as Set also report this field as Clear.
RWBF info is a bit weird given existing code doesn't touch it
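for reference, "touching it" in legacy mode would be roughly the
pattern below, using the existing cap_rwbf()/iommu_flush_write_buffer()
helpers. Just a sketch (the wrapper name is made up); per the quoted
11.4.2 note it is a no-op on scalable-mode capable hardware anyway:

static void flush_write_buffer_if_required(struct intel_iommu *iommu)
{
	/*
	 * Legacy-mode write-buffer flushing; HRWBF is reported Clear
	 * whenever SMTS is Set, so nothing happens in scalable mode.
	 */
	if (cap_rwbf(iommu->cap))
		iommu_flush_write_buffer(iommu);
}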
+ */
+static void sm_context_flush_caches(struct device *dev)
+{
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
+	struct intel_iommu *iommu = info->iommu;
+
+	iommu->flush.flush_context(iommu, 0, PCI_DEVID(info->bus, info->devfn),
+				   DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
+	qi_flush_pasid_cache(iommu, 0, QI_PC_GLOBAL, 0);
+	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+	devtlb_invalidation_with_pasid(iommu, dev, IOMMU_NO_PASID);
+}
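to illustrate the domain-selective point above, something like the
below is what the spec-described flow could look like. Just a sketch:
the helper name and the 'did' parameter are made up here, since nothing
in this patch tracks which domain IDs are affected (and in general it
would have to loop over all of them):

static void sm_context_flush_caches_dsi(struct device *dev, u16 did)
{
	struct device_domain_info *info = dev_iommu_priv_get(dev);
	struct intel_iommu *iommu = info->iommu;

	/* Device-selective context-cache invalidation, as in the patch */
	iommu->flush.flush_context(iommu, 0, PCI_DEVID(info->bus, info->devfn),
				   DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
	/* Domain-selective PASID-cache invalidation for the affected domain */
	qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
	/* Domain-selective IOTLB invalidation for the affected domain */
	iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
	/* Device-TLB invalidation stays per-function, as the spec asks */
	devtlb_invalidation_with_pasid(iommu, dev, IOMMU_NO_PASID);
}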
+
+static void context_entry_teardown_pasid_table(struct intel_iommu *iommu,
+					       struct context_entry *context)
+{
+	context_clear_entry(context);
+	if (!ecap_coherent(iommu->ecap))
+		clflush_cache_range(context, sizeof(*context));
this is __iommu_flush_cache(). You can use it throughout this and
the 2nd series.
+}
+
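i.e. with the existing helper the teardown collapses to roughly this
(sketch of the suggested cleanup):

static void context_entry_teardown_pasid_table(struct intel_iommu *iommu,
					       struct context_entry *context)
{
	context_clear_entry(context);
	/* __iommu_flush_cache() already checks ecap_coherent() internally */
	__iommu_flush_cache(iommu, context, sizeof(*context));
}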
+void intel_pasid_teardown_sm_context(struct device *dev)
+{
it's clearer to call it just intel_teardown_sm_context. pasid_table is
only one field in the context entry, so having 'pasid' lead the name is
slightly confusing.