On 2024/8/17 0:38, Jacob Pan wrote:
> On Thu, 15 Aug 2024 14:52:21 +0800
> Tina Zhang <tina.zhang@xxxxxxxxx> wrote:
>
>> @@ -270,7 +343,8 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
>>  	u64 type = DMA_TLB_PSI_FLUSH;
>>
>>  	if (domain->use_first_level) {
>> -		qi_flush_piotlb(iommu, tag->domain_id, tag->pasid, addr, pages, ih);
>> +		qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
>> +				    pages, ih, domain->qi_batch);
>>  		return;
>>  	}
>> @@ -287,7 +361,8 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
>>  	}
>>
>>  	if (ecap_qis(iommu->ecap))
>> -		qi_flush_iotlb(iommu, tag->domain_id, addr | ih, mask, type);
>> +		qi_batch_add_iotlb(iommu, tag->domain_id, addr | ih, mask, type,
>> +				   domain->qi_batch);
>
> If I understand this correctly, the IOTLB flush may be deferred until the
> batch array is full, right? If so, is there a security gap where
> callers think the mapping is gone after the call returns?

No. All related caches are flushed before the function returns. A domain
can have multiple cache tags. Previously, we sent individual cache
invalidation requests to hardware. This change combines all necessary
invalidation requests into a single batch and raises them to hardware
together to make it more efficient.
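
To illustrate the pattern being discussed, here is a minimal, self-contained
sketch, not the actual driver code. The names demo_batch, demo_batch_add,
demo_submit_to_hw and demo_flush_domain are made up for illustration, and the
descriptor layout is simplified. It shows requests being accumulated into a
fixed-size batch, submitted early only when the array fills up, and always
drained before the flush helper returns, so callers never see a stale mapping
after the call.

#include <stdio.h>

#define DEMO_MAX_BATCHED 16	/* illustrative batch capacity, not the real limit */

/* Stand-in for a hardware invalidation descriptor. */
struct demo_desc {
	unsigned long addr;
	unsigned long pages;
};

/* Stand-in for the batch: an array of descriptors plus a fill index. */
struct demo_batch {
	struct demo_desc descs[DEMO_MAX_BATCHED];
	unsigned int index;
};

/* Pretend to submit all queued descriptors to hardware in one shot. */
static void demo_submit_to_hw(struct demo_batch *batch)
{
	if (!batch->index)
		return;
	printf("submitting %u invalidation descriptors\n", batch->index);
	batch->index = 0;
}

/* Queue one request; submit early only if the batch array is full. */
static void demo_batch_add(struct demo_batch *batch, unsigned long addr,
			   unsigned long pages)
{
	batch->descs[batch->index].addr = addr;
	batch->descs[batch->index].pages = pages;
	if (++batch->index == DEMO_MAX_BATCHED)
		demo_submit_to_hw(batch);
}

/*
 * A flush walks every cache tag of the domain, queues one request per tag,
 * and submits whatever is still pending before returning, so the caller can
 * rely on the invalidation being complete when the call returns.
 */
static void demo_flush_domain(struct demo_batch *batch, unsigned long addr,
			      unsigned long pages, unsigned int nr_tags)
{
	for (unsigned int i = 0; i < nr_tags; i++)
		demo_batch_add(batch, addr, pages);
	demo_submit_to_hw(batch);	/* drain the batch before returning */
}

int main(void)
{
	struct demo_batch batch = { .index = 0 };

	/* Three cache tags -> one combined submission instead of three. */
	demo_flush_domain(&batch, 0x1000, 4, 3);
	return 0;
}

The point of the reply is captured by the last step of demo_flush_domain: the
drain happens inside the flush path itself, so batching only changes how many
descriptors go to hardware per submission, not when the caller may assume the
mapping is gone.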