RE: [PATCH v3 00/12] Consolidate domain cache invalidation

From: Tian, Kevin
Date: Tue Apr 23 2024 - 05:07:28 EST


> From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
> Sent: Tuesday, April 16, 2024 4:07 PM
>
> The IOMMU hardware cache needs to be invalidated whenever the mappings
> in the domain are changed. Currently, domain cache invalidation is
> scattered across different places, causing several issues:
>
> - IOMMU IOTLB Invalidation: This is done by iterating over the domain's
>   per-IOMMU domain IDs using the following code:
>
>     xa_for_each(&dmar_domain->iommu_array, i, info)
>             iommu_flush_iotlb_psi(info->iommu, dmar_domain,
>                                   start_pfn, nrpages,
>                                   list_empty(&gather->freelist), 0);
>
>   This code could theoretically cause a use-after-free problem because
>   there is no lock protecting the "info" pointer within the loop.
>
> - Inconsistent Invalidation Methods: Different domain types implement
>   their own cache invalidation methods, making the code difficult to
>   maintain. For example, the DMA domain, SVA domain, and nested domain
>   have similar cache invalidation code scattered across different files.
>
> - SVA Domain Inconsistency: The SVA domain implementation uses a
>   completely different data structure to track attached devices compared
>   to other domains. This creates unnecessary differences and, even
>   worse, leads to duplicate IOTLB invalidations when an SVA domain is
>   attached to devices belonging to the same IOMMU.
>
> - Nested Domain Dependency: The special overlap between a nested domain
>   and its parent domain requires a dedicated parent_domain_flush()
>   helper function to be called everywhere the parent domain's mappings
>   change.
>
> - Limited Debugging Support: There are currently no debugging aids
>   available for domain cache invalidation.
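
For the use-after-free mentioned in the first bullet, I assume the series
ends up serializing the walk against attach/detach. Just to illustrate the
point (the helper name and the choice of dmar_domain->lock are my
assumptions, not taken from the patches, and whether a spinlock is suitable
around the queued-invalidation path is a separate question):

static void flush_iotlb_all_iommus(struct dmar_domain *dmar_domain,
                                   unsigned long start_pfn,
                                   unsigned long nrpages,
                                   struct iommu_iotlb_gather *gather)
{
        struct iommu_domain_info *info;
        unsigned long i;

        /* keep "info" (and the xarray entries) stable across the walk */
        spin_lock(&dmar_domain->lock);
        xa_for_each(&dmar_domain->iommu_array, i, info)
                iommu_flush_iotlb_psi(info->iommu, dmar_domain,
                                      start_pfn, nrpages,
                                      list_empty(&gather->freelist), 0);
        spin_unlock(&dmar_domain->lock);
}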
>
> By consolidating domain cache invalidation into a common location, we
> can address the issues mentioned above and improve the code's
> maintainability and debuggability.
>
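On the consolidation itself: what I pictured while reading the series is
roughly one cache-tag list per domain plus a single flush helper shared by
the DMA, SVA and nested domains. A rough sketch with made-up names (the
cache_tag struct, the cache_tags/cache_lock fields and cache_tag_flush_one()
are illustrative shorthand, not necessarily what the patches define):

/*
 * One tag per (IOMMU, domain-id) or (device, PASID) that needs
 * invalidation when the domain's mappings change.  All names below
 * are illustrative shorthand, not taken from the patches.
 */
struct cache_tag {
        struct list_head node;
        struct intel_iommu *iommu;      /* IOTLB / PASID-cache tags */
        struct device *dev;             /* device-TLB tags */
        u16 domain_id;
        ioasid_t pasid;
};

static void domain_flush_range(struct dmar_domain *domain,
                               unsigned long start, unsigned long end)
{
        struct cache_tag *tag;
        unsigned long flags;

        spin_lock_irqsave(&domain->cache_lock, flags);
        list_for_each_entry(tag, &domain->cache_tags, node)
                /* dispatch the invalidation appropriate to the tag type */
                cache_tag_flush_one(tag, start, end);
        spin_unlock_irqrestore(&domain->cache_lock, flags);
}

If the parent's tag list also carries the tags of any nested children, the
dedicated parent_domain_flush() path in the fourth bullet presumably falls
out naturally.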

There are several small nits, but overall this series looks good to me:

Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>