On 21-02-07 18:40:36, Keqian Zhu wrote:
Hi Yi,
On 2021/2/7 17:56, Yi Sun wrote:
Hi,
On 21-01-28 23:17:41, Keqian Zhu wrote:
[...]
+static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
+				     struct vfio_dma *dma)
+{
+	struct vfio_domain *d;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		/* Go through all domain anyway even if we fail */
+		iommu_split_block(d->domain, dma->iova, dma->size);
+	}
+}
This should be a switch to prepare for dirty log start. Per the Intel
VT-d spec, there is an SLADE bit defined in the Scalable-Mode PASID
Table Entry; it enables the Accessed/Dirty flags in second-level paging
entries. So a generic IOMMU interface here is better: for the Intel
IOMMU it enables SLADE, and for ARM it splits the block mapping.
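Something along these lines, perhaps (only a sketch to illustrate the
idea; the switch_dirty_log hook and the ops layout below are made-up
names, not an existing kernel interface):

/*
 * Illustration only: a generic "switch dirty log" entry point that lets
 * each vendor IOMMU driver do its own preparation. All names here are
 * hypothetical.
 */
#include <linux/errno.h>
#include <linux/types.h>

struct iommu_domain;

struct dirty_log_ops {
	/*
	 * Intel: set SLADE in the scalable-mode PASID table entry.
	 * ARM SMMU: split block mappings down to page granularity.
	 */
	int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
				unsigned long iova, size_t size);
};

static int iommu_switch_dirty_log(const struct dirty_log_ops *ops,
				  struct iommu_domain *domain, bool enable,
				  unsigned long iova, size_t size)
{
	if (!ops || !ops->switch_dirty_log)
		return -ENODEV;

	return ops->switch_dirty_log(domain, enable, iova, size);
}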
Indeed, a generic interface name is better.

The vendor IOMMU driver performs its own vendor-specific actions to
start dirty logging, and the Intel IOMMU and the ARM SMMU may differ.
Besides, we may add more actions to the ARM SMMU driver in the future.

One question: though I am not familiar with the Intel IOMMU, I think it
should also split block mappings besides enabling SLADE. Right?
I am not familiar with the ARM SMMU. :) So I want to clarify: is the
block in the SMMU a big page, e.g. a 2MB page? Intel VT-d manages
memory per page: 4KB/2MB/1GB.
There are two ways to manage dirty pages (a small sketch of the
granularity difference follows the list):
1. Keep the default granularity and just set SLADE to enable dirty tracking.
2. Split big pages into 4KB pages to get finer granularity.
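To make the difference concrete, here is a rough sketch (plain
userspace C, numbers only; this is not vfio code): with a 4KB-granule
dirty bitmap, a single write into a 2MB mapping dirties the whole
block, while after splitting only the touched 4KB page is reported.

/*
 * Illustration only: how many 4KB dirty-bitmap bits a single DMA write
 * sets, depending on the mapping granularity it lands in.
 */
#include <stdio.h>

#define SZ_4K	0x1000UL
#define SZ_2M	0x200000UL

/* A mapping is reported dirty at its own granularity. */
static unsigned long dirty_bits(unsigned long mapping_size)
{
	return mapping_size / SZ_4K;
}

int main(void)
{
	/* Way 1: keep the 2MB block -> the whole block is reported dirty. */
	printf("2MB block kept: %lu bits set\n", dirty_bits(SZ_2M));

	/* Way 2: split to 4KB -> only the written page is reported dirty. */
	printf("split to 4KB  : %lu bits set\n", dirty_bits(SZ_4K));

	return 0;
}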
But the question about the second solution is whether it can benefit
user space, e.g. live migration. If my understanding of the SMMU block
(i.e. the big page) is correct, have you collected any performance data
to show that the split improves performance? Thanks!
Thanks,
Keqian