> On Tue, May 26, 2020 at 07:49:07PM +0800, Zhangfei Gao wrote:
>> Some platform devices appear as PCI devices but are actually on the
>> AMBA bus, and they need a fixup in drivers/pci/quirks.c to handle
>> the iommu_fwnode. This introduces PCI_FIXUP_IOMMU, which is called
>> after the iommu_fwnode is allocated, instead of reusing
>> PCI_FIXUP_FINAL, since that would slow down iommu probing because
>> all devices on the fixup final list would be reprocessed, as
>> suggested by Joerg. [1]
>
> Is this slowdown significant? We already iterate over every device
> when applying PCI_FIXUP_FINAL quirks, so if we used the existing
> PCI_FIXUP_FINAL, we wouldn't be adding a new loop. We would only be
> adding two more iterations to the loop in pci_do_fixups() that tries
> to match quirks against the current device. I doubt that would be a
> measurable slowdown.
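
Right, the loop in question looks roughly like this (a simplified
sketch of pci_do_fixups() in drivers/pci/quirks.c; the real code also
matches on device class and handles the hook indirection differently):

static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
			  struct pci_fixup *end)
{
	/* One linear pass over a fixup section; an extra quirk entry
	 * costs only a couple of integer compares per device.
	 */
	for (; f < end; f++)
		if ((f->vendor == dev->vendor ||
		     f->vendor == (u16) PCI_ANY_ID) &&
		    (f->device == dev->device ||
		     f->device == (u16) PCI_ANY_ID))
			f->hook(dev);
}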

And indeed, I did not notice a difference when comparing fixup_iommu
and fixup_final via get_jiffies_64().
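
That was just a coarse check along these lines (a sketch; the helper
and its call site are hypothetical):

#include <linux/jiffies.h>
#include <linux/pci.h>
#include <linux/printk.h>

/* Hypothetical instrumentation: a jiffy is 1/HZ seconds (typically
 * 1-10 ms), much coarser than a few extra vendor/device compares, so
 * both variants reporting the same elapsed time is expected.
 */
static void time_fixup_pass(struct pci_dev *dev)
{
	u64 t0 = get_jiffies_64();

	pci_fixup_device(pci_fixup_final, dev);

	pr_info("fixup pass: %llu jiffies\n", get_jiffies_64() - t0);
}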

>> For example, the HiSilicon platform device needs a fixup in
>> drivers/pci/quirks.c to handle fwspec->can_stall, which is
>> introduced in [2]:
>>
>> +static void quirk_huawei_pcie_sva(struct pci_dev *pdev)
>> +{
>> +	struct iommu_fwspec *fwspec;
>> +
>> +	pdev->eetlp_prefix_path = 1;
>> +	fwspec = dev_iommu_fwspec_get(&pdev->dev);
>> +	if (fwspec)
>> +		fwspec->can_stall = 1;
>> +}
>> +
>> +DECLARE_PCI_FIXUP_IOMMU(PCI_VENDOR_ID_HUAWEI, 0xa250, quirk_huawei_pcie_sva);
>> +DECLARE_PCI_FIXUP_IOMMU(PCI_VENDOR_ID_HUAWEI, 0xa251, quirk_huawei_pcie_sva);
>>
>> [1] https://www.spinics.net/lists/iommu/msg44591.html
>> [2] https://www.spinics.net/lists/linux-pci/msg94559.html

> If you reference these in the commit logs, please use lore.kernel.org
> links instead of spinics.

Got it, thanks Bjorn.
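
P.S. For anyone skimming the thread: the plumbing behind
DECLARE_PCI_FIXUP_IOMMU would presumably mirror the existing fixup
macros. A minimal sketch, assuming a new .pci_fixup_iommu section, a
matching pci_fixup_iommu value in enum pci_fixup_pass, and a call site
where the fwspec is allocated (this is not the exact patch):

/* include/linux/pci.h: new class next to DECLARE_PCI_FIXUP_FINAL */
#define DECLARE_PCI_FIXUP_IOMMU(vendor, device, hook)			\
	DECLARE_PCI_FIXUP_SECTION(.pci_fixup_iommu,			\
		hook, vendor, device, PCI_ANY_ID, 0, hook)

/* Run the pass once the fwspec exists, so quirks like
 * quirk_huawei_pcie_sva() above can see it (assumed call site,
 * e.g. in iommu_fwspec_init()):
 */
	if (dev_is_pci(dev))
		pci_fixup_device(pci_fixup_iommu, to_pci_dev(dev));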