Re: [PATCH v2 2/2] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in scalable mode

From: Samiullah Khawaja

Date: Thu Feb 26 2026 - 20:45:14 EST


On Tue, Feb 10, 2026 at 05:39:12PM -0600, Bjorn Helgaas wrote:
[+cc Alex, beginning of thread:
https://lore.kernel.org/all/20251211035946.2071-1-guojinhui.liam@xxxxxxxxxxxxx/]

On Wed, Dec 24, 2025 at 03:08:49AM +0000, Tian, Kevin wrote:
+Bjorn for guidance.

Sorry for the late response.

Quick context: the intel-iommu driver previously fixed a lockup on surprise
removal by checking pci_dev_is_disconnected(). But Jinhui still observed the
lockup in a setup where no interrupt reaches the PCI core on surprise
removal (so pci_dev_is_disconnected() stays false), hence the suggestion to
replace the check with pci_device_is_present() instead.

I think checking pci_dev_is_disconnected() or pci_device_is_present()
in drivers is usually bad practice because it's always racy, as you've
already pointed out.
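To make the race concrete, here is a toy single-threaded model (everything
here is invented for illustration: the *_stub helpers, the dev_present flag,
and flush_dev_iotlb() are not the real driver code). The presence check can
pass, the device can vanish in the window right after it, and the
invalidation then targets hardware that is no longer there:

```c
#include <stdbool.h>

/* Toy model of the race: device state can change between the
 * presence check and the ATS invalidation that relies on it. */
static bool dev_present = true;

static bool pci_device_is_present_stub(void)
{
	return dev_present;
}

static int issue_ats_invalidation_stub(void)
{
	/* If the device vanished, the invalidation never completes;
	 * model that as an error return instead of a hang. */
	return dev_present ? 0 : -1;
}

int flush_dev_iotlb(void)
{
	if (!pci_device_is_present_stub())
		return 0;		/* skip: device already gone */

	/* Window: surprise removal can happen right here, after the
	 * check passed but before the invalidation is issued. */
	dev_present = false;		/* simulate removal in the window */

	return issue_ats_invalidation_stub();	/* now hangs/times out */
}
```

No ordering of check vs. invalidation closes this window, which is why a
bounded wait on the completion side looks like the more robust direction.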

I don't think it's possible to avoid Invalidate Completion Timeouts in
general, so I think the real solution is to figure out how to
gracefully handle them without running into the lockup detection.

I assume the lockup is the loop in qi_submit_sync() where we wait for
QI_DONE with interrupts disabled. Maybe we need something like
watchdog_hardlockup_touch_cpu() there, along with a timeout in that
loop?

Looking at the AMD IOMMU driver, it has a 100 ms timeout in wait_on_sem()
that basically waits for the completion until the timeout expires. Is
this the expected behaviour per the specification, or should the IOMMU
wait for the full Invalidation Completion Timeout?

Reading the spec (PCIe r7.0, sec 10.1.1, Figure 10-4 and its notes), it
seems the device is allowed to keep sending translated TLPs targeting the
address regions being invalidated until the Invalidation Completion
Timeout (which could be 1-2 minutes, as Bjorn shared below).


The PCIe r7.0, sec 10.3.1, implementation note suggests the timeout
might be in the 1-2 minute range, which is pretty extreme, but if we
can at least handle timeouts gracefully, we can think about ways to
make them less likely, e.g., by coordinating with FLR and VFIO detach
(maybe the sort of thing Alex alluded to at
https://lore.kernel.org/all/20251223153534.0968cc15.alex@xxxxxxxxxxx).

Bjorn, is it common practice to fix this directly/only in drivers, or should
the PCI core be notified, e.g. by simulating a late removal event? Searching
the code suggests it's the former, but I'd like to confirm with you before
picking this fix...

I don't know exactly what it would look like to simulate a late
removal event, but it sounds like some kind of complicated
infrastructure that would still be only a 90% solution, which I
wouldn't recommend.

> From: Baolu Lu <baolu.lu@xxxxxxxxxxxxxxx>
> Sent: Tuesday, December 23, 2025 12:06 PM
>
> On 12/22/25 19:19, Jinhui Guo wrote:
> > On Thu, Dec 18, 2025 08:04:20AM +0000, Tian, Kevin wrote:
> >>> From: Jinhui Guo<guojinhui.liam@xxxxxxxxxxxxx>
> >>> Sent: Thursday, December 11, 2025 12:00 PM
> >>>
> >>> Commit 4fc82cd907ac ("iommu/vt-d: Don't issue ATS Invalidation
> >>> request when device is disconnected") relies on
> >>> pci_dev_is_disconnected() to skip ATS invalidation for
> >>> safely-removed devices, but it does not cover link-down caused
> >>> by faults, which can still hard-lock the system.
> >> According to the commit msg it actually tries to fix the hard lockup
> >> with surprise removal. For safe removal the device is not removed
> >> before invalidation is done:
> >>
> >> "
> >> For safe removal, device wouldn't be removed until the whole software
> >> handling process is done, it wouldn't trigger the hard lock up issue
> >> caused by too long ATS Invalidation timeout wait.
> >> "
> >>
> >> Can you help articulate the problem especially about the part
> >> 'link-down caused by faults"? What are those faults? How are
> >> they different from the said surprise removal in the commit
> >> msg to not set pci_dev_is_disconnected()?
> >>
> > Hi, Kevin, sorry for the delayed reply.
> >
> > A normal or surprise removal of a PCIe device on a hot-plug port normally
> > triggers an interrupt from the PCIe switch.
> >
> > We have, however, observed cases where no interrupt is generated when
> > the device suddenly loses its link; the behaviour is identical to
> > setting the Link Disable bit in the switch’s Link Control register
> > (offset 10h). Exactly what goes wrong in the LTSSM between the PCIe
> > switch and the endpoint remains unknown.
>
> In this scenario, the hardware has effectively vanished, yet the device
> driver remains bound and the IOMMU resources haven't been released. I’m
> just curious if this stale state could trigger issues in other places
> before the kernel fully realizes the device is gone? I’m not objecting
> to the fix. I'm just interested in whether this 'zombie' state creates
> risks elsewhere.
>