Re: [PATCH v1 2/2] iommu/arm-smmu-v3: Recover ATC invalidate timeouts

From: Nicolin Chen

Date: Thu Mar 05 2026 - 18:31:30 EST


On Thu, Mar 05, 2026 at 01:06:24PM -0800, Nicolin Chen wrote:
> On Thu, Mar 05, 2026 at 03:24:30PM +0000, Robin Murphy wrote:
> > On 2026-03-05 5:21 am, Nicolin Chen wrote:
> > > Though it'd be ideal to block it immediately in the ISR, it cannot be done
because an STE update would require another CFGI_STE command that couldn't
> > > finish in the context of an ISR handling a CMDQ error.
> >
> > Why not? As soon as we've acked GERRORN.CMDQ_ERR, command consumption will
> > resume and we're free to do whatever we fancy. Admittedly this probably
> > represents more work than we *want* to be doing in the SMMU's IRQ handler
> > (arguably even in a thread, since all the PCI housekeeping isn't really the
> > SMMU driver's own problem), but I would say the workqueue is a definite
> > design choice, not a functional requirement.
>
> Hmm, you are right, after writel(gerror, ARM_SMMU_GERRORN), it
> would be doable.
>
> Though SID->pdev conversion requires streams_mutex, which can't
> happen in ISR?

I forgot that (if we don't mind the core-level domain attachment
and the driver-level ATS state being temporarily out of sync with
the physical STE) SID alone was enough for a surgical STE update
in the ISR to unset EATS.

Thanks
Nicolin

> > > +static void arm_smmu_atc_recovery_worker(struct work_struct *work)
> > > +{
> > > +	struct arm_smmu_atc_recovery_param *param =
> > > +		container_of(work, struct arm_smmu_atc_recovery_param, work);
> > > +	struct pci_dev *pdev;
> > > +
> > > +	scoped_guard(mutex, &param->smmu->streams_mutex) {
> > > +		struct arm_smmu_master *master;
> > > +
> > > +		master = arm_smmu_find_master(param->smmu, param->sid);
> >
> > The only thing SMMU-specific about this seems to be the use of
> > arm_smmu_find_master() to resolve the device, which could just as well be
> > done upon submission anyway - why isn't this a generic IOMMU/IOPF mechanism?
>
> You mean treating this as a page fault? That's an interesting idea.
>
> So, the IOMMU-level fence could be done in arm_smmu_page_response().
>
> > > +static int arm_smmu_sched_atc_recovery(struct arm_smmu_device *smmu, u32 sid)
> > > +{
> > > +	struct arm_smmu_atc_recovery_param *param;
> > > +
> > > +	param = kzalloc_obj(*param, GFP_ATOMIC);
> > > +	if (!param)
> > > +		return -ENOMEM;
> > > +	param->smmu = smmu;
> > > +	param->sid = sid;
> > > +
> > > +	INIT_WORK(&param->work, arm_smmu_atc_recovery_worker);
> > > +	queue_work(system_unbound_wq, &param->work);
> >
> > Might it make more sense to have a single work item associated with the
> > list_head and use the latter as an actual queue, such that the work runs
> > until the list is empty, then here at submission time we do the list_add()
> > and schedule_work()? That could perhaps even be a global queue, since ATS
> > timeouts can hardly be expected to be a highly-contended high-performance
> > concern.
>
> That sounds like the IOPF implementation. Maybe inventing another
> IOMMU_FAULT_ATC_TIMEOUT to reuse the existing infrastructure would
> make things cleaner.
>
> > Right now it seems the list is barely doing anything - a "deduplication"
> > mechanism that only works if multiple resets for the same device happen to
> > have their work scheduled concurrently seems pretty ineffective.
>
> From my testing results, it does effectively block duplications, so
> I think it is meaningful.
>
> If I'm not wrong, the duplicated ATC timeout might come from this
> exact reset thread, as it did a CMDQ_OP_CFGI_STE while the CONS was
> still pointing to previous CMDQ_OP_ATC_INV.
>
> IIUC, the IOPF queue doesn't expect duplicated PRI events, so it
> might invoke the handler twice if there is a duplication. The core
> handling's list_add() might need a deduplication check.
>
> > > @@ -456,6 +555,27 @@ void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
> > >  	 * not to touch any of the shadow cmdq state.
> > >  	 */
> > >  	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
> > > +
> > > +	if (idx == CMDQ_ERR_CERROR_ATC_INV_IDX) {
> > > +		/*
> > > +		 * Since commands can be issued in batch making it difficult to
> > > +		 * identify which CMDQ_OP_ATC_INV actually timed out, the driver
> > > +		 * must ensure only CMDQ_OP_ATC_INV commands for the same device
> > > +		 * can be batched.
> >
> > But this *is* "the driver" - arm_smmu_atc_inv_domain() is literally further
> > down this same C file, and does not do what this comment is saying it must
> > do, so how are you expecting this to work correctly?
>
> Oh, that's a big miss..
>
> I imagined these changes to be based on the arm_smmu_invs series,
> where the batching function could break up the ATC invalidations per
> device. Here, it should have made some changes to
> arm_smmu_atc_inv_domain().
>
> Thanks
> Nicolin