Re: [PATCH v4 1/2] iommu/arm-smmu-v3: Fix CMDQ timeout warning
From: Jacob Pan
Date: Sun Nov 30 2025 - 18:06:26 EST
Hi Will,
On Tue, 25 Nov 2025 17:19:16 +0000
Will Deacon <will@xxxxxxxxxx> wrote:
> On Fri, Nov 14, 2025 at 09:17:17AM -0800, Jacob Pan wrote:
> > While polling for n spaces in the cmdq, the current code instead
> > checks whether the queue is full. If the queue is almost full but
> > lacks enough space (< n), the CMDQ timeout warning is never
> > triggered even when polling has exceeded the timeout limit.
> >
> > The existing arm_smmu_cmdq_poll_until_not_full() is neither an
> > efficient nor an ideal fit for its only caller,
> > arm_smmu_cmdq_issue_cmdlist():
> > - It starts a new timer on every single call, so the total wait is
> >   not bounded by the preset ARM_SMMU_POLL_TIMEOUT_US per issue.
> > - Its internal queue_full() check is redundant and cannot detect
> >   whether there is enough space for n commands.
> >
> > This patch polls for the availability of the exact space needed
> > instead of polling for not-full, and emits a timeout warning
> > accordingly.
> >
> > Fixes: 587e6c10a7ce ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion")
> > Co-developed-by: Yu Zhang <zhangyu1@xxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Yu Zhang <zhangyu1@xxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Jacob Pan <jacob.pan@xxxxxxxxxxxxxxxxxxx>
>
> I'm assuming you're seeing problems with an emulated command queue?
> Any chance you could make that bigger?
>
This is not related to queue size; it is a logic issue that shows up
whenever the queue is nearly full.
> > @@ -804,12 +794,13 @@ int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> >  	local_irq_save(flags);
> >  	llq.val = READ_ONCE(cmdq->q.llq.val);
> >  	do {
> > +		struct arm_smmu_queue_poll qp;
> >  		u64 old;
> >
> > +		queue_poll_init(smmu, &qp);
> >  		while (!queue_has_space(&llq, n + sync)) {
> >  			local_irq_restore(flags);
> > -			if (arm_smmu_cmdq_poll_until_not_full(smmu, cmdq, &llq))
> > -				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> > +			arm_smmu_cmdq_poll(smmu, cmdq, &llq, &qp);
> >
>
> Isn't this broken for wfe-based polling? The SMMU only generates the
> wake-up event when the queue becomes non-full.
I don't see this as a problem, since any interrupt, such as a
scheduler tick, can be a wake-up event for WFE, no?
I have also tested this with WFE on bare metal with no issues. Hyper-V
VMs do not support WFE.