Re: [PATCH v5 2/3] iommu/arm-smmu-v3: Fix CMDQ timeout warning

From: Jacob Pan

Date: Fri Dec 12 2025 - 15:05:34 EST


Hi Will,

On Wed, 10 Dec 2025 12:16:19 +0900
Will Deacon <will@xxxxxxxxxx> wrote:

> On Mon, Dec 08, 2025 at 01:28:56PM -0800, Jacob Pan wrote:
> > @@ -781,12 +771,21 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> >  	local_irq_save(flags);
> >  	llq.val = READ_ONCE(cmdq->q.llq.val);
> >  	do {
> > +		struct arm_smmu_queue_poll qp;
> >  		u64 old;
> >
> > +		/*
> > +		 * Poll without WFE because:
> > +		 * 1) Running out of space should be rare. Power saving is
> > +		 *    not an issue.
> > +		 * 2) WFE depends on queue full break events, which occur
> > +		 *    only when the queue is full, but here we're polling
> > +		 *    for sufficient space, not just the queue full
> > +		 *    condition.
> > +		 */
>
> I don't think this is reasonable; we should be able to use wfe
> instead of polling on hardware that supports it and that is an
> important power-saving measure in mobile parts.
>
After an offline discussion, I now understand that WFE essentially
stops the CPU clock, so entering it is almost always a net energy win.
This differs from C-state/idle-state transitions, where the
energy-saving break-even point depends on how long the CPU stays
idle. I had previously assumed the power savings were not guaranteed
because wake events are unpredictable (e.g., their timing relative to
scheduler ticks or queue-full conditions).

So I agree we should leverage WFE as much as we can here.

> If this is really an issue, we could take a spinlock around the
> command-queue allocation loop for hardware with small queue sizes
> relative to the number of CPUs, but it's not clear to me that we need
> to do anything at all. I'm happy with the locking change in patch 3.
>
> If we apply _only_ the locking change in the next patch, does that
> solve the reported problem for you?
Yes, please take patch #3 on its own; it takes care of the functional
problem.

Thanks,

Jacob