RE: [PATCH v1] PCI: imx6: Add force_suspend flag to override L1SS suspend skip

From: Hongxing Zhu

Date: Tue Apr 07 2026 - 22:38:46 EST


> -----Original Message-----
> From: mani@xxxxxxxxxx <mani@xxxxxxxxxx>
> Sent: April 7, 2026 15:24
> To: Hongxing Zhu <hongxing.zhu@xxxxxxx>
> Cc: Bjorn Helgaas <helgaas@xxxxxxxxxx>; Frank Li <frank.li@xxxxxxx>;
> jingoohan1@xxxxxxxxx; l.stach@xxxxxxxxxxxxxx; lpieralisi@xxxxxxxxxx;
> kwilczynski@xxxxxxxxxx; robh@xxxxxxxxxx; bhelgaas@xxxxxxxxxx;
> s.hauer@xxxxxxxxxxxxxx; kernel@xxxxxxxxxxxxxx; festevam@xxxxxxxxx;
> linux-pci@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> imx@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; stable@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH v1] PCI: imx6: Add force_suspend flag to override L1SS
> suspend skip
>
> On Tue, Apr 07, 2026 at 03:31:57AM +0000, Hongxing Zhu wrote:
> > > -----Original Message-----
> > > From: mani@xxxxxxxxxx <mani@xxxxxxxxxx>
> > > Sent: April 4, 2026 1:03
> > > To: Hongxing Zhu <hongxing.zhu@xxxxxxx>
> > > Cc: Bjorn Helgaas <helgaas@xxxxxxxxxx>; Frank Li <frank.li@xxxxxxx>;
> > > jingoohan1@xxxxxxxxx; l.stach@xxxxxxxxxxxxxx; lpieralisi@xxxxxxxxxx;
> > > kwilczynski@xxxxxxxxxx; robh@xxxxxxxxxx; bhelgaas@xxxxxxxxxx;
> > > s.hauer@xxxxxxxxxxxxxx; kernel@xxxxxxxxxxxxxx; festevam@xxxxxxxxx;
> > > linux-pci@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> > > imx@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > stable@xxxxxxxxxxxxxxx
> > > Subject: Re: [PATCH v1] PCI: imx6: Add force_suspend flag to
> > > override L1SS suspend skip
> > >
> > > On Tue, Mar 24, 2026 at 02:01:58AM +0000, Hongxing Zhu wrote:
> > > > > -----Original Message-----
> > > > > From: Bjorn Helgaas <helgaas@xxxxxxxxxx>
> > > > > Sent: March 24, 2026 6:09
> > > > > To: Hongxing Zhu <hongxing.zhu@xxxxxxx>
> > > > > Cc: Frank Li <frank.li@xxxxxxx>; jingoohan1@xxxxxxxxx;
> > > > > l.stach@xxxxxxxxxxxxxx; lpieralisi@xxxxxxxxxx;
> > > > > kwilczynski@xxxxxxxxxx; mani@xxxxxxxxxx; robh@xxxxxxxxxx;
> > > > > bhelgaas@xxxxxxxxxx; s.hauer@xxxxxxxxxxxxxx;
> > > > > kernel@xxxxxxxxxxxxxx; festevam@xxxxxxxxx;
> > > > > linux-pci@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> > > > > imx@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > > > stable@xxxxxxxxxxxxxxx
> > > > > Subject: Re: [PATCH v1] PCI: imx6: Add force_suspend flag to
> > > > > override L1SS suspend skip
> > > > >
> > > > > On Wed, Mar 18, 2026 at 02:55:45AM +0000, Hongxing Zhu wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Bjorn Helgaas <helgaas@xxxxxxxxxx>
> > > > > > ... [messed up quoting]
> > > > >
> > > > > > > On Tue, Mar 17, 2026 at 02:12:56PM +0800, Richard Zhu wrote:
> > > > > > > > Add a force_suspend flag to allow platform drivers to
> > > > > > > > force the PCIe link into L2 state during suspend, even
> > > > > > > > when L1SS (ASPM L1
> > > > > > > > Sub-States) is enabled.
> > > > > > > >
> > > > > > > > By default, the DesignWare PCIe host controller skips L2
> > > > > > > > suspend when L1SS is supported to meet low resume latency
> > > > > > > > requirements for devices like NVMe. However, some
> > > > > > > > platforms like i.MX PCIe need to enter L2 state for proper
> > > > > > > > power management regardless of L1SS
> > > > > support.
> > > > > > > >
> > > > > > > > Enable force_suspend for i.MX PCIe to ensure the link
> > > > > > > > enters
> > > > > > > > L2 during system suspend.
> > > > > > >
> > > > > > > I'm a little bit skeptical about this.
> > > > > > >
> > > > > > > What exactly does a "low resume latency requirement" mean?
> > > > > > > Is this an actual functional requirement that's special to
> > > > > > > NVMe, or is it just the desire for low resume latency that
> > > > > > > everybody has for all devices?
> > > > > >
> > > > > > From my understanding, L1SS is characterized by lower resume
> > > > > > latency compared to the L2 or L3 states.
> > > > > >
> > > > > > It can be used with all devices to avoid frequent power on/off
> > > > > > cycles, which for NVMe can also extend the device's service life.
> > > > >
> > > > > All the above applies to all platforms, so it's not an argument
> > > > > for i.MX-specific code here.
> > > > >
> > > > Hi Bjorn:
> > > > Thanks for your kind review.
> > > > Yes, that's true.
> > > > > > > Is there something special about i.MX here? Why do we want
> > > > > > > i.MX to be different from other host controllers?
> > > > > >
> > > > > > i.MX PCIe loses power supply during Deep Sleep Mode (DSM),
> > > > > > requiring full reinitialization after system wake-up.
> > > > >
> > > > > I don't know what DSM means in PCIe or how it would help justify
> > > > > this change.
> > > > >
> > > > i.MX PCIe power is gated off during suspend, requiring a full
> > > > reinitialization on resume.
> > > >
> > >
> > > Is this unconditional behavior? What if the PCIe device is
> > > configured as a wakeup source (WOL, WoWLAN, etc.)? And if you
> > > connect an NVMe device, this behavior will result in resume failure,
> > > as the NVMe driver expects power to be retained if ASPM is supported.
> >
> > Yes, this is unconditional behavior. The i.MX PCIe controller
> > exclusively supports sideband wakeup mechanisms, which operate
> > independently of the PCIe link state and device power configuration.
> >
>
> I believe you are referring to WAKE# as the sideband wakeup mechanism. If so,
> both the host and the device have to support WAKE#.
>
Exactly.

> > For devices configured as wakeup sources (WOL, WOW, etc.): The
> > sideband wakeup path bypasses the standard PCIe power management, so
> > these configurations do not impact the i.MX PCIe RC controller's
> > suspend/resume behavior.
> >
>
> Once the user enables wakeup for a device, the PCI core will configure PME_EN
> only if the device supports toggling WAKE# from D3cold. So the wakeup
> functionality depends on the device too, not just the RC.
>
Yes, you're right.

> > For NVMe devices with ASPM: While NVMe drivers typically expect power
> > retention when ASPM is enabled, the i.MX implementation's sideband
> > wakeup mechanism operates through a separate signaling path. The
> > wakeup functionality does not depend on maintaining PCIe link power,
> > thus avoiding conflicts with NVMe power state expectations.
> >
>
> There is no relation between WAKE# and NVMe. NVMe is a passive device, so
> it doesn't support WAKE#. With this patch alone, the NVMe driver won't
> resume (if ASPM is enabled). You need to tell the NVMe driver to prepare for
> power loss too. Maybe this patch can help you:
> https://lore.kernel.org/all/20251231162126.7728-1-manivannan.sadhasivam@oss.qualcomm.com/
>
> But that patch will only help if your platform supports S2RAM through PSCI.
Thanks a lot, this patch is helpful, since i.MX platforms support S2RAM
through PSCI.

One additional note regarding NVMe: ASPM (Active State Power Management) is
disabled locally on i.MX platforms for NVMe devices. This decision was made
after encountering a system hang issue similar to the one reported by Hans a
few months ago in his patch listed below.
https://lore.kernel.org/linux-nvme/20250502032051.920990-1-hans.zhang@xxxxxxxxxxx/

Best Regards
Richard Zhu
>
> - Mani
>
> --
> மணிவண்ணன் சதாசிவம்