Re: [PATCH v1] PCI: imx6: Add force_suspend flag to override L1SS suspend skip
From: mani@xxxxxxxxxx
Date: Tue Apr 07 2026 - 03:24:48 EST
On Tue, Apr 07, 2026 at 03:31:57AM +0000, Hongxing Zhu wrote:
> > -----Original Message-----
> > From: mani@xxxxxxxxxx <mani@xxxxxxxxxx>
> > Sent: April 4, 2026 1:03
> > To: Hongxing Zhu <hongxing.zhu@xxxxxxx>
> > Cc: Bjorn Helgaas <helgaas@xxxxxxxxxx>; Frank Li <frank.li@xxxxxxx>;
> > jingoohan1@xxxxxxxxx; l.stach@xxxxxxxxxxxxxx; lpieralisi@xxxxxxxxxx;
> > kwilczynski@xxxxxxxxxx; robh@xxxxxxxxxx; bhelgaas@xxxxxxxxxx;
> > s.hauer@xxxxxxxxxxxxxx; kernel@xxxxxxxxxxxxxx; festevam@xxxxxxxxx;
> > linux-pci@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> > imx@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; stable@xxxxxxxxxxxxxxx
> > Subject: Re: [PATCH v1] PCI: imx6: Add force_suspend flag to override L1SS
> > suspend skip
> >
> > On Tue, Mar 24, 2026 at 02:01:58AM +0000, Hongxing Zhu wrote:
> > > > -----Original Message-----
> > > > From: Bjorn Helgaas <helgaas@xxxxxxxxxx>
> > > > Sent: March 24, 2026 6:09
> > > > To: Hongxing Zhu <hongxing.zhu@xxxxxxx>
> > > > Cc: Frank Li <frank.li@xxxxxxx>; jingoohan1@xxxxxxxxx;
> > > > l.stach@xxxxxxxxxxxxxx; lpieralisi@xxxxxxxxxx;
> > > > kwilczynski@xxxxxxxxxx; mani@xxxxxxxxxx; robh@xxxxxxxxxx;
> > > > bhelgaas@xxxxxxxxxx; s.hauer@xxxxxxxxxxxxxx; kernel@xxxxxxxxxxxxxx;
> > > > festevam@xxxxxxxxx; linux-pci@xxxxxxxxxxxxxxx;
> > > > linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> > > > imx@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > > stable@xxxxxxxxxxxxxxx
> > > > Subject: Re: [PATCH v1] PCI: imx6: Add force_suspend flag to
> > > > override L1SS suspend skip
> > > >
> > > > On Wed, Mar 18, 2026 at 02:55:45AM +0000, Hongxing Zhu wrote:
> > > > > > -----Original Message-----
> > > > > > From: Bjorn Helgaas <helgaas@xxxxxxxxxx>
> > > > > ... [messed up quoting]
> > > >
> > > > > > On Tue, Mar 17, 2026 at 02:12:56PM +0800, Richard Zhu wrote:
> > > > > > > Add a force_suspend flag to allow platform drivers to force
> > > > > > > the PCIe link into L2 state during suspend, even when L1SS
> > > > > > > (ASPM L1
> > > > > > > Sub-States) is enabled.
> > > > > > >
> > > > > > > By default, the DesignWare PCIe host controller skips L2
> > > > > > > suspend when L1SS is supported to meet low resume latency
> > > > > > > requirements for devices like NVMe. However, some platforms
> > > > > > > like i.MX PCIe need to enter L2 state for proper power
> > > > > > > management regardless of L1SS
> > > > support.
> > > > > > >
> > > > > > > Enable force_suspend for i.MX PCIe to ensure the link enters
> > > > > > > L2 during system suspend.
> > > > > >
> > > > > > I'm a little bit skeptical about this.
> > > > > >
> > > > > > What exactly does a "low resume latency requirement" mean? Is
> > > > > > this an actual functional requirement that's special to NVMe, or
> > > > > > is it just the desire for low resume latency that everybody has
> > > > > > for all devices?
> > > > >
> > > > > From my understanding, L1SS mode is characterized by lower latency
> > > > > when compared to L2 or L3 modes.
> > > > >
> > > > > It can be used on all devices, avoiding frequent power on/off cycles,
> > > > > which can also extend the service life of devices like NVMe.
> > > >
> > > > All the above applies to all platforms, so it's not an argument for
> > > > i.MX-specific code here.
> > > >
> > > Hi Bjorn:
> > > Thanks for your kind review.
> > > Yes, it is.
> > > > > > Is there something special about i.MX here? Why do we want i.MX
> > > > > > to be different from other host controllers?
> > > > >
> > > > > i.MX PCIe loses power supply during Deep Sleep Mode (DSM),
> > > > > requiring full reinitialization after system wake-up.
> > > >
> > > > I don't know what DSM means in PCIe or how it would help justify
> > > > this change.
> > > >
> > > i.MX PCIe power is gated off during suspend, requiring full
> > > reinitialization on resume.
> > >
> >
> > Is this an unconditional behavior? What if the PCIe device is configured as a
> > wakeup source like WOL, WOW? And if you connect NVMe, this behavior will
> > result in resume failure as NVMe driver expects the power to be retained if
> > ASPM is supported.
>
> Yes, this is unconditional behavior. The i.MX PCIe controller exclusively
> supports sideband wakeup mechanisms, which operate independently of the
> PCIe link state and device power configuration.
>
I believe you are referring to WAKE# as the sideband wakeup mechanism. If so,
both host and device have to support WAKE#.
> For devices configured as wakeup sources (WOL, WOW, etc.): The sideband
> wakeup path bypasses the standard PCIe power management, so these
> configurations do not impact the i.MX PCIe RC controller's suspend/resume
> behavior.
>
Once the user enables wakeup for a device, the PCI core will configure PME_EN
only if the device supports asserting WAKE# from D3cold. So the wakeup
functionality depends on the device too, not just the RC.
> For NVMe devices with ASPM: While NVMe drivers typically expect power
> retention when ASPM is enabled, the i.MX implementation's sideband wakeup
> mechanism operates through a separate signaling path. The wakeup functionality
> does not depend on maintaining PCIe link power, thus avoiding conflicts with
> NVMe power state expectations.
>
There is no relation between WAKE# and NVMe. NVMe is a passive device, so it
doesn't support WAKE#. With this patch alone, the NVMe driver won't resume (if
ASPM is enabled). You need to tell the NVMe driver to prepare for power loss
too. Maybe this patch can help you:
https://lore.kernel.org/all/20251231162126.7728-1-manivannan.sadhasivam@xxxxxxxxxxxxxxxx/
But that patch will only help if your platform supports S2RAM through PSCI.
- Mani
--
மணிவண்ணன் சதாசிவம்