Re: [PATCH v2 1/2] PCI/ASPM: Override the ASPM and Clock PM states set by BIOS for devicetree platforms
From: Claudiu Beznea
Date: Mon Feb 16 2026 - 12:20:16 EST
Hi,
On 1/22/26 19:01, Manivannan Sadhasivam wrote:
On Thu, Jan 22, 2026 at 09:29:03AM -0600, Bjorn Helgaas wrote:
[+cc NVMe folks]
On Thu, Jan 22, 2026 at 12:12:42PM +0000, Jon Hunter wrote:
...
Since this commit was added in Linux v6.18, I have been observing suspend
test failures on some of our boards. The suspend test suspends the devices
for 20 secs, and before this change the board would resume in ~27 secs
(including the 20 sec sleep). After this change the board takes over 80
secs to resume, and this triggered a failure.
Looking at the logs, I can see it is the NVMe device on the board that is
having an issue, and I see the reset failing ...
[ 945.754939] r8169 0007:01:00.0 enP7p1s0: Link is Up - 1Gbps/Full -
flow control rx/tx
[ 1002.467432] nvme nvme0: I/O tag 12 (400c) opcode 0x9 (Admin Cmd) QID
0 timeout, reset controller
[ 1002.493713] nvme nvme0: 12/0/0 default/read/poll queues
[ 1003.050448] nvme nvme0: ctrl state 1 is not RESETTING
[ 1003.050481] OOM killer enabled.
[ 1003.054035] nvme nvme0: Disabling device after reset failure: -19
From the above timestamps the delay is coming from the NVMe device. I see this
issue on several boards with different NVMe devices, and I can work around
this by disabling ASPM L0s/L1 for these devices ...
DECLARE_PCI_FIXUP_HEADER(0x15b7, 0x5011, quirk_disable_aspm_l0s_l1);
DECLARE_PCI_FIXUP_HEADER(0x15b7, 0x5036, quirk_disable_aspm_l0s_l1);
DECLARE_PCI_FIXUP_HEADER(0x1b4b, 0x1322, quirk_disable_aspm_l0s_l1);
DECLARE_PCI_FIXUP_HEADER(0xc0a9, 0x540a, quirk_disable_aspm_l0s_l1);
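(The body of quirk_disable_aspm_l0s_l1() is not quoted above; a minimal
sketch of what such a fixup could look like, assuming it simply calls the
existing pci_disable_link_state() helper, would be:

	/* Assumed fixup body: disable ASPM L0s and L1 for the matched device */
	static void quirk_disable_aspm_l0s_l1(struct pci_dev *dev)
	{
		pci_info(dev, "Disabling ASPM L0s/L1\n");
		pci_disable_link_state(dev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
	}

Only the function name is taken from the declarations above; the body is an
assumption.)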
I am curious if you have seen any similar issues?
Other PCIe devices seem to be OK (like the Realtek r8169), but only
the NVMe device is having issues. So I am trying to figure out the best way
to resolve this.
For context, "this commit" refers to f3ac2ff14834, modified by
df5192d9bb0e:
f3ac2ff14834 ("PCI/ASPM: Enable all ClockPM and ASPM states for devicetree platforms")
df5192d9bb0e ("PCI/ASPM: Enable only L0s and L1 for devicetree platforms")
The fact that this suspend issue only affects NVMe reminds me of the
code in dw_pcie_suspend_noirq() [1] that bails out early if L1 is
enabled because of some NVMe expectation:
dw_pcie_suspend_noirq()
{
	...
	/*
	 * If L1SS is supported, then do not put the link into L2 as some
	 * devices such as NVMe expect low resume latency.
	 */
	if (dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL) & PCI_EXP_LNKCTL_ASPM_L1)
		return 0;
	...
That suggests there's some NVMe/ASPM interaction that the PCI core
doesn't understand yet.
We have this check in place since the NVMe driver keeps the device in D0 and
expects the link to be in L1SS on platforms that do not pass the checks below:
	if (pm_suspend_via_firmware() || !ctrl->npss ||
	    !pcie_aspm_enabled(pdev) ||
	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
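For reference, here is a rough sketch of how that check is used in
nvme_suspend() in drivers/nvme/host/pci.c (details may vary between kernel
versions). When any of the conditions is true, the driver falls back to a
full controller shutdown and lets the PCI core put the device into D3;
otherwise it keeps the device in D0 and relies on ASPM/L1SS:

	if (pm_suspend_via_firmware() || !ctrl->npss ||
	    !pcie_aspm_enabled(pdev) ||
	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
		return nvme_disable_prepare_reset(ndev, true);

	/*
	 * Otherwise: keep the device in D0, program a non-operational NVMe
	 * power state and leave link power management to ASPM/L1SS.
	 */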
We noticed a similar issue with the Renesas RZ/G3S host driver and NVMe devices. We have identified this problem on 2 SoCs so far (RZ/G3S and RZ/G3E), both present on SoM modules, and the SoM modules can be connected to the same carrier board where the PCIe signals are routed and the connectors exist. On the carrier board we have 2 connectors where we can attach NVMe devices, one M.2 Key B and one PCIe x4 connector (https://www.amphenol-cs.com/product/10061913111plf.html).
The issue described in this thread is reproducible for us only after suspend, and only for the NVMe device connected to the PCIe x4 connector. The device works correctly right after boot. During suspend, power to most SoC components (including PCIe) is lost, but the endpoints remain powered.
The issue is not reproducible if the following command is executed before suspend: echo performance > /sys/module/pcie_aspm/parameters/policy
The difference we identified in the signals routed from the SoC to the on-board connectors is CLKREQ#: this signal is only connected to the PCIe x4 slot.
On RZ/G3E, CLKREQ# is configured as an individual GPIO pin. On RZ/G3S it is muxed with the PCIe function by the pin controller. On RZ/G3E we tried not configuring the CLKREQ# pin at all, and with that the NVMe device connected to the PCIe x4 slot started working even after suspend. We cannot reproduce the same behavior on RZ/G3S.
Initially, we considered that we might have to update the existing code to do specific configuration for boards where CLKREQ# is not connected (through the supports-clkreq DT property that some controllers already use); a rough sketch of that idea is below.
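To make the idea concrete, here is a minimal sketch of what such a check
could look like in a host controller driver, assuming a probe-time
of_property_read_bool() on the controller node. The struct and function
names below are purely illustrative, not taken from an existing driver:

	#include <linux/of.h>
	#include <linux/device.h>

	struct example_pcie {
		bool supports_clkreq;
		/* ... controller specific fields ... */
	};

	/*
	 * Hypothetical sketch: record whether the board wires CLKREQ# so the
	 * driver can later decide whether to enable CLKREQ#-dependent
	 * low-power states (e.g. clock PM, L1 substates).
	 */
	static void example_pcie_parse_dt(struct example_pcie *pcie,
					  struct device *dev)
	{
		pcie->supports_clkreq = of_property_read_bool(dev->of_node,
							      "supports-clkreq");
		if (!pcie->supports_clkreq)
			dev_info(dev, "CLKREQ# not wired, keeping CLKREQ#-dependent states disabled\n");
	}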
Currently, the manual is unclear on how to control CLKREQ#.
Apart from the suggestions mentioned in [1], could you please let me know if you have any others?
Thank you,
Claudiu
[1] https://lore.kernel.org/all/unc5zefwndgcv7wufaezz3gkg3qtaymkjlmymhyqdqwzn3wybl@ow2rhbyt772h/