Re: [PATCH 2/2] PCI: vmd: Enable ASPM for mobile platforms

From: Bjorn Helgaas
Date: Fri Oct 02 2020 - 18:18:34 EST


On Wed, Sep 30, 2020 at 04:24:54PM +0800, Kai-Heng Feng wrote:
> BIOS may not be able to program ASPM for links behind VMD, preventing
> the Intel SoC from entering deeper power saving states.

It's not a question of BIOS not being *able* to configure ASPM. I
think BIOS could do it, at least in principle, if it had a driver for
VMD. Actually, it probably *does* include some sort of VMD code
because it sounds like BIOS can assign some Root Ports to appear
either as regular Root Ports or behind the VMD.

Since this issue is directly related to the unusual VMD topology, I
think it would be worth a quick recap here. Maybe something like:

VMD is a Root Complex Integrated Endpoint that acts as a host bridge
to a secondary PCIe domain. BIOS can reassign one or more Root
Ports to appear within a VMD domain instead of the primary domain.

However, BIOS may not enable ASPM for the hierarchies behind a VMD,
...

(This is based on the commit log from 185a383ada2e ("x86/PCI: Add
driver for Intel Volume Management Device (VMD)")).

But we still have the problem that CONFIG_PCIEASPM_DEFAULT=y means
"use the BIOS defaults", and this patch would make it so we use the
BIOS defaults *except* for things behind VMD.

- Why should VMD be a special case?

- How would we document such a special case?

- If we built with CONFIG_PCIEASPM_POWERSAVE=y, would that solve the
SoC power state problem?

- What issues would CONFIG_PCIEASPM_POWERSAVE=y introduce?
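
(For reference, the boot-time policy comes straight from Kconfig; from
memory, drivers/pci/pcie/aspm.c does roughly the following, so treat
this as a sketch rather than the exact source:)

  #define POLICY_DEFAULT		0	/* BIOS default setting */
  #define POLICY_PERFORMANCE		1	/* high performance */
  #define POLICY_POWERSAVE		2	/* high power saving */
  #define POLICY_POWER_SUPERSAVE	3	/* possibly even more power saving */

  #ifdef CONFIG_PCIEASPM_PERFORMANCE
  static int aspm_policy = POLICY_PERFORMANCE;
  #elif defined CONFIG_PCIEASPM_POWERSAVE
  static int aspm_policy = POLICY_POWERSAVE;
  #elif defined CONFIG_PCIEASPM_POWER_SUPERSAVE
  static int aspm_policy = POLICY_POWER_SUPERSAVE;
  #else
  static int aspm_policy;	/* POLICY_DEFAULT: leave the BIOS setting alone */
  #endif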

Link to previous discussion for the archives:
https://lore.kernel.org/r/49A36179-D336-4A5E-8B7A-A632833AE6B2@xxxxxxxxxxxxx

> So enable ASPM for links behind VMD to increase battery life.
>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@xxxxxxxxxxxxx>
> ---
> drivers/pci/controller/vmd.c | 22 +++++++++++++++++++++-
> 1 file changed, 21 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index f69ef8c89f72..058fdef9c566 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -417,6 +417,22 @@ static int vmd_find_free_domain(void)
> return domain + 1;
> }
>
> +static const struct pci_device_id vmd_mobile_bridge_tbl[] = {
> + { PCI_VDEVICE(INTEL, 0x9a09) },
> + { PCI_VDEVICE(INTEL, 0xa0b0) },
> + { PCI_VDEVICE(INTEL, 0xa0bc) },
> + { }
> +};
> +
> +static int vmd_enable_aspm(struct device *dev, void *data)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);
> +
> + return 0;
> +}
> +
> static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> {
> struct pci_sysdata *sd = &vmd->sysdata;
> @@ -603,8 +619,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> * and will fail pcie_bus_configure_settings() early. It can instead be
> * run on each of the real root ports.
> */
> - list_for_each_entry(child, &vmd->bus->children, node)
> + list_for_each_entry(child, &vmd->bus->children, node) {
> + if (pci_match_id(vmd_mobile_bridge_tbl, child->self))
> + device_for_each_child(&child->self->dev, NULL, vmd_enable_aspm);

Wouldn't something like this be sufficient?

list_for_each_entry(dev, &child->devices, bus_list)
	vmd_enable_aspm(&dev->dev, NULL);
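
In context that would look roughly like this (just a sketch, keeping
vmd_enable_aspm() with the signature from the patch above):

	list_for_each_entry(child, &vmd->bus->children, node) {
		if (pci_match_id(vmd_mobile_bridge_tbl, child->self)) {
			struct pci_dev *dev;

			/* walk the devices on this root port's secondary bus */
			list_for_each_entry(dev, &child->devices, bus_list)
				vmd_enable_aspm(&dev->dev, NULL);
		}

		pcie_bus_configure_settings(child);
	}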

> pcie_bus_configure_settings(child);
> + }
>
> pci_bus_add_devices(vmd->bus);
>
> --
> 2.17.1
>