Re: [PATCH v4] PCI: Enable runtime pm of the host bridge

From: Rafael J. Wysocki
Date: Thu Sep 12 2024 - 07:57:44 EST


On Thu, Sep 12, 2024 at 1:52 PM Krishna Chaitanya Chundru
<quic_krichai@xxxxxxxxxxx> wrote:
>
>
>
> On 9/12/2024 5:12 PM, Rafael J. Wysocki wrote:
> > On Fri, Aug 16, 2024 at 10:45 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> >>
> >> [+cc Rafael, Mayank, Markus (when people have commented on previous
> >> versions, please cc them on new versions). I'm still hoping Rafael
> >> will have a chance to chime in]
> >>
> >> On Mon, Jul 08, 2024 at 10:19:40AM +0530, Krishna chaitanya chundru wrote:
> >>> The Controller driver is the parent device of the PCIe host bridge,
> >>> PCI-PCI bridge and PCIe endpoint as shown below.
> >>>
> >>> PCIe controller(Top level parent & parent of host bridge)
> >>> |
> >>> v
> >>> PCIe Host bridge(Parent of PCI-PCI bridge)
> >>> |
> >>> v
> >>> PCI-PCI bridge(Parent of endpoint driver)
> >>> |
> >>> v
> >>> PCIe endpoint driver
> >>>
> >>> Now, when the controller device goes to runtime suspend, PM framework
> >>> will check the runtime PM state of the child device (host bridge) and
> >>> will find it to be disabled.
> >>
> >> I guess "will find it to be disabled" means the child (host bridge)
> >> has runtime PM disabled, not that the child device is disabled, right?
> >>
> >>> So it will allow the parent (controller
> >>> device) to go to runtime suspend. Only if the child device's state was
> >>> 'active' it will prevent the parent to get suspended.
> >>
> >> Can we include a hint like the name of the function where the PM
> >> framework decides this? Maybe this is rpm_check_suspend_allowed()?
> >>
> >> rpm_check_suspend_allowed() checks ".ignore_children", which sounds
> >> like it could be related, and AFAICS .ignore_children == false here,
> >> so .child_count should be relevant.
> >>
> >> But I'm still confused about why we can runtime suspend a bridge that
> >> leads to devices that are not suspended.
> >
> > That should only be possible if runtime PM is disabled for those devices.
> >
> >>> Since runtime PM is disabled for host bridge, the state of the child
> >>> devices under the host bridge is not taken into account by PM framework
> >>> for the top level parent, PCIe controller. So PM framework, allows
> >>> the controller driver to enter runtime PM irrespective of the state
> >>> of the devices under the host bridge. And this causes the topology
> >>> breakage and also possible PM issues like controller driver goes to
> >>> runtime suspend while endpoint driver is doing some transfers.
> >
> > Why is it a good idea to enable runtime PM for a PCIe controller?
> >
> PCIe controller can do certain actions like keeping low power state,
> remove bandwidth votes etc as part of runtime suspend as when we know
> the client drivers already runtime suspended.

Surely they can, but enabling runtime PM for devices that have
children with runtime PM disabled and where those children have
children with runtime PM enabled is a bug.

> >> What does "topology breakage" mean? Do you mean something other than
> >> the fact that an endpoint DMA might fail if the controller is
> >> suspended?
> >>
> >>> So enable runtime PM for the host bridge, so that controller driver
> >>> goes to suspend only when all child devices goes to runtime suspend.
> >
> > This by itself makes sense to me.
> >
> >> IIUC, the one-sentence description here is that previously, the PCI
> >> host controller could be runtime suspended even while an endpoint was
> >> active, which caused DMA failures. And this patch changes that so the
> >> host controller is only runtime suspended after the entire hierarchy
> >> below it is runtime suspended? Is that right?
> >>
> >>> Signed-off-by: Krishna chaitanya chundru <quic_krichai@xxxxxxxxxxx>
> >>> ---
> >>> Changes in v4:
> >>
> >> (Note: v4 applies cleanly to v6.10-rc1 and to v6.11-rc1 with a small
> >> offset).
> >>
> >>> - Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
> >>> - Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@xxxxxxxxxxx/
> >>> Changes in v3:
> >>> - Moved the runtime API call's from the dwc driver to PCI framework
> >>> as it is applicable for all (suggested by mani)
> >>> - Updated the commit message.
> >>> - Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@xxxxxxxxxxx
> >>> Changes in v2:
> >>> - Updated commit message as suggested by mani.
> >>> - Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@xxxxxxxxxxx
> >>> ---
> >>>
> >>> ---
> >>> drivers/pci/probe.c | 4 ++++
> >>> 1 file changed, 4 insertions(+)
> >>>
> >>> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> >>> index 8e696e547565..fd49563a44d9 100644
> >>> --- a/drivers/pci/probe.c
> >>> +++ b/drivers/pci/probe.c
> >>> @@ -3096,6 +3096,10 @@ int pci_host_probe(struct pci_host_bridge *bridge)
> >>> }
> >>>
> >>> pci_bus_add_devices(bus);
> >>> +
> >>> + pm_runtime_set_active(&bridge->dev);
> >>> + devm_pm_runtime_enable(&bridge->dev);
> >>> +
> >>> return 0;
> >>> }
> >>> EXPORT_SYMBOL_GPL(pci_host_probe);
> >
> > This will effectively prevent the host bridge from being
> > runtime-suspended at all IIUC, so the PCIe controller will never
> > suspend too after this change.
> >
> No we are having a different observations here.
> Without this change the PCIe controller driver can go to runtime suspend
> without considering the state of the client drivers i.e even when the
> client drivers are active.
> After adding this change we see the pcie controller is getting runtime
> suspended only after the client drivers are runtime suspended which is
> the expected behaviour.

OK, but then when and how is it going to be resumed?