Re: [PATCH] nvme-pci: Save PCI state before putting drive into deepest state

From: Rafael J. Wysocki
Date: Wed Sep 18 2019 - 18:04:12 EST


On Thu, Sep 19, 2019 at 12:00 AM <Mario.Limonciello@xxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: Rafael J. Wysocki <rjw@xxxxxxxxxxxxx>
> > Sent: Wednesday, September 18, 2019 4:57 PM
> > To: Limonciello, Mario
> > Cc: kbusch@xxxxxxxxxx; axboe@xxxxxx; hch@xxxxxx; sagi@xxxxxxxxxxx; linux-
> > nvme@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Hong, Ryan; Wang,
> > Crag; sjg@xxxxxxxxxx; Dominguez, Jared; linux-pci@xxxxxxxxxxxxxxx; linux-
> > pm@xxxxxxxxxxxxxxx
> > Subject: Re: [PATCH] nvme-pci: Save PCI state before putting drive into deepest
> > state
> >
> > On Wednesday, September 18, 2019 11:43:28 PM CEST
> > Mario.Limonciello@xxxxxxxx wrote:
> > > > -----Original Message-----
> > > > From: Rafael J. Wysocki <rjw@xxxxxxxxxxxxx>
> > > > Sent: Wednesday, September 18, 2019 4:31 PM
> > > > To: Limonciello, Mario
> > > > Cc: Keith Busch; Jens Axboe; Christoph Hellwig; Sagi Grimberg; linux-
> > > > nvme@xxxxxxxxxxxxxxxxxxx; LKML; Hong, Ryan; Wang, Crag; sjg@xxxxxxxxxx;
> > > > Dominguez, Jared; Linux PCI; Linux PM
> > > > Subject: Re: [PATCH] nvme-pci: Save PCI state before putting drive into
> > deepest
> > > > state
> > > >
> > > > On Thursday, September 12, 2019 1:42:33 AM CEST Mario Limonciello wrote:
> > > > > The action of saving the PCI state will cause numerous PCI configuration
> > > > > space reads which, depending upon the vendor implementation, may cause
> > > > > the drive to exit the deepest NVMe power state.
> > > > >
> > > > > In these cases ASPM will typically return the PCIe link to its low-power
> > > > > state and APST may return the drive to the deepest NVMe power state.
> > > > > However, it has also been observed that this register access, issued after
> > > > > the device has been quiesced, causes PC10 entry failures on some device
> > > > > combinations.
> > > > >
> > > > > To resolve this, move the PCI state saving to before SetFeatures is
> > > > > called. This has been proven to resolve the issue across a 5000-sample
> > > > > test on previously failing disk/system combinations.
> > > >
> > > > This sounds reasonable to me, but it would be nice to CC that to linux-pm
> > > > and/or linux-pci too.
> > > >
> > > > > Signed-off-by: Mario Limonciello <mario.limonciello@xxxxxxxx>
> > > > > ---
> > > > > drivers/nvme/host/pci.c | 13 +++++++------
> > > > > 1 file changed, 7 insertions(+), 6 deletions(-)
> > > > >
> > > > > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > > > > index 732d5b6..9b3fed4 100644
> > > > > --- a/drivers/nvme/host/pci.c
> > > > > +++ b/drivers/nvme/host/pci.c
> > > > > @@ -2894,6 +2894,13 @@ static int nvme_suspend(struct device *dev)
> > > > >  	if (ret < 0)
> > > > >  		goto unfreeze;
> > > > >
> > > > > +	/*
> > > > > +	 * A saved state prevents pci pm from generically controlling the
> > > > > +	 * device's power. If we're using protocol specific settings, we don't
> > > > > +	 * want pci interfering.
> > > > > +	 */
> > > > > +	pci_save_state(pdev);
> > > > > +
> > > > >  	ret = nvme_set_power_state(ctrl, ctrl->npss);
> > > > >  	if (ret < 0)
> > > > >  		goto unfreeze;
> > > > > @@ -2908,12 +2915,6 @@ static int nvme_suspend(struct device *dev)
> > > >
> > > > This is the case in which the PCI layer is expected to put the device into
> > > > D3, so you need
> > > >
> > > > pdev->state_saved = 0;
> > > >
> > > > at this point, because you have saved the config space already.
> > > >
> > > > >  		ret = 0;
> > > > >  		goto unfreeze;
> > > >
> > > > And here you don't need to jump to "unfreeze" any more.
> > > >
> > > > >  	}
> > > > > -	/*
> > > > > -	 * A saved state prevents pci pm from generically controlling the
> > > > > -	 * device's power. If we're using protocol specific settings, we don't
> > > > > -	 * want pci interfering.
> > > > > -	 */
> > > > > -	pci_save_state(pdev);
> > > > >  unfreeze:
> > > > >  	nvme_unfreeze(ctrl);
> > > > >  	return ret;
> > > > >
> > > >
> > >
> > > Thanks, I actually followed up with something along those lines in a v2 sent out
> > > today. My apologies that you weren't on CC, but here is a link to it:
> > > http://lists.infradead.org/pipermail/linux-nvme/2019-September/027251.html
> > >
> >
> > I don't think that pci_load_saved_state() will work, because it sets
> > state_saved again at the end (if all goes well). You only need to
> > clear state_saved here.
>
> Explicitly calling it with NULL as the saved state to load seemed to have the desired
> effect of clearing state_saved (the function clears the flag first, and there is an
> explicit check in there to just return 0 if the state is NULL).

Ah, OK, right.

I still would rather clear the flag directly, though, as using
pci_load_saved_state() for that is just more convoluted. :-)
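
To make it concrete, what I have in mind is roughly the below in the failure
branch of the hunk quoted above (untested sketch; whatever else that branch
already does stays as-is and is left out here):

	if (ret) {
		/*
		 * The NVMe-specific power state could not be set, so let
		 * PCI PM handle the device generically.  The config space
		 * was already saved above, so just drop the flag instead
		 * of reloading a NULL state.
		 */
		pdev->state_saved = 0;
		ret = 0;
	}
unfreeze:
	nvme_unfreeze(ctrl);
	return ret;

With state_saved cleared, pci_pm_suspend_noirq() should save the config space
and pick the target low-power state by itself, which is what we want in that
case.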