Re: [PATCH v1 3/3] PM: sleep: Fix possible deadlocks in core system-wide PM code

From: Rafael J. Wysocki
Date: Wed Jan 03 2024 - 05:28:39 EST


On Wed, Jan 3, 2024 at 11:17 AM Ulf Hansson <ulf.hansson@xxxxxxxxxx> wrote:
>
> On Tue, 2 Jan 2024 at 14:54, Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
> >
> > On Tue, Jan 2, 2024 at 2:35 PM Ulf Hansson <ulf.hansson@xxxxxxxxxx> wrote:
> > >
> > > On Wed, 27 Dec 2023 at 21:41, Rafael J. Wysocki <rjw@xxxxxxxxxxxxx> wrote:
> > > >
> > > > From: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> > > >
> > > > It is reported that in low-memory situations the system-wide resume core
> > > > code deadlocks, because async_schedule_dev() executes its argument
> > > > function synchronously if it cannot allocate memory (and not only then)
> > > > and that function attempts to acquire a mutex that is already held.
> > > >
> > > > Address this by changing the code in question to use
> > > > async_schedule_dev_nocall() for scheduling the asynchronous
> > > > execution of device suspend and resume functions and to directly
> > > > run them synchronously if async_schedule_dev_nocall() returns false.
> > > >
> > > > Fixes: 09beebd8f93b ("PM: sleep: core: Switch back to async_schedule_dev()")
> > > > Link: https://lore.kernel.org/linux-pm/ZYvjiqX6EsL15moe@perf/
> > > > Reported-by: Youngmin Nam <youngmin.nam@xxxxxxxxxxx>
> > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> > > > ---
> > > >
> > > > The commit pointed to by the Fixes: tag is the last one that modified
> > > > the code in question, even though the bug had been there already before.
> > > >
> > > > Still, the fix will not apply to the code before that commit.
> > >
> > > An option could be to just do "Cc: stable@xxxxxxxxxxxxxxx # v5.7+"
> > > instead of pointing to a commit with a Fixes tag.
> >
> > Right, but one can argue that every commit with a "Cc: stable" tag is
> > a fix, so it should carry a Fixes: tag too anyway.
>
> Yes, certainly. But in this case it's more questionable, as it's not
> really fixing the commit it points to.
>
> Note that I have no strong opinion here, but maybe Greg has a preferred way?
>
> >
> > > >
> > > > ---
> > > > drivers/base/power/main.c | 148 +++++++++++++++++++++-------------------------
> > > > 1 file changed, 68 insertions(+), 80 deletions(-)
> > > >
> > > > Index: linux-pm/drivers/base/power/main.c
> > > > ===================================================================
> > > > --- linux-pm.orig/drivers/base/power/main.c
> > > > +++ linux-pm/drivers/base/power/main.c
> > > > @@ -579,7 +579,7 @@ bool dev_pm_skip_resume(struct device *d
> > > > }
> > > >
> > > > /**
> > > > - * device_resume_noirq - Execute a "noirq resume" callback for given device.
> > > > + * __device_resume_noirq - Execute a "noirq resume" callback for given device.
> > > > * @dev: Device to handle.
> > > > * @state: PM transition of the system being carried out.
> > > > * @async: If true, the device is being resumed asynchronously.
> > > > @@ -587,7 +587,7 @@ bool dev_pm_skip_resume(struct device *d
> > > > * The driver of @dev will not receive interrupts while this function is being
> > > > * executed.
> > > > */
> > > > -static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
> > > > +static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async)
> > > > {
> > > > pm_callback_t callback = NULL;
> > > > const char *info = NULL;
> > > > @@ -655,7 +655,13 @@ Skip:
> > > > Out:
> > > > complete_all(&dev->power.completion);
> > > > TRACE_RESUME(error);
> > > > - return error;
> > > > +
> > > > + if (error) {
> > > > + suspend_stats.failed_resume_noirq++;
> > > > + dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
> > > > + dpm_save_failed_dev(dev_name(dev));
> > > > + pm_dev_err(dev, state, async ? " async noirq" : " noirq", error);
> > > > + }
> > > > }
> > > >
> > > > static bool is_async(struct device *dev)
> > > > @@ -668,11 +674,15 @@ static bool dpm_async_fn(struct device *
> > > > {
> > > > reinit_completion(&dev->power.completion);
> > > >
> > > > - if (is_async(dev)) {
> > > > - get_device(dev);
> > > > - async_schedule_dev(func, dev);
> > > > + if (!is_async(dev))
> > > > + return false;
> > > > +
> > > > + get_device(dev);
> > > > +
> > > > + if (async_schedule_dev_nocall(func, dev))
> > > > return true;
> > > > - }
> > > > +
> > > > + put_device(dev);
> > > >
> > > > return false;
> > > > }
> > > > @@ -680,15 +690,19 @@ static bool dpm_async_fn(struct device *
> > > > static void async_resume_noirq(void *data, async_cookie_t cookie)
> > > > {
> > > > struct device *dev = data;
> > > > - int error;
> > > > -
> > > > - error = device_resume_noirq(dev, pm_transition, true);
> > > > - if (error)
> > > > - pm_dev_err(dev, pm_transition, " async", error);
> > > >
> > > > + __device_resume_noirq(dev, pm_transition, true);
> > > > put_device(dev);
> > > > }
> > > >
> > > > +static void device_resume_noirq(struct device *dev)
> > > > +{
> > > > + if (dpm_async_fn(dev, async_resume_noirq))
> > > > + return;
> > > > +
> > > > + __device_resume_noirq(dev, pm_transition, false);
> > > > +}
> > > > +
> > > > static void dpm_noirq_resume_devices(pm_message_t state)
> > > > {
> > > > struct device *dev;
> > > > @@ -698,14 +712,6 @@ static void dpm_noirq_resume_devices(pm_
> > > > mutex_lock(&dpm_list_mtx);
> > > > pm_transition = state;
> > > >
> > > > - /*
> > > > - * Advanced the async threads upfront,
> > > > - * in case the starting of async threads is
> > > > - * delayed by non-async resuming devices.
> > > > - */
> > > > - list_for_each_entry(dev, &dpm_noirq_list, power.entry)
> > > > - dpm_async_fn(dev, async_resume_noirq);
> > > > -
> > >
> > > If I understand correctly, this means that we are no longer going to
> > > run the async devices upfront, right?
> >
> > Right.
> >
> > > Depending on how devices get ordered in the dpm_noirq_list, it sounds
> > > like the above could have a negative impact on the total resume time!?
> >
> > It could, but it is unclear at this time whether or not it will.
> >
> > > Of course, if all devices would be async capable this wouldn't be a
> > > problem...
> >
> > Sure.
> >
> > So the existing behavior can be restored with the help of an
> > additional device flag, but I decided not to add such a flag just
> > yet.
> >
> > I'll probably do it in 6.9, unless the performance impact is serious
> > enough, in which case it can be added earlier.
> >
> > I would still prefer to get to a point at which the suspend and resume
> > paths are analogous (from the async POV), and that's what happens after
> > this patch, so IMO it is better to address any performance
> > regressions on top of it.
>
> Fair enough!
>
> [...]
>
> Feel free to add my Reviewed-by for the series!

Thanks!