RE: [Intel-wired-lan] [PATCH v2 1/1] e1000e: Undo e1000e_pm_freeze if __e1000_shutdown fails

From: Brown, Aaron F
Date: Tue Jun 06 2017 - 21:07:40 EST


> From: Intel-wired-lan [mailto:intel-wired-lan-bounces@xxxxxxxxxx] On Behalf
> Of Jeff Kirsher
> Sent: Tuesday, June 6, 2017 1:46 PM
> To: David Miller <davem@xxxxxxxxxxxxx>; Nikula, Jani
> <jani.nikula@xxxxxxxxx>
> Cc: Ursulin, Tvrtko <tvrtko.ursulin@xxxxxxxxx>; daniel.vetter@xxxxxxxx;
> intel-gfx@xxxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> jani.nikula@xxxxxxxxxxxxxxx; chris@xxxxxxxxxxxxxxxxxx; Ertman, David M
> <david.m.ertman@xxxxxxxxx>; intel-wired-lan@xxxxxxxxxxxxxxxx;
> dri-devel@xxxxxxxxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; airlied@xxxxxxxxx
> Subject: Re: [Intel-wired-lan] [PATCH v2 1/1] e1000e: Undo
> e1000e_pm_freeze if __e1000_shutdown fails
>
> On Fri, 2017-06-02 at 14:14 -0400, David Miller wrote:
> > From: Jani Nikula <jani.nikula@xxxxxxxxx>
> > Date: Wed, 31 May 2017 18:50:43 +0300
> >
> > > From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > >
> > > An error during suspend (e1000e_pm_suspend),
> >
> > ...
> > > led to complete failure:
> >
> > ...
> > > The unwind failure stems from commit 2800209994f8 ("e1000e:
> > > Refactor PM flows"), but it may be a later patch that introduced
> > > the non-recoverable behaviour.
> > >
> > > Fixes: 2800209994f8 ("e1000e: Refactor PM flows")
> > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99847
> > > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > > Cc: Jeff Kirsher <jeffrey.t.kirsher@xxxxxxxxx>
> > > Cc: Dave Ertman <davidx.m.ertman@xxxxxxxxx>
> > > Cc: Bruce Allan <bruce.w.allan@xxxxxxxxx>
> > > Cc: intel-wired-lan@xxxxxxxxxxxxxxxx
> > > Cc: netdev@xxxxxxxxxxxxxxx
> > > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > [Jani: bikeshed repainted]
> > > Signed-off-by: Jani Nikula <jani.nikula@xxxxxxxxx>
> >
> > Jeff, please make sure this gets submitted to me soon.
>
> Expect it later tonight, just finishing up testing.

Tested-by: Aaron Brown <aaron.f.brown@xxxxxxxxx>