Re: [PATCH 1/2] Documentation: clarify limitations of hibernation

From: Luigi Semenzato
Date: Thu Jan 30 2020 - 16:36:30 EST


On Thu, Jan 30, 2020 at 1:29 PM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
>
> On Thu, Jan 30, 2020 at 10:11 PM Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
> >
> > On Thu, Jan 30, 2020 at 12:50 PM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Jan 27, 2020 at 6:21 PM Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Jan 27, 2020 at 8:28 AM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Mon, Jan 27, 2020 at 5:13 PM Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Mon, Jan 27, 2020 at 6:16 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Fri 24-01-20 08:37:12, Luigi Semenzato wrote:
> > > > > > > [...]
> > > > > > > > The purpose of my documentation patch was to make it clearer that
> > > > > > > > hibernation may fail in situations in which suspend-to-RAM works; for
> > > > > > > > instance, when there is no swap, and anonymous pages are over 50% of
> > > > > > > > total RAM. I will send a new version of the patch which hopefully
> > > > > > > > makes this clearer.
> > > > > > >
> > > > > > > I was under the impression that s2disk is pretty much impossible
> > > > > > > without any swap.
> > > > > >
> > > > > > I am not sure what you mean by "swap" here. S2disk needs a swap
> > > > > > partition for storing the image, but that partition is not used for
> > > > > > regular swap.
> > > > >
> > > > > That's not correct.
> > > > >
> > > > > The swap partition (or file) used by s2disk needs to be made active
> > > > > before s2disk can use it, and the mm subsystem can then also use it
> > > > > for regular swap.
> > > >
> > > > OK---I had this wrong, thanks.
> > > >
> > > > > > If there is no swap, but more than 50% of RAM is free
> > > > > > or reclaimable, s2disk works fine. If anonymous memory is more than
> > > > > > 50%, hibernation can still work, but swap needs to be set up (in
> > > > > > addition to the space for the hibernation image). The setup is not
> > > > > > obvious, and I don't think the documentation is clear on this.
> > > > >
> > > > > Well, the entire contents of RAM must be preserved, one way or
> > > > > another, during hibernation. That should be totally obvious to anyone
> > > > > using it, really.
> > > >
> > > > Yes, that's obvious.
> > > >
> > > > > Some of the RAM contents are copies of data already present in the
> > > > > filesystems on persistent storage, and those do not need to be saved
> > > > > again. Everything else must be saved, and s2disk (and Linux
> > > > > hibernation in general) uses active swap space to save it.
> > > > > This implies that in order to hibernate the system, you generally need
> > > > > an amount of swap space equal to the size of RAM minus the size of
> > > > > files mapped into memory.
> > > > >
> > > > > So, to be on the safe side, the total amount of swap space to be used
> > > > > for hibernation needs to match the size of RAM (even though
> > > > > realistically it may be smaller than that in the majority of cases).
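
(Just as a rough, purely illustrative sanity check of that rule of thumb:
the snippet below compares everything that is not page cache against the
free swap reported in /proc/meminfo. It is only an approximation; among
other things, Cached includes shmem pages, which do need swap space.)

  # rough estimate: pages that cannot simply be re-read from disk,
  # compared against the swap currently available
  awk '/^MemTotal:/ {t=$2} /^Cached:/ {c=$2} /^SwapFree:/ {s=$2}
       END {printf "must save ~%d MB, free swap %d MB\n", (t-c)/1024, s/1024}' \
      /proc/meminfo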
> > > >
> > > > This all makes sense, but we do this:
> > > >
> > > > -- add resume=/dev/sdc to the command line
> > > > -- attach a disk (/dev/sdc) with size equal to RAM
> > > > -- mkswap /dev/sdc
> > > > -- swapon /dev/sdc
> > > > -- echo disk > /sys/power/state
> > > >
> > > > and the last operation fails with ENOMEM. Are we doing something
> > > > wrong? Are we hitting some other mm bug?
> > >
> > > I would expect this to work, so the fact that it doesn't work for you
> > > indicates a bug somewhere or at least an assumption that doesn't hold.
> > >
> > > Can you please remind me what you do to trigger the unexpected behavior?
> >
> > Yes, I create processes that use a large amount of anon memory, more
> > than 50% of RAM, like this:
> >
> > dd if=/dev/zero bs=1G count=1 | sleep infinity
> >
> > I think dd has a limit of around 2 GB on the block size, so you'll
> > need a few of those.
>
> And then you get -ENOMEM from hibernate_preallocate_memory(), or from
> somewhere else?

That is correct. More precisely, preallocate_image_memory() doesn't
get enough pages, and then preallocate_image_highmem() either gets
nothing or, in any case, too few.
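
For reference, this is roughly the sequence we run to reproduce it; the
device name and the number of dd instances below are illustrative rather
than our exact setup:

  # Pin anonymous memory: each dd blocks writing to the pipe while
  # holding its 1 GB buffer; start enough copies to exceed 50% of RAM.
  dd if=/dev/zero bs=1G count=1 | sleep infinity &
  dd if=/dev/zero bs=1G count=1 | sleep infinity &
  dd if=/dev/zero bs=1G count=1 | sleep infinity &

  # Swap device the size of RAM, also passed as resume=/dev/sdc on the
  # kernel command line.
  mkswap /dev/sdc
  swapon /dev/sdc

  # This write fails with ENOMEM, coming from hibernate_preallocate_memory().
  echo disk > /sys/power/state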