Re: [TuxOnIce-devel] [RFC] TuxOnIce

From: Nigel Cunningham
Date: Mon May 25 2009 - 17:49:18 EST


Hi.

On Mon, 2009-05-25 at 15:26 +0200, Pavel Machek wrote:
> On Mon 2009-05-25 15:22:26, Oliver Neukum wrote:
> > Am Montag, 25. Mai 2009 14:32:28 schrieb Pavel Machek:
> > > > I'm going to try to. Unfortunately, they'll require what's basically a
> > > > ground-up redesign of the basic algorithm, because to get maximum
> > > > reliability, you need to carefully account for the amount of storage
> > > > you're going to need and the amount of memory you have available, and
> > > > 'prepare' the image prior to doing the atomic copy.
> > >
> > > I don't quite get it; why is that needed?
> > >
> > > If there's not enough swap available, swsusp should freeze, realize
> > > there's no swap, unfreeze and continue. I do not see reliability
> > > problem there.
> >
> > The software suspend may be a part of your response to an imminent
> > power failure (UPS near empty). The number of retries available is possibly
> > limited.
>
> If there's no swap (and no hibernation partition), s2disk just will
> not work.

Yeah - an argument for not being swap-centric in storing the image.

But there's more: if there's swap but it's not in the partition pointed
to by resume=, swsusp and uswsusp won't work either, will they? That's
another reliability issue.
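
To make that concrete, here's a rough user-space sketch of the kind of
sanity check that would catch the mismatch before suspending. It just
compares resume= from /proc/cmdline with the devices listed in
/proc/swaps (plain /dev paths only; it's illustrative, not something
swsusp or TuxOnIce currently does). A non-zero exit means the resume
path is already broken before you've frozen anything.

/* resume_check.c - warn when resume= does not name an active swap device.
 * Illustrative only: handles plain /dev paths, not UUID= or LABEL=. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[4096], resume[256] = "", dev[256];
	char *p;
	FILE *f;
	int found = 0;

	/* Pull resume= out of the kernel command line. */
	f = fopen("/proc/cmdline", "r");
	if (!f || !fgets(line, sizeof(line), f)) {
		perror("/proc/cmdline");
		return 1;
	}
	fclose(f);

	p = strstr(line, "resume=");
	if (!p) {
		fprintf(stderr, "no resume= parameter\n");
		return 1;
	}
	sscanf(p + strlen("resume="), "%255[^ \n]", resume);

	/* See whether that device shows up in /proc/swaps. */
	f = fopen("/proc/swaps", "r");
	if (!f) {
		perror("/proc/swaps");
		return 1;
	}
	fgets(line, sizeof(line), f);		/* skip the header row */
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "%255s", dev) == 1 && !strcmp(dev, resume))
			found = 1;
	fclose(f);

	printf("resume=%s is %san active swap device\n",
	       resume, found ? "" : "NOT ");
	return found ? 0 : 1;
}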

> > I'd feel safer if hibernation by default wrote to a dedicated partition,
> > especially as modern practice is to make swap space smaller than RAM.
>
> It would be easy to have a dedicated partition. But why waste space on
> it?

Because it gives you increased reliability. But it doesn't need to be a
dedicated partition - you can just have a file on a partition.
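
To illustrate what that involves: the one thing a file-based target
really needs is the physical location of the file's blocks, since the
image has to be read back before any filesystem is mounted. Here's a
rough user-space sketch using the FIBMAP ioctl (the helper itself is
hypothetical, and a real tool would have to map every block of the
file, not just the first one):

/* first_block.c - print the first physical block of a file via FIBMAP.
 * Sketch only; needs root, and a real tool would map the whole file. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FIBMAP, FIGETBSZ */

int main(int argc, char **argv)
{
	int fd, blksz, block = 0;	/* logical block 0 in, physical block out */

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror(argv[1]);
		return 1;
	}
	if (ioctl(fd, FIGETBSZ, &blksz) < 0 || ioctl(fd, FIBMAP, &block) < 0) {
		perror("ioctl");
		close(fd);
		return 1;
	}
	printf("%s: block size %d bytes, first physical block %d\n",
	       argv[1], blksz, block);
	close(fd);
	return 0;
}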

> Anyway, this debate here is "in what order should we do the swsusp
> actions". Dedicated partition/etc is for separate thread (please).

Yeah; if we keep this discussion up, we'll get to that issue too.

Regards,

Nigel
