Re: Suspend 2 merge: 50/51: Device mapper support.
From: Nigel Cunningham
Date: Fri Dec 03 2004 - 15:12:27 EST
Hi.
On Sat, 2004-12-04 at 04:47, Alasdair G Kergon wrote:
> On Fri, Dec 03, 2004 at 09:08:18AM +1100, Nigel Cunningham wrote:
> > My mistake. The code has been improved and I haven't reverted some of
> > the changes in drivers/md to match. I'll do that and make the two
> > exports that are needed (dm_io_get and dm_io_put) into an
> > include/linux/dm.h.
>
> It would be device-mapper.h - or the whole of dm-io.h.
> The vmalloc comment also looks wrong BTW - it's extra kmalloc
> GFP_KERNEL memory you're asking for here.
>
> I'd like to understand why you need to call those functions
> and how it integrates with the rest of what you're doing:
> do you have calls to other dm functions in other patches?
>
> Or is this particular change optional, but you have test results
> showing it to be desirable or necessary in certain cases, maybe
> indicating a shortcoming within the dm-io code which should be
> addressed instead?
>
> Alasdair
Okay.
Pavel's method of storing the image has an inherent limit: you have to
get a consistent image, so you have to make an atomic copy of memory.
This implies a maximum image size of half of memory. If you want to save
all of your memory when suspending to disk, you simply can't. In order
to get around this limitation, I'm saving the image in two parts: LRU
pages and 'the rest'. The saving is done using high-level routines (bio
calls rather than our own polling drivers or the like), but LRU pages
are guaranteed not to change while we're writing the image. We can then
use the space occupied by our (already saved) LRU pages to hold the
atomic copy.
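To make the ordering concrete, here's a rough sketch of the write-out
sequence. The helper names are illustrative only, not the actual
Suspend 2 functions:

/* Illustrative sketch; these helpers stand in for the real code. */
extern int generate_image_metadata(void); /* lists for both image parts */
extern int save_lru_pages(void);          /* written via bio while live  */
extern int atomic_copy_remaining(void);   /* copy 'the rest' into the
                                             space the LRU pages used    */
extern int save_atomic_copy(void);        /* write the copy out via bio  */

static int write_suspend_image(void)
{
	int ret;

	/* 1. Decide up front exactly which pages belong to each part. */
	ret = generate_image_metadata();
	if (ret)
		return ret;

	/* 2. LRU pages won't change under us, so write them out first. */
	ret = save_lru_pages();
	if (ret)
		return ret;

	/* 3. Their memory is now free to hold the atomic copy of the rest. */
	ret = atomic_copy_remaining();
	if (ret)
		return ret;

	/* 4. Write the atomic copy; the image on disk is now consistent. */
	return save_atomic_copy();
}
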
Prior to saving any of the pages, we generate all of the image meta
data, including (of course) the list of pages in both parts of the
image. But saving the LRU pages may also result in extra slab pages
being allocated, which could make our image inconsistent (they wouldn't
get saved, since we've already calculated which pages to save). To get
around this problem, we use a memory pool from which all page
allocations are satisfied, and to which they are returned, while
suspending. The pages in this pool are unconditionally saved, which
removes the possibility of an inconsistent image.
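For concreteness, here's a minimal sketch of the kind of pool I mean,
built on the stock mempool API. The constant and the function names are
placeholders, not the actual Suspend 2 code:

#include <linux/mempool.h>
#include <linux/mm.h>

#define SUSPEND_POOL_PAGES 256	/* placeholder; sizing is the question below */

static mempool_t *suspend_pool;

static void *suspend_pool_alloc(int gfp_mask, void *pool_data)
{
	return alloc_page(gfp_mask);
}

static void suspend_pool_free(void *element, void *pool_data)
{
	__free_page(element);
}

static int suspend_pool_init(void)
{
	suspend_pool = mempool_create(SUSPEND_POOL_PAGES, suspend_pool_alloc,
				      suspend_pool_free, NULL);
	return suspend_pool ? 0 : -ENOMEM;
}

/* While the image is being written, page allocations are satisfied from
 * here and returned here; every page in the pool is saved unconditionally.
 */
static struct page *suspend_get_page(void)
{
	return mempool_alloc(suspend_pool, GFP_KERNEL);
}

static void suspend_put_page(struct page *page)
{
	mempool_free(page, suspend_pool);
}
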
Now the question arises: how big should the memory pool be? The answer
depends on how much I/O we intend to have in flight at once (among
other things). If, therefore, we're using dm-crypt to do our I/O, it
would help if we could either get it to allocate the memory it will
need before we prepare the metadata, or get from it the amount of
memory to include in the pool, given the maximum amount of I/O we
intend to have in flight at once.
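The sort of interface I have in mind is purely hypothetical at this
point; nothing like it exists in dm today, but it would amount to
something like:

/* Hypothetical only: dm-crypt reports how many extra pages it will
 * allocate for a given amount of in-flight I/O, and we fold that into
 * the pool size before generating the image metadata.
 */
extern unsigned int dm_crypt_pages_needed(unsigned int max_io_in_flight);

static unsigned int suspend_pool_size(unsigned int base_pages,
				      unsigned int max_io_in_flight)
{
	return base_pages + dm_crypt_pages_needed(max_io_in_flight);
}

The alternative, as mentioned above, is a hook that has dm-crypt
pre-allocate everything it needs before we build the metadata, so that
no new pages appear while the image is being written.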
--
Nigel Cunningham
Pastoral Worker
Christian Reformed Church of Tuggeranong
PO Box 1004, Tuggeranong, ACT 2901
You see, at just the right time, when we were still powerless, Christ
died for the ungodly. -- Romans 5:6