Re: [PATCH 3/4] PM/Hibernate: Use memory allocations to free memory (rev. 2)
From: Pavel Machek
Date: Mon May 04 2009 - 05:28:35 EST
On Sun 2009-05-03 18:22:54, Rafael J. Wysocki wrote:
> On Sunday 03 May 2009, Wu Fengguang wrote:
> > On Sun, May 03, 2009 at 02:24:20AM +0200, Rafael J. Wysocki wrote:
> > > From: Rafael J. Wysocki <rjw@xxxxxxx>
> > >
> > > Modify the hibernation memory shrinking code so that it will make
> > > memory allocations to free memory instead of using an artificial
> > > memory shrinking mechanism for that. Remove the shrinking of
> > > memory from the suspend-to-RAM code, where it is not really
> > > necessary. Finally, remove the no longer used memory shrinking
> > > functions from mm/vmscan.c .
> > >
> > > [rev. 2: Use the existing memory bitmaps for marking preallocated
> > > image pages and use swsusp_free() for releasing them, introduce
> > > GFP_IMAGE, add comments describing the memory shrinking strategy.]
> > >
> > > Signed-off-by: Rafael J. Wysocki <rjw@xxxxxxx>
> > > ---
> > > kernel/power/main.c | 20 ------
> > > kernel/power/snapshot.c | 132 +++++++++++++++++++++++++++++++++-----------
> > > mm/vmscan.c | 142 ------------------------------------------------
> > > 3 files changed, 101 insertions(+), 193 deletions(-)
> > >
> > > Index: linux-2.6/kernel/power/snapshot.c
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/power/snapshot.c
> > > +++ linux-2.6/kernel/power/snapshot.c
> > > @@ -1066,41 +1066,97 @@ void swsusp_free(void)
> > > buffer = NULL;
> > > }
> > >
> > > +/* Helper functions used for the shrinking of memory. */
> > > +
> > > +#ifdef CONFIG_HIGHMEM
> > > +#define GFP_IMAGE (GFP_KERNEL | __GFP_HIGHMEM | __GFP_NO_OOM_KILL)
> > > +#else
> > > +#define GFP_IMAGE (GFP_KERNEL | __GFP_NO_OOM_KILL)
> > > +#endif
> >
> > The CONFIG_HIGHMEM test is not necessary: __GFP_HIGHMEM is always defined.
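A minimal sketch of that simplification, assuming (as noted above) that
__GFP_HIGHMEM is defined even when CONFIG_HIGHMEM is off and is simply
ignored when there is no highmem zone to allocate from:

	/*
	 * __GFP_HIGHMEM is defined unconditionally; without CONFIG_HIGHMEM
	 * there is no highmem zone to satisfy it from, so the flag is
	 * harmless and the #ifdef can be dropped:
	 */
	#define GFP_IMAGE	(GFP_KERNEL | __GFP_HIGHMEM | __GFP_NO_OOM_KILL)
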
> >
> > > +#define SHRINK_BITE 10000
> >
> > This is ~40MB (10000 pages of 4KB each). A full scan of, for example,
> > 8G of memory will be time consuming, not to mention that we would have
> > to do it 2*(8G-500M)/40M = 384 times!
> >
> > Can we make it LONG_MAX?
>
> No, I don't think so. The problem is that the number of pages we'll need
> to copy generally shrinks as we allocate memory, so we can't do that in
> one shot.
>
> We can make it a larger number, but I don't really think it would be a
> good idea to make it greater than 100 MB.
Well, even 100MB is quite big: on a 128MB machine, that will probably
mean freeing all of the memory (instead of "as much as needed"). And that
memory needs to go to disk, so it will be slow.
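
To make the trade-off concrete, here is a rough sketch of the bite-wise
preallocation strategy under discussion. This is not the actual
snapshot.c code: image_fits_in_free_mem() and prealloc_image_page() are
made-up helpers standing in for the bitmap-based logic in the patch.

	/*
	 * Allocate at most SHRINK_BITE pages, then re-evaluate how much
	 * still needs to be copied: that number shrinks as memory is
	 * preallocated, which is why a single huge allocation overshoots.
	 */
	static void shrink_memory_by_bites(void)
	{
		while (!image_fits_in_free_mem()) {	/* made-up helper */
			long n;

			for (n = 0; n < SHRINK_BITE; n++)
				if (!prealloc_image_page(GFP_IMAGE))
					return;	/* can't allocate more */
		}
	}

With SHRINK_BITE = 10000 (~40MB of 4KB pages) a big machine needs many
iterations, while a bite of ~100MB can eat most of the RAM of a 128MB
machine in one go, which is the concern above.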
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html