Re: [PATCH] Fix usemap for DISCONTIG/FLATMEM with non-aligned zone initialization.

From: Mel Gorman
Date: Mon Apr 21 2008 - 06:12:49 EST


On (21/04/08 11:20), KAMEZAWA Hiroyuki didst pronounce:
> On Sat, 19 Apr 2008 02:25:56 +0900 (JST)
> kamezawa.hiroyu@xxxxxxxxxxxxxx wrote:
>
> > >What about something like the following? Instead of expanding the size of
> > >structures, it sanity-checks the input parameters. It touches a number of
> > >places because of an API change, but it is otherwise straightforward.
> > >
> > >Unfortunately, I do not have an IA-64 machine that can reproduce the problem
> > >to see if this still fixes it or not so a test as well as a review would be
> > >appreciated. What should happen is the machine boots but prints a warning
> > >about the unexpected PFN ranges. It boot-tested fine on a number of other
> > >machines (x86-32, x86-64 and ppc64).
> > >
> > ok, I'll test today if I have a chance. At the latest, I think I can test
> > this by Monday. But I have one concern (below).
> >
> I tested and found that your patch doesn't work.
> It seems to be because not all valid page structs are initialized.

The fact that I didn't calculate end_pfn properly, as pointed out by Dave
Hansen, didn't help either. If that were corrected, I'd be surprised if the
patch still didn't work. If it is broken even then, it implies that
arch-specific code is using PFN ranges that do not contain valid memory -
something I would find surprising.

> (According to pfn_valid(), a page struct is valid if it exists, regardless of zone boundaries.)
>
> How about the patch below? I think this is simple.
> Tested, and it worked well.
>

This patch is fine. It checks the passed-in ranges in a less invasive
fashion than the previous patch did. Thanks.

Acked-by: Mel Gorman <mel@xxxxxxxxx>
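
For reference, the range check the patch adds amounts to a zone-span
membership test that pfn_valid() alone does not provide: pfn_valid() only
guarantees that a struct page exists for the pfn, not that any zone actually
spans it. A minimal sketch of that test in kernel-style C (the helper name
pfn_in_zone_span is hypothetical, not part of the patch; the fields come
from struct zone in <linux/mmzone.h>):

	/* Hypothetical helper: true iff pfn lies within the zone's span,
	 * i.e. [zone_start_pfn, zone_start_pfn + spanned_pages). */
	static inline int pfn_in_zone_span(struct zone *z, unsigned long pfn)
	{
		return pfn >= z->zone_start_pfn &&
		       pfn < z->zone_start_pfn + z->spanned_pages;
	}

The patch open-codes this test in memmap_init_zone() and additionally
requires pageblock alignment, !(pfn & (pageblock_nr_pages-1)), before
calling set_pageblock_migratetype().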

> ==
> The usemap must be initialized only when the pfn is within the zone.
> If not, it corrupts memory.
>
> After initialization, the usemap is used only for pfns in the valid range.
> (We have to initialize the memmap even in the invalid range.)
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
> ---
> mm/page_alloc.c |    9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> Index: linux-2.6.25/mm/page_alloc.c
> ===================================================================
> --- linux-2.6.25.orig/mm/page_alloc.c
> +++ linux-2.6.25/mm/page_alloc.c
> @@ -2518,6 +2518,7 @@ void __meminit memmap_init_zone(unsigned
>  	struct page *page;
>  	unsigned long end_pfn = start_pfn + size;
>  	unsigned long pfn;
> +	struct zone *z;
>
>  	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>  		/*
> @@ -2536,7 +2537,7 @@ void __meminit memmap_init_zone(unsigned
>  		init_page_count(page);
>  		reset_page_mapcount(page);
>  		SetPageReserved(page);
> -
> +		z = page_zone(page);
>  		/*
>  		 * Mark the block movable so that blocks are reserved for
>  		 * movable at startup. This will force kernel allocations
> @@ -2546,7 +2547,9 @@ void __meminit memmap_init_zone(unsigned
>  		 * the start are marked MIGRATE_RESERVE by
>  		 * setup_zone_migrate_reserve()
>  		 */
> -		if ((pfn & (pageblock_nr_pages-1)))
> +		if ((z->zone_start_pfn <= pfn)
> +		    && (pfn < z->zone_start_pfn + z->spanned_pages)
> +		    && !(pfn & (pageblock_nr_pages-1)))
>  			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>
>  		INIT_LIST_HEAD(&page->lru);
> @@ -4460,6 +4463,8 @@ void set_pageblock_flags_group(struct pa
>  	pfn = page_to_pfn(page);
>  	bitmap = get_pageblock_bitmap(zone, pfn);
>  	bitidx = pfn_to_bitidx(zone, pfn);
> +	VM_BUG_ON(pfn < zone->zone_start_pfn);
> +	VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
>
>  	for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
>  		if (flags & value)
>
>
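
To see concretely why touching the usemap for an out-of-zone pfn corrupts
memory: on FLATMEM/DISCONTIGMEM, pfn_to_bitidx() computes the bit index by
subtracting zone_start_pfn from the pfn, so a pfn below the zone start wraps
the unsigned subtraction and the usemap is indexed far outside its
allocation. That is the situation the new VM_BUG_ON() checks catch. A
minimal userspace sketch of the wrap, with made-up numbers:

	#include <stdio.h>

	int main(void)
	{
		unsigned long zone_start_pfn = 0x1000;
		/* A pfn that is valid (its struct page exists) but lies one
		 * page below the zone's span. */
		unsigned long pfn = zone_start_pfn - 1;
		unsigned long offset = pfn - zone_start_pfn; /* wraps to ULONG_MAX */

		/* The kernel would shift this down by pageblock_order and use
		 * it as a bit index into the zone's usemap - far out of bounds. */
		printf("offset = 0x%lx\n", offset);
		return 0;
	}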

--
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab