Re: [PATCH] sparsemem vmemmap: initialize memmap.
From: Andy Whitcroft
Date: Fri May 09 2008 - 06:32:59 EST
On Fri, May 09, 2008 at 08:38:56AM +0200, Heiko Carstens wrote:
> From: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
>
> Trying to online a new memory section that was added via memory hotplug
> results in lots of messages of pages in bad page state.
> The reason is that the allocated virtual memmap isn't initialized.
> This is only an issue for memory sections that get added after boot
> time, since all other memmaps were allocated by the bootmem allocator,
> which returns only initialized memory.
>
> I noticed this on s390, which has its own private vmemmap_populate
> function that doesn't call back into the common code. But as far as I
> can see the generic code has the same bug, so fix it just once.
>
> Cc: Andy Whitcroft <apw@xxxxxxxxxxxx>
> Cc: Christoph Lameter <clameter@xxxxxxx>
> Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxx>
> Signed-off-by: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
> ---
> mm/sparse-vmemmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux-2.6/mm/sparse-vmemmap.c
> ===================================================================
> --- linux-2.6.orig/mm/sparse-vmemmap.c
> +++ linux-2.6/mm/sparse-vmemmap.c
> @@ -154,6 +154,6 @@ struct page * __meminit sparse_mem_map_p
> int error = vmemmap_populate(map, PAGES_PER_SECTION, nid);
> if (error)
> return NULL;
> -
> + memset(map, 0, PAGES_PER_SECTION * sizeof(struct page));
> return map;
> }
The normal expectation is that all allocations are made using
vmemmap_alloc_block(), which allocates from the appropriate place. Once
the buddy allocator is up and available it uses:
struct page *page = alloc_pages_node(node,
GFP_KERNEL | __GFP_ZERO, get_order(size));
to get the memory, so it should all be zeroed. I would therefore expect
all existing users to be covered already. Can you not simply use
__GFP_ZERO for your allocations, or use vmemmap_alloc_block()?
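
For reference, here is a sketch of that function (modelled on the
2.6.25-era mm/sparse-vmemmap.c; the bootmem fallback is simplified, so
treat it as illustrative rather than the exact code). The point is that
both paths hand back zeroed memory:

	void * __meminit vmemmap_alloc_block(unsigned long size, int node)
	{
		/* Use the buddy allocator once it is up; __GFP_ZERO
		 * guarantees the block comes back zeroed. */
		if (slab_is_available()) {
			struct page *page = alloc_pages_node(node,
					GFP_KERNEL | __GFP_ZERO,
					get_order(size));
			if (page)
				return page_address(page);
			return NULL;
		} else
			/* Early in boot, fall back to bootmem, which
			 * also returns zeroed memory by contract. */
			return __alloc_bootmem_node(NODE_DATA(node),
					size, size,
					__pa(MAX_DMA_ADDRESS));
	}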
-apw