I was bisecting a problem on 64bit where any attempt to boot a crash kernel
would hang. The bisect ended up on commit 722bc6b (x86/mm: Fix the size
calculation of mapping tables) and, looking at the calling function and the
ranges printed on boot, I think the calculation should only be done in the
32bit case.
On 64bit:
[ 0.000000] init_memory_mapping: [mem 0x00000000-0x77e87fff]
[ 0.000000] [mem 0x00000000-0x77dfffff] page 2M
[ 0.000000] [mem 0x77e00000-0x77e87fff] page 4k
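To illustrate the effect on 64bit, here is a minimal userspace sketch of the
extra-pte arithmetic using the ranges above (the constants, variable names and
printout are mine, not the kernel's; x86_64 values assumed for PMD_SHIFT and
PAGE_SHIFT, range ends taken as exclusive):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21			/* 2M pages on x86_64 */
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	/* First (2M-mapped) range and overall end from the boot log above. */
	unsigned long mr_start = 0x00000000UL;
	unsigned long mr_end   = 0x77e00000UL;
	unsigned long end      = 0x77e88000UL;

	/* Trailing area that really needs 4K ptes on 64bit. */
	unsigned long extra = end - ((end >> PMD_SHIFT) << PMD_SHIFT);
	unsigned long ptes  = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;

	/* What the unpatched code additionally reserves on 64bit, because
	 * the "first 2/4M" term is not under #ifdef CONFIG_X86_32. */
	unsigned long extra_head = (mr_start < PMD_SIZE) ? mr_end - mr_start : 0;
	unsigned long ptes_head  = (extra_head + PAGE_SIZE - 1) >> PAGE_SHIFT;

	printf("ptes needed for the tail:           %lu (~%lu KB of pte_t)\n",
	       ptes, ptes * 8 / 1024);
	printf("ptes added for the head by 722bc6b: %lu (~%lu KB of pte_t)\n",
	       ptes_head, ptes_head * 8 / 1024);
	return 0;
}

For the range above this reserves roughly 3.7MB of extra page table space on
64bit that is never needed there.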
The attached patch would fix this, if you agree with it. Thanks.
-Stefan
From 6b679d1af20656929c0e829f29eed60b0a86a74f Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
Date: Fri, 13 Jul 2012 15:16:33 +0200
Subject: [PATCH] x86/mm: Limit 2/4M size calculation to x86_32
commit 722bc6b (x86/mm: Fix the size calculation of mapping tables)
modified the extra space calculation for mapping tables in order to
make up for the first 2/4M memory range being mapped with 4K pages.
However, this setup is only used when compiling for 32bit. On 64bit
there is only the trailing area of 4K pages (which is already
accounted for).
The code was already adapted once after things went wrong on an 8TB
machine (bd2753b x86/mm: Only add extra pages count for the first memory
range during pre-allocation early page table space), but it looks like
it currently overdoes things for 64bit.

I only noticed this while bisecting for the reason that a crash kernel
would not boot (the bisect ended up on this commit).
Signed-off-by: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
Cc: WANG Cong <xiyou.wangcong@xxxxxxxxx>
Cc: Yinghai Lu <yinghai@xxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
---
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bc4e9d8..636bbfd 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -60,10 +60,11 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
 #ifdef CONFIG_X86_32
 		extra += PMD_SIZE;
-#endif
+
 		/* The first 2/4M doesn't use large pages. */
 		if (mr->start < PMD_SIZE)
 			extra += mr->end - mr->start;
+#endif
 
 		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	} else
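
For clarity, this is roughly how the affected block in
find_early_table_space() reads with the patch applied (reconstructed from the
hunk above, with extra comments added by me; not a verbatim copy of init.c):

		/* Tail of the mapped range that cannot use a full 2M/4M page
		 * (0x77e00000-0x77e87fff in the boot log above). */
		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
#ifdef CONFIG_X86_32
		/* Only 32bit also maps the head of memory with 4K pages. */
		extra += PMD_SIZE;

		/* The first 2/4M doesn't use large pages. */
		if (mr->start < PMD_SIZE)
			extra += mr->end - mr->start;
#endif
		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;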