[PATCH v3 1/3] x86/64: Make kernel text mapping always take one whole page table in early boot code
From: Baoquan He
Date: Wed Jan 04 2017 - 03:45:06 EST
In the early boot code, level2_kernel_pgt is used to map the kernel text.
Its size depends on KERNEL_IMAGE_SIZE and is fixed at compile time. In
fact we can make it always occupy all 512 entries of one whole page
table, because cleanup_highmap() later cleans up the unused entries.
With this change, the kernel text mapping size can be decided at
runtime: 512 MB if KASLR is disabled, 1 GB if KASLR is enabled.
Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
Acked-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
arch/x86/include/asm/page_64_types.h | 3 ++-
arch/x86/kernel/head_64.S | 15 ++++++++-------
arch/x86/mm/init_64.c | 2 +-
3 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 9215e05..62a20ea 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -56,8 +56,9 @@
* are fully set up. If kernel ASLR is configured, it can extend the
* kernel page table mapping, reducing the size of the modules area.
*/
+#define KERNEL_MAPPING_SIZE_EXT (1024 * 1024 * 1024)
#if defined(CONFIG_RANDOMIZE_BASE)
-#define KERNEL_IMAGE_SIZE (1024 * 1024 * 1024)
+#define KERNEL_IMAGE_SIZE KERNEL_MAPPING_SIZE_EXT
#else
#define KERNEL_IMAGE_SIZE (512 * 1024 * 1024)
#endif
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index b467b14..03bcb67 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -458,17 +458,18 @@ NEXT_PAGE(level3_kernel_pgt)
NEXT_PAGE(level2_kernel_pgt)
/*
- * 512 MB kernel mapping. We spend a full page on this pagetable
- * anyway.
+ * Kernel image size is limited to 512 MB. The kernel code+data+bss
+ * must not be bigger than that.
*
- * The kernel code+data+bss must not be bigger than that.
+ * We spend a full page on this pagetable anyway, so use the whole
+ * page here so that the kernel mapping size can be decided at
+ * runtime: 512 MB if KASLR is disabled, 1 GB if it is enabled.
+ * cleanup_highmap() will later clean up the unused entries.
*
- * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
- * If you want to increase this then increase MODULES_VADDR
- * too.)
+ * The module area starts after the kernel mapping area.
*/
PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
- KERNEL_IMAGE_SIZE/PMD_SIZE)
+ PTRS_PER_PMD)
NEXT_PAGE(level2_fixmap_pgt)
.fill 506,8,0
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index af85b68..45ef0ff 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -297,7 +297,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
void __init cleanup_highmap(void)
{
unsigned long vaddr = __START_KERNEL_map;
- unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
+ unsigned long vaddr_end = __START_KERNEL_map + KERNEL_MAPPING_SIZE_EXT;
unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
pmd_t *pmd = level2_kernel_pgt;
--
2.5.5