[PATCH v4 2/3] x86/64/KASLR: Change kernel mapping size to 1G unconditionally
From: Baoquan He
Date: Thu Feb 02 2017 - 07:55:03 EST
Currently, KASLR changes the kernel mapping size from 512M to 1G whenever
CONFIG_RANDOMIZE_BASE is enabled, even if the "nokaslr" kernel option is
specified. This is buggy. When people specify "nokaslr", whether or not
KASLR code is compiled in, they expect to see no KASLR change at all,
including the size of the kernel mapping area and the module mapping area.
Kees explained that the only reason he made the kernel mapping size
configurable was kernel module space: it wasn't clear at the time whether
enough space would remain for modules in all use cases.
Boris suggested making the kernel mapping 1G unconditionally, since in
practice KASLR will be enabled on the majority of systems anyway, so most
will have a 1G text mapping size. He further pointed out:
"""""
Realistically, on a typical bigger machine, the modules take up
something like <10M:
$ lsmod | awk '{ sum +=$2 } END { print sum }'
7188480
so I'm not really worried if we reduce it by default to 1G. Besides, the
reduction has been there for a while now - since CONFIG_RANDOMIZE_BASE -
so we probably would've heard complaints already...
"""""
Hence, change the kernel mapping size to 1G unconditionally.
Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
Suggested-by: Borislav Petkov <bp@xxxxxxx>
---
arch/x86/include/asm/page_64_types.h | 9 +--------
arch/x86/kernel/head_64.S | 10 ++++------
2 files changed, 5 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 24c9098..4120cfe 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -57,15 +57,8 @@
/*
* Kernel mapping size is limited to 1GiB due to the fixmap living in the
- * next 1GiB (see level2_kernel_pgt in arch/x86/kernel/head_64.S). Use
- * 512MiB by default, leaving 1.5GiB for modules once the page tables
- * are fully set up. If kernel ASLR is configured, it can extend the
- * kernel page table mapping, reducing the size of the modules area.
+ * next 1GiB (see level2_kernel_pgt in arch/x86/kernel/head_64.S).
*/
-#if defined(CONFIG_RANDOMIZE_BASE)
#define KERNEL_MAPPING_SIZE (1024 * 1024 * 1024)
-#else
-#define KERNEL_MAPPING_SIZE (512 * 1024 * 1024)
-#endif
#endif /* _ASM_X86_PAGE_64_DEFS_H */
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index cdfe4dc..3cc6dc6 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -458,14 +458,12 @@ NEXT_PAGE(level3_kernel_pgt)
NEXT_PAGE(level2_kernel_pgt)
/*
- * 512 MB kernel mapping. We spend a full page on this pagetable
- * anyway.
+ * 1 GB kernel mapping. We spend a full page on this pagetable.
*
- * The kernel code+data+bss must not be bigger than that.
+ * The kernel image size including code+data+bss must not be bigger
+ * than this.
*
- * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
- * If you want to increase this then increase MODULES_VADDR
- * too.)
+ * (NOTE: at +1GB starts the module area, see MODULES_VADDR.)
*/
PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
KERNEL_MAPPING_SIZE/PMD_SIZE)
--
2.5.5