Re: [PATCH 1/2] x86/64: Make kernel text mapping always take one whole page table in early boot code
From: Baoquan He
Date: Thu Dec 08 2016 - 03:40:08 EST
On 12/08/16 at 02:24pm, Alexander Kuleshov wrote:
> On 12-08-16, Baoquan He wrote:
> > In early boot code, level2_kernel_pgt is used to map the kernel text. Its
> > size varies according to KERNEL_IMAGE_SIZE and is fixed at compile time.
> > In fact we can make it always take all 512 entries of one whople page
> > table, because the later function cleanup_highmap() will clean up the
> > unused entries. With this change the kernel text mapping size can be
> > decided at runtime: 512M if kaslr is disabled, 1G if kaslr is enabled.
>
> s/whople/whole
Will change. Thanks!
>
> > Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> > ---
> > arch/x86/include/asm/page_64_types.h | 3 ++-
> > arch/x86/kernel/head_64.S | 15 ++++++++-------
> > arch/x86/mm/init_64.c | 2 +-
> > 3 files changed, 11 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
> > index 9215e05..62a20ea 100644
> > --- a/arch/x86/include/asm/page_64_types.h
> > +++ b/arch/x86/include/asm/page_64_types.h
> > @@ -56,8 +56,9 @@
> > * are fully set up. If kernel ASLR is configured, it can extend the
> > * kernel page table mapping, reducing the size of the modules area.
> > */
> > +#define KERNEL_MAPPING_SIZE_EXT (1024 * 1024 * 1024)
> > #if defined(CONFIG_RANDOMIZE_BASE)
> > -#define KERNEL_IMAGE_SIZE (1024 * 1024 * 1024)
> > +#define KERNEL_IMAGE_SIZE KERNEL_MAPPING_SIZE_EXT
> > #else
> > #define KERNEL_IMAGE_SIZE (512 * 1024 * 1024)
> > #endif
> > diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> > index b4421cc..c4b40e7c9 100644
> > --- a/arch/x86/kernel/head_64.S
> > +++ b/arch/x86/kernel/head_64.S
> > @@ -453,17 +453,18 @@ NEXT_PAGE(level3_kernel_pgt)
> >
> > NEXT_PAGE(level2_kernel_pgt)
> > /*
> > - * 512 MB kernel mapping. We spend a full page on this pagetable
> > - * anyway.
> > + * Kernel image size is limited to 512 MB. The kernel code+data+bss
> > + * must not be bigger than that.
> > *
> > - * The kernel code+data+bss must not be bigger than that.
> > + * We spend a full page on this pagetable anyway, so take the whole
> > + * page here so that the kernel mapping size can be decided at runtime,
> > + * 512M if no kaslr, 1G if kaslr enabled. Later cleanup_highmap will
> > + * clean up those unused entries.
> > *
> > - * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
> > - * If you want to increase this then increase MODULES_VADDR
> > - * too.)
> > + * The module area starts after kernel mapping area.
> > */
> > PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
> > - KERNEL_IMAGE_SIZE/PMD_SIZE)
> > + PTRS_PER_PMD)
> >
> > NEXT_PAGE(level2_fixmap_pgt)
> > .fill 506,8,0
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 14b9dd7..e95b977 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -307,7 +307,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
> > void __init cleanup_highmap(void)
> > {
> > unsigned long vaddr = __START_KERNEL_map;
> > - unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
> > + unsigned long vaddr_end = __START_KERNEL_map + KERNEL_MAPPING_SIZE_EXT;
> > unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
> > pmd_t *pmd = level2_kernel_pgt;
> >
> > --
> > 2.5.5
> >