[PATCH v4 00/20] x86, boot: kaslr cleanup and 64bit kaslr support
From: Baoquan He
Date: Tue Mar 22 2016 - 03:32:51 EST
Previously a bug was reported that kdump didn't work when kaslr was enabled. While
discussing that bug fix, we found the current kaslr has a limitation: it can
only randomize within a 1GB region.
This is because the current kaslr implementation randomizes only the physical
address at which the kernel is loaded. It then calculates the delta between the
physical address vmlinux was linked to load at and the address where it is
finally loaded. If the delta is not zero, namely there's a new physical address
where the kernel is actually decompressed, relocation handling needs to be done:
the delta is added to the offset of each kernel symbol relocation, which moves
the kernel text mapping address by the same delta. So though in principle the
kernel can be randomized to any physical address, the kernel text mapping
address space is limited and only 1G, namely as follows on
In a word, the physical address and virtual address randomization of kernel
text are coupled. This causes the limitation.
Then hpa and Vivek suggested we change this: decouple the physical address
and virtual address randomization of kernel text and let them work
separately. Then the kernel text physical address can be randomized in region
[16M, 64T), and the kernel text virtual address can be randomized in region
***Problems we need to solve:
- For the kernel-boot-from-startup_32 case, only the 0~4G identity mapping is
built. If the kernel may be put anywhere from 16M up to 64T, the price of
building identity mappings for the whole region is too high. We need to build
the identity mapping on demand, not cover all of physical address space.
- Decouple the physical address and virtual address randomization of kernel
text and let them work separately.
- The 1st part is Yinghai's identity-mapping-on-demand patches.
These solve the first problem mentioned above.
- The 2nd part is the patches that decouple the physical address and virtual
address randomization of kernel text and let them work separately, built on
top of Yinghai's ident mapping patches.
- The 3rd part is some cleanup patches for issues Yinghai found while
reviewing my patches and the related code around them.
This patchset went through several rounds of review.
- The first round can be found here:
- In the 2nd round, Yinghai made a big patchset including this kaslr fix and
another setup_data-related fix. The link is here:
You can get the code from Yinghai's git branch:
- This post only takes care of the kaslr-related patches, since for reviewers
it's better to discuss only one issue in one thread.
* I took off one patch as follows from Yinghai's set because I think it's unnecessary.
- Patch 05/19 x86, kaslr: rename output_size to output_run_size
output_size is enough to represent the value:
output_len > run_size ? output_len : run_size
* I added Patch 04/19, a comment update patch. For the other patches, I just
adjusted the patch logs and made changes in several places compared with the
2nd round. Please check the change log under the patch log of each patch for details.
* Adjust sequence of several patches to make review easier. It doesn't
- Made changes according to Kees's comments.
Added one patch, 20/20, as Kees suggested, to use KERNEL_IMAGE_SIZE as the
offset max of virtual randomization, meanwhile cleaning up the useless CONFIG_RANDOM_OFFSET_MAX
x86, kaslr: Use KERNEL_IMAGE_SIZE as the offset max for kernel virtual randomization
You can also get this patchset from my github:
Any comments about code changes, code comments, patch logs are welcome and
Baoquan He (9):
x86, kaslr: Fix a bug that relocation can not be handled when kernel
is loaded above 2G
x86, kaslr: Update the description for decompressor worst case
x86, kaslr: Introduce struct slot_area to manage randomization slot
x86, kaslr: Add two functions which will be used later
x86, kaslr: Introduce fetch_random_virt_offset to randomize the kernel
text mapping address
x86, kaslr: Randomize physical and virtual address of kernel
x86, kaslr: Add support of kernel physical address randomization above
x86, kaslr: Remove useless codes
x86, kaslr: Use KERNEL_IMAGE_SIZE as the offset max for kernel virtual
Yinghai Lu (11):
x86, kaslr: Remove not needed parameter for choose_kernel_location
x86, boot: Move compressed kernel to end of buffer before
x86, boot: Move z_extract_offset calculation to header.S
x86, boot: Fix run_size calculation
x86, kaslr: Clean up useless code related to run_size.
x86, kaslr: Get correct max_addr for relocs pointer
x86, kaslr: Consolidate mem_avoid array filling
x86, boot: Split kernel_ident_mapping_init to another file
x86, 64bit: Set ident_mapping for kaslr
x86, boot: Add checking for memcpy
x86, kaslr: Allow random address to be below loaded address
arch/x86/Kconfig | 57 +++----
arch/x86/boot/Makefile | 13 +-
arch/x86/boot/compressed/Makefile | 19 ++-
arch/x86/boot/compressed/aslr.c | 300 +++++++++++++++++++++++++--------
arch/x86/boot/compressed/head_32.S | 14 +-
arch/x86/boot/compressed/head_64.S | 15 +-
arch/x86/boot/compressed/misc.c | 89 +++++-----
arch/x86/boot/compressed/misc.h | 34 ++--
arch/x86/boot/compressed/misc_pgt.c | 93 ++++++++++
arch/x86/boot/compressed/mkpiggy.c | 28 +--
arch/x86/boot/compressed/string.c | 29 +++-
arch/x86/boot/compressed/vmlinux.lds.S | 1 +
arch/x86/boot/header.S | 22 ++-
arch/x86/include/asm/boot.h | 19 +++
arch/x86/include/asm/page.h | 5 +
arch/x86/include/asm/page_64_types.h | 5 +-
arch/x86/kernel/asm-offsets.c | 1 +
arch/x86/kernel/vmlinux.lds.S | 1 +
arch/x86/mm/ident_map.c | 74 ++++++++
arch/x86/mm/init_32.c | 3 -
arch/x86/mm/init_64.c | 74 +-------
arch/x86/tools/calc_run_size.sh | 42 -----
22 files changed, 605 insertions(+), 333 deletions(-)
create mode 100644 arch/x86/boot/compressed/misc_pgt.c
create mode 100644 arch/x86/mm/ident_map.c
delete mode 100644 arch/x86/tools/calc_run_size.sh