[PATCH v6 1/3] x86/mm: PUD VA support for physical mapping (x86_64)

From: Kees Cook
Date: Wed May 25 2016 - 19:05:44 EST


From: Thomas Garnier <thgarnie@xxxxxxxxxx>

Minor change that allows the early boot code to create physical mappings at
PUD-level virtual addresses. The current implementation expects the virtual
address to be PUD aligned. For KASLR memory randomization, we need to be
able to randomize the offset used in the PUD table.

It has no impact on current usage.

Signed-off-by: Thomas Garnier <thgarnie@xxxxxxxxxx>
Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
arch/x86/mm/init_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bce2e5d9edd4..f205f39bd808 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -454,10 +454,10 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
{
unsigned long pages = 0, next;
unsigned long last_map_addr = end;
- int i = pud_index(addr);
+ int i = pud_index((unsigned long)__va(addr));

for (; i < PTRS_PER_PUD; i++, addr = next) {
- pud_t *pud = pud_page + pud_index(addr);
+ pud_t *pud = pud_page + pud_index((unsigned long)__va(addr));
pmd_t *pmd;
pgprot_t prot = PAGE_KERNEL;

--
2.6.3