[PATCH 5.7 020/376] arm64/kernel: Fix range on invalidating dcache for boot page tables
From: Greg Kroah-Hartman
Date: Fri Jun 19 2020 - 11:21:01 EST
From: Gavin Shan <gshan@xxxxxxxxxx>
[ Upstream commit 9d2d75ede59bc1edd8561f2ee9d4702a5ea0ae30 ]
Prior to commit 8eb7e28d4c642c31 ("arm64/mm: move runtime pgds to
rodata"), idmap_pg_dir, tramp_pg_dir, reserved_ttbr0, swapper_pg_dir,
and init_pg_dir were contiguous at the end of the kernel image. The
maintenance at the end of __create_page_tables assumed these were
contiguous, and affected everything from the start of idmap_pg_dir to
the end of init_pg_dir.

That commit moved all but init_pg_dir into the .rodata section, with
other data placed between idmap_pg_dir and init_pg_dir, but did not
update the maintenance. Hence the maintenance is performed on much more
data than necessary (though as the bootloader previously made this
region clean to the PoC, there is no functional problem).

As we only alter idmap_pg_dir and init_pg_dir, we only need to perform
maintenance for these. As the other page table directories live in
.rodata, the bootloader will have initialised them as expected and
cleaned them to the PoC; the kernel will initialise them as necessary
after enabling the MMU.

This patch reworks the maintenance to cover only idmap_pg_dir and
init_pg_dir, avoiding the unnecessary work.
Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
Reviewed-by: Mark Rutland <mark.rutland@xxxxxxx>
Link: https://lore.kernel.org/r/20200427235700.112220-1-gshan@xxxxxxxxxx
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
arch/arm64/include/asm/pgtable.h | 1 +
arch/arm64/kernel/head.S | 12 +++++++++---
arch/arm64/kernel/vmlinux.lds.S | 1 +
3 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 538c85e62f86..25f56df7ed9a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -457,6 +457,7 @@ extern pgd_t init_pg_dir[PTRS_PER_PGD];
 extern pgd_t init_pg_end[];
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t idmap_pg_end[];
 extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 57a91032b4c2..32f5ecbec0ea 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -394,13 +394,19 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	/*
 	 * Since the page tables have been populated with non-cacheable
-	 * accesses (MMU disabled), invalidate the idmap and swapper page
-	 * tables again to remove any speculatively loaded cache lines.
+	 * accesses (MMU disabled), invalidate those tables again to
+	 * remove any speculatively loaded cache lines.
 	 */
+	dmb	sy
+
 	adrp	x0, idmap_pg_dir
+	adrp	x1, idmap_pg_end
+	sub	x1, x1, x0
+	bl	__inval_dcache_area
+
+	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
 	sub	x1, x1, x0
-	dmb	sy
 	bl	__inval_dcache_area
 
 	ret	x28
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 497f9675071d..94402aaf5f5c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -139,6 +139,7 @@ SECTIONS
 
 	idmap_pg_dir = .;
 	. += IDMAP_DIR_SIZE;
+	idmap_pg_end = .;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
--
2.25.1