[PATCH V2] arm64/mm: Create level specific section mappings in map_range()
From: Anshuman Khandual
Date: Mon Mar 10 2025 - 02:28:35 EST
Currently the PMD section mapping mask, i.e. PMD_TYPE_SECT, is used while
creating section mappings at all page table levels except the last level.
This works fine because the section mapping masks are exactly the same
(0x1UL) for all page table levels.
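For reference, these are the relevant descriptor type encodings
(reproduced here for illustration; the authoritative definitions live
in arch/arm64/include/asm/pgtable-hwdef.h):

	#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
	#define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
	#define P4D_TYPE_SECT		(_AT(p4dval_t, 1) << 0)
	#define PTE_TYPE_PAGE		(_AT(pteval_t, 3) << 0)

All the *_TYPE_SECT masks set bit[0] only, which is why the single
PMD_TYPE_SECT has worked at every level so far.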
This will change in the future with D128 page tables, which require
unique skip level (SKL) values for creating section mappings at
different page table levels. Hence use page table level specific
section mapping macros instead of the common PMD_TYPE_SECT. While at
it, also ensure that a section mapping is only created at page table
levels which can support it for the given page size configuration, and
fall back to creating table entries otherwise.
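In terms of behaviour, section mappings are then attempted as follows
(block sizes listed for illustration, for the 4K/16K/64K granules
respectively):

	level  0: 4K pages only (512GB blocks via P4D_TYPE_SECT)
	level  1: all page sizes (1GB/64GB/4TB blocks via PUD_TYPE_SECT)
	level  2: all page sizes (2MB/32MB/512MB blocks via PMD_TYPE_SECT)
	level  3: page mappings via PTE_TYPE_PAGE
	level -1: never; table entries only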
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Ard Biesheuvel <ardb@xxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
---
This patch applies on v6.14-rc6
Changes in V2:
- Dropped PGD_TYPE_SECT macro and its instance from map_range()
- Create table entries on levels where section mapping is not possible
Changes in V1:
https://lore.kernel.org/all/20250303041834.2796751-1-anshuman.khandual@xxxxxxx/
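For reviewers, a rough sketch of how @level reaches this code. The call
below is hypothetical (the variable names pa_pages, va, size, pa and
tbl are made up for illustration); see the callers in
arch/arm64/kernel/pi/ for the real call sites:

	/* root_level is -1 for 52-bit VA with 4K pages (LPA2) */
	int root_level = 4 - CONFIG_PGTABLE_LEVELS;

	map_range(&pa_pages, va, va + size, pa, PAGE_KERNEL,
		  root_level, (pte_t *)tbl, true, 0);

With five levels of lookup the root level is -1, where the architecture
provides no block descriptors, hence sect_supported() returning false
there.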
arch/arm64/kernel/pi/map_range.c | 38 +++++++++++++++++++++++++++++---
1 file changed, 35 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c
index 2b69e3beeef8..25a70c675c4d 100644
--- a/arch/arm64/kernel/pi/map_range.c
+++ b/arch/arm64/kernel/pi/map_range.c
@@ -11,6 +11,22 @@
 
 #include "pi.h"
 
+static bool sect_supported(int level)
+{
+	switch (level) {
+	case -1:
+		return false;
+	case 0:
+		if (IS_ENABLED(CONFIG_ARM64_16K_PAGES) ||
+		    IS_ENABLED(CONFIG_ARM64_64K_PAGES))
+			return false;
+		else
+			return true;
+	default:
+		return true;
+	}
+}
+
 /**
  * map_range - Map a contiguous range of physical pages into virtual memory
  *
@@ -44,13 +60,29 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
 	 * Set the right block/page bits for this level unless we are
 	 * clearing the mapping
 	 */
-	if (protval)
-		protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
+	if (protval && sect_supported(level)) {
+		switch (level) {
+		case 3:
+			protval |= PTE_TYPE_PAGE;
+			break;
+		case 2:
+			protval |= PMD_TYPE_SECT;
+			break;
+		case 1:
+			protval |= PUD_TYPE_SECT;
+			break;
+		case 0:
+			protval |= P4D_TYPE_SECT;
+			break;
+		default:
+			break;
+		}
+	}
 
 	while (start < end) {
 		u64 next = min((start | lmask) + 1, PAGE_ALIGN(end));
 
-		if (level < 3 && (start | next | pa) & lmask) {
+		if ((level < 3 && (start | next | pa) & lmask) || !sect_supported(level)) {
 			/*
 			 * This chunk needs a finer grained mapping. Create a
 			 * table mapping if necessary and recurse.
--
2.25.1