[PATCH v2 2/3] x86/mm: simplify calculation of max_pfn_mapped

From: Brendan Jackman

Date: Sun May 03 2026 - 09:05:54 EST


The phys_*_init() helpers return the "last physical address mapped". The
exact definition of this is pretty fiddly, but only under these conditions:

1. There is a mismatch between the alignment of the requested range and
the page sizes allowed by page_size_mask

2. The range ends in a region that is not mapped according to
e820.

3. The range ends in a region that was already mapped. (Note this case is
   particularly fiddly because the return value depends on the level at
   which the existing mapping was made; this is probably a bug, see [0]
   for discussion.)

Luckily, init_memory_mapping() avoids all of these conditions, so there
the return value is simply paddr_end. Since the caller already knows that
value, there is no need to depend on the confusing return value.

[0]: https://lore.kernel.org/all/84b2e7a3-7115-45fe-89ff-db8ee46729f2@xxxxxxxxx/

Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
---
arch/x86/mm/init.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ae3e9e0820153..1a6a6fc700bb5 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -544,10 +544,11 @@ void __ref init_memory_mapping(unsigned long start,
memset(mr, 0, sizeof(mr));
nr_range = split_mem_range(mr, 0, start, end);

- for (i = 0; i < nr_range; i++)
- paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
- mr[i].page_size_mask,
- prot);
+ for (i = 0; i < nr_range; i++) {
+ kernel_physical_mapping_init(mr[i].start, mr[i].end,
+ mr[i].page_size_mask, prot);
+ paddr_last = mr[i].end;
+ }

add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
}

--
2.51.2