[RFC 1/1] x86/vmemmap: Add missing update of PML4 table / PML5 table entry
From: Gwan-gyeong Mun
Date: Fri Feb 14 2025 - 14:53:07 EST
When performing vmemmap populate, if the PML4-table/PML5-table entry
pointing to the target virtual address has never been updated, a page fault
occurs when memset() is called on that address in the
vmemmap_use_new_sub_pmd() execution flow.
This fixes the problem of using the virtual address without first updating
the corresponding entry in the PML4 or PML5 table. It is only a temporary
solution that prevents the page fault, however; the routine that should
install the missing PML4/PML5 entry still needs a proper fix.
Fixes: faf1c0008a33 ("x86/vmemmap: optimize for consecutive sections in partial populated PMDs")
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@xxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Cc: Byungchul Park <byungchul@xxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
arch/x86/mm/init_64.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 01ea7c6df303..7a4d8cea1a2e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -912,6 +912,7 @@ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long
{
const unsigned long page = ALIGN_DOWN(start, PMD_SIZE);
+ sync_global_pgds(start, end - 1);
vmemmap_flush_unused_pmd();
/*
--
2.48.1