Re: [PATCH v3 24/29] arch, mm: consolidate initialization of SPARSE memory model
From: Mike Rapoport
Date: Wed Feb 25 2026 - 11:31:08 EST
Hello Ritesh,
On Wed, Feb 25, 2026 at 09:00:35AM +0530, Ritesh Harjani wrote:
> Mike Rapoport <rppt@xxxxxxxxxx> writes:
>
> > From: "Mike Rapoport (Microsoft)" <rppt@xxxxxxxxxx>
> >
> > Every architecture calls sparse_init() during setup_arch() although the
> > data structures created by sparse_init() are not used until the
> > initialization of the core MM.
> >
> > Besides the code duplication, calling sparse_init() from architecture
> > specific code causes ordering differences of vmemmap and HVO initialization
> > on different architectures.
> >
> > Move the call to sparse_init() from architecture specific code to
> > free_area_init() to ensure that vmemmap and HVO initialization order is
> > always the same.
> >
>
> Hello Mike,
>
> [ 0.000000][ T0] ------------[ cut here ]------------
> [ 0.000000][ T0] WARNING: arch/powerpc/include/asm/io.h:879 at virt_to_phys+0x44/0x1b8, CPU#0: swapper/0
> [ 0.000000][ T0] Modules linked in:
> [ 0.000000][ T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted 6.19.0-12139-gc57b1c00145a #31 PREEMPT
> [ 0.000000][ T0] Hardware name: IBM pSeries (emulated by qemu) POWER10 (architected) 0x801200 0xf000006 of:SLOF,git-ee03ae pSeries
> [ 0.000000][ T0] NIP: c000000000601584 LR: c000000004075de4 CTR: c000000000601548
> [ 0.000000][ T0] REGS: c000000004d1f870 TRAP: 0700 Not tainted (6.19.0-12139-gc57b1c00145a)
> [ 0.000000][ T0] MSR: 8000000000021033 <SF,ME,IR,DR,RI,LE> CR: 48022448 XER: 20040000
> [ 0.000000][ T0] CFAR: c0000000006016c4 IRQMASK: 1
> [ 0.000000][ T0] GPR00: c000000004075dd4 c000000004d1fb10 c00000000304bb00 c000000180000000
> [ 0.000000][ T0] GPR04: 0000000000000009 0000000000000009 c000000004ec94a0 0000000000000000
> [ 0.000000][ T0] GPR08: 0000000000018000 0000000000000001 c000000004921280 0000000048022448
> [ 0.000000][ T0] GPR12: c000000000601548 c000000004fe0000 0000000000000004 0000000000000004
> [ 0.000000][ T0] GPR16: 000000000287fb08 0000000000000060 0000000000000002 0000000002831750
> [ 0.000000][ T0] GPR20: 0000000002831778 fffffffffffffffd c000000004d78050 00000000051cbb00
> [ 0.000000][ T0] GPR24: 0000000005a40008 c000000000000000 c000000000400000 0000000000000100
> [ 0.000000][ T0] GPR28: c000000004d78050 0000000000000000 c000000004ecd4a8 0000000000000001
> [ 0.000000][ T0] NIP [c000000000601584] virt_to_phys+0x44/0x1b8
> [ 0.000000][ T0] LR [c000000004075de4] alloc_bootmem+0x144/0x1a8
> [ 0.000000][ T0] Call Trace:
> [ 0.000000][ T0] [c000000004d1fb50] [c000000004075dd4] alloc_bootmem+0x134/0x1a8
> [ 0.000000][ T0] [c000000004d1fba0] [c000000004075fac] __alloc_bootmem_huge_page+0x164/0x230
> [ 0.000000][ T0] [c000000004d1fbe0] [c000000004030bc4] alloc_bootmem_huge_page+0x44/0x138
> [ 0.000000][ T0] [c000000004d1fc10] [c000000004076e48] hugetlb_hstate_alloc_pages+0x350/0x5ac
> [ 0.000000][ T0] [c000000004d1fd30] [c0000000040782f0] hugetlb_bootmem_alloc+0x15c/0x19c
> [ 0.000000][ T0] [c000000004d1fd70] [c00000000406d7b4] mm_core_init_early+0x7c/0xdf4
> [ 0.000000][ T0] [c000000004d1ff30] [c000000004011d84] start_kernel+0xac/0xc58
> [ 0.000000][ T0] [c000000004d1ffe0] [c00000000000e99c] start_here_common+0x1c/0x20
> [ 0.000000][ T0] Code: 6129ffff 792907c6 6529ffff 6129ffff 7c234840 40810018 3d2201e8 3929a7a8 e9290000 7c291840 41810044 3be00001 <0b1f0000> 3d20bfff 6129ffff 792907c6
>
>
> I think this is happening because the order of initialization between
> hugetlb_bootmem_alloc() and free_area_init() is now reversed in
> mm_core_init_early(), and free_area_init() -> sparse_init() is what sets
> up the memory sections and the vmemmap area.
>
> Then alloc_bootmem() (on the hugetlb_bootmem_alloc() path) uses virt_to_phys(m)...
>
>         /*
>          * For pre-HVO to work correctly, pages need to be on
>          * the list for the node they were actually allocated
>          * from. That node may be different in the case of
>          * fallback by memblock_alloc_try_nid_raw. So,
>          * extract the actual node first.
>          */
>         if (m)
>                 listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
>
>
> ... virt_to_phys on powerpc uses:
>
> static inline unsigned long virt_to_phys(const volatile void * address)
> {
>         WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && !virt_addr_valid(address));
>
>         return __pa((unsigned long)address);
> }
>
> #define virt_addr_valid(vaddr)        ({                               \
>         unsigned long _addr = (unsigned long)vaddr;                    \
>         _addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&  \
>         pfn_valid(virt_to_pfn((void *)_addr));                         \
> })
>
>
> I think the warning in the dmesg above is printed by this WARN_ON(), i.e.
> pfn_valid() returns false because we haven't done sparse_init() yet.
Yes, I agree.
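With SPARSEMEM, pfn_valid() has to look up the pfn's mem_section entry,
and the sections are only marked valid by sparse_init(). Roughly (a
simplified sketch of the logic, not the exact code from
include/linux/mmzone.h):

        /*
         * Simplified: until sparse_init() populates mem_section[],
         * valid_section() is false for every section, so pfn_valid()
         * fails for all pfns and the DEBUG_VIRTUAL WARN_ON() fires.
         */
        static inline int pfn_valid(unsigned long pfn)
        {
                struct mem_section *ms;

                if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                        return 0;
                ms = __pfn_to_section(pfn);

                /* SECTION_HAS_MEM_MAP is set by sparse_init() */
                return valid_section(ms);
        }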
> So, what I wanted to check was: do you think that instead of virt_to_phys()
> we could directly use __pa() here in mm/hugetlb.c, since these are
> memblock-allocated addresses? i.e.:
>
> // alloc_bootmem():
> - listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
> + listnode = early_pfn_to_nid(PHYS_PFN(__pa(m)));
>
> // __alloc_bootmem_huge_page():
> - memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
> + memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE),
It surely will work for powerpc :)
I checked the definitions of __pa() on other architectures and it seems to
be the safest and easiest way to fix this.
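Unlike virt_to_phys(), __pa() does not go through pfn_valid(), it is plain
address arithmetic on the linear map. As a rough sketch (the asm-generic
flavour; architectures may add masking or debug checks on top):

        /* sketch: linear-map virtual -> physical, no validity check */
        #define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)

so it works before sparse_init() has populated the section data.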
Would you send a formal patch?
> Thoughts?
>
> -ritesh
--
Sincerely yours,
Mike.