[RFC 2/2] fadump: Make fadump reserve_dump_area_start CMA aligned in case of holes
From: Ritesh Harjani (IBM)
Date: Tue Oct 08 2024 - 09:29:54 EST
Take CMA alignment into account while calculating the base address for
the fadump memory reservation. Physical memory ranges can have holes,
and fadump_locate_reserve_mem() tries to find a suitable base address.
If CONFIG_CMA is enabled and fw_dump.nocma is not set, then
reserve_dump_area_start must be aligned to CMA_MIN_ALIGNMENT_BYTES.
For example, with the memory layout below, the most suitable base
address for crashkernel=4097M is 0x00000501000000, which is 16M
(order 8) aligned as required by CMA_MIN_ALIGNMENT_BYTES on PPC64
during early boot (when pageblock_order is not yet initialized):
~ # cat /proc/iomem
00000000-1fffffff : System RAM
100000000-1ffffffff : System RAM
300000000-3ffffffff : System RAM
500200000-9001fffff : System RAM
~ # dmesg |grep -Ei "fadump|cma"
fadump: Reserved 4112MB of memory at 0x00000501000000 (System RAM: 25088MB)
fadump: Initialized 0x101000000 bytes cma area at 20496MB from 0x1010002a8 bytes of memory reserved for firmware-assisted dump
Kernel command line: root=/dev/vda1 console=ttyS0 nokaslr slub_max_order=0 norandmaps noreboot crashkernel=4097M fadump=on disable_radix=no debug_pagealloc=off
Memory: 21246656K/25690112K available (31872K kernel code, 4544K rwdata, 17280K rodata, 9216K init, 2212K bss, 218432K reserved, 4210688K cma-reserved)
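The arithmetic can be sanity-checked with the small userspace sketch
below. It assumes 64K pages and a pageblock_order of 8 (the values the
example above implies, giving a 16M alignment), and models ALIGN() and
CMA_MIN_ALIGNMENT_BYTES locally rather than using the kernel's
definitions:

#include <stdio.h>

/* Assumed early-boot values for this PPC64 config, not kernel headers. */
#define PAGE_SIZE               (64ULL * 1024)             /* 64K pages */
#define PAGEBLOCK_ORDER         8
#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE << PAGEBLOCK_ORDER)  /* 16M */

/* Same round-up that the kernel's ALIGN()/PAGE_ALIGN() perform. */
#define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        /* Start of the last System RAM range above, after the hole. */
        unsigned long long mstart = 0x500200000ULL;

        /*
         * Old behaviour: 0x500200000 is already 64K aligned, so
         * PAGE_ALIGN() returns it unchanged -- not a valid CMA base.
         */
        printf("page aligned: 0x%llx\n", ALIGN(mstart, PAGE_SIZE));

        /* New behaviour: 16M alignment gives the address seen in dmesg. */
        printf("cma aligned:  0x%llx\n",
               ALIGN(mstart, CMA_MIN_ALIGNMENT_BYTES));
        return 0;
}

This prints 0x500200000 and 0x501000000 respectively, the latter
matching the reserved address in the dmesg output above.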
Reported-by: Sourabh Jain <sourabhjain@xxxxxxxxxxxxx>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
---
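Not intended for the changelog: below is a rough userspace model of the
fadump_locate_reserve_mem() walk over the free ranges from the layout
above, just to show where the new alignment lands. It skips the
overlaps_reserved_ranges() handling, and the names and the simplified
loop are illustrative only:

#include <stdio.h>

#define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))

struct range { unsigned long long start, end; };

int main(void)
{
        /* System RAM per the /proc/iomem above; end is exclusive here. */
        struct range ram[] = {
                { 0x000000000ULL, 0x020000000ULL },
                { 0x100000000ULL, 0x200000000ULL },
                { 0x300000000ULL, 0x400000000ULL },
                { 0x500200000ULL, 0x900200000ULL },
        };
        unsigned long long size  = 0x101000000ULL; /* 4112M, per dmesg  */
        unsigned long long align = 16ULL << 20;    /* CMA min alignment */
        unsigned long long base  = 0;

        for (unsigned int i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
                if (ram[i].start > base)
                        base = ALIGN(ram[i].start, align);
                if (ram[i].end > base && ram[i].end - base >= size) {
                        printf("base = 0x%llx\n", base);
                        return 0;
                }
        }
        printf("no suitable range\n");
        return 1;
}

With align = 16M this prints base = 0x501000000; with plain page
alignment it would settle on 0x500200000, which cma_init_reserved_mem()
would reject for not being CMA_MIN_ALIGNMENT_BYTES aligned.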
arch/powerpc/kernel/fadump.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index a612e7513a4f..15ea9c80bc03 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -512,6 +512,10 @@ static u64 __init fadump_locate_reserve_mem(u64 base, u64 size)
phys_addr_t mstart, mend;
int idx = 0;
u64 i, ret = 0;
+ unsigned long align = PAGE_SIZE;
+
+ if (IS_ENABLED(CONFIG_CMA) && !fw_dump.nocma)
+ align = CMA_MIN_ALIGNMENT_BYTES;
mrngs = reserved_mrange_info.mem_ranges;
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
@@ -520,7 +524,7 @@ static u64 __init fadump_locate_reserve_mem(u64 base, u64 size)
i, mstart, mend, base);
if (mstart > base)
- base = PAGE_ALIGN(mstart);
+ base = ALIGN(mstart, align);
while ((mend > base) && ((mend - base) >= size)) {
if (!overlaps_reserved_ranges(base, base+size, &idx)) {
@@ -529,7 +533,7 @@ static u64 __init fadump_locate_reserve_mem(u64 base, u64 size)
}
base = mrngs[idx].base + mrngs[idx].size;
- base = PAGE_ALIGN(base);
+ base = ALIGN(base, align);
}
}
--
2.46.0