[PATCH v2] arm64: mm: reserve hugetlb CMA after numa_init

From: Barry Song
Date: Tue Jun 16 2020 - 18:21:49 EST


hugetlb_cma_reserve() is called in the wrong place: arm64_numa_init() has
not been done yet at that point, so all of the reserved CMA memory ends up
on node 0. Move the call into bootmem_init(), after arm64_numa_init(), so
the reservation is spread across the online NUMA nodes.
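
The reason the ordering matters: hugetlb_cma_reserve() splits the requested
CMA size across the online NUMA nodes and declares one CMA area per node,
so it has to run after the NUMA topology is known. A simplified sketch of
that loop (based on mm/hugetlb.c as introduced by cf11e85fc08c; rounding
and error handling elided):

	for_each_node_state(nid, N_ONLINE) {
		/* split the requested size evenly across online nodes */
		size = min(per_node, hugetlb_cma_size - reserved);
		size = round_up(size, PAGE_SIZE << order);

		/* one CMA area per node */
		cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
					   0, false, "hugetlb",
					   &hugetlb_cma[nid], nid);
		reserved += size;
	}

If this runs from arm64_memblock_init(), before arm64_numa_init(), only
node 0 is online, so every per-node area lands on node 0.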

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Cc: Matthias Brugger <matthias.bgg@xxxxxxxxx>
Acked-by: Roman Gushchin <guro@xxxxxx>
Signed-off-by: Barry Song <song.bao.hua@xxxxxxxxxxxxx>
---
-v2: add Fixes tag according to Matthias Brugger's comment

 arch/arm64/mm/init.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e631e6425165..41914b483d54 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -404,11 +404,6 @@ void __init arm64_memblock_init(void)
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
 	dma_contiguous_reserve(arm64_dma32_phys_limit);
-
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
-
 }
 
 void __init bootmem_init(void)
@@ -424,6 +419,11 @@ void __init bootmem_init(void)
 	min_low_pfn = min;
 
 	arm64_numa_init();
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+#endif
+
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
 	 * done after the fixed reservations.
--
2.23.0