Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init

From: Matthias Brugger
Date: Sun Jun 07 2020 - 16:14:46 EST

On 03/06/2020 05:22, Roman Gushchin wrote:
> On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
>> hugetlb_cma_reserve() is called at the wrong place: numa_init has not
>> been done yet, so all reserved memory will be located on node 0.
>>
>> Cc: Roman Gushchin <guro@xxxxxx>
>> Signed-off-by: Barry Song <song.bao.hua@xxxxxxxxxxxxx>
>
> Acked-by: Roman Gushchin <guro@xxxxxx>
>

When did this break, or has it been broken since the beginning?
In any case, could you provide a "Fixes" tag for it, so that it can easily be
backported to older releases?

Regards,
Matthias

> Thanks!
>
>> ---
>> arch/arm64/mm/init.c | 10 +++++-----
>> 1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index e42727e3568e..8f0e70ebb49d 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
>>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
>>  
>>  	dma_contiguous_reserve(arm64_dma32_phys_limit);
>> -
>> -#ifdef CONFIG_ARM64_4K_PAGES
>> -	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
>> -#endif
>> -
>>  }
>>  
>>  void __init bootmem_init(void)
>> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
>>  	min_low_pfn = min;
>>  
>>  	arm64_numa_init();
>> +
>> +#ifdef CONFIG_ARM64_4K_PAGES
>> +	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
>> +#endif
>> +
>>  	/*
>>  	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
>>  	 * done after the fixed reservations.
>> --
>> 2.23.0
>>
>>
>