Re: [PATCH 1/4] mm: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()
From: Daniel Vacek
Date: Wed Mar 21 2018 - 11:05:07 EST
On Wed, Mar 21, 2018 at 1:28 PM, Jia He <hejianet@xxxxxxxxx> wrote:
>
> On 3/21/2018 6:14 PM, Daniel Vacek wrote:
>>
>> On Wed, Mar 21, 2018 at 9:09 AM, Jia He <hejianet@xxxxxxxxx> wrote:
>>>
>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>> where possible") optimized the loop in memmap_init_zone(). But there is
>>> still some room for improvement. E.g. if pfn and pfn+1 are in the same
>>> memblock region, we can simply increment pfn instead of doing the binary
>>> search in memblock_next_valid_pfn().
>>
>> There is a
>> revert-mm-page_alloc-skip-over-regions-of-invalid-pfns-where-possible.patch
>> in -mm reverting b92df1de5d289c0b as it is fundamentally wrong by
>> design causing system panics on some machines with rare but still
>> valid mappings. Basically it skips valid pfns which are outside of
>> usable memory ranges (outside of memblock memory regions).
>
> Thanks for the information.
> Quote from your patch description:
>> But given some specific memory mapping on x86_64 (or more generally
>> theoretically anywhere but on arm with CONFIG_HAVE_ARCH_PFN_VALID) the
>> implementation also skips valid pfns which is plain wrong and causes
>> 'kernel BUG at mm/page_alloc.c:1389!'
>
> Do you think memblock_next_valid_pfn can remain to be not reverted on arm64
> with CONFIG_HAVE_ARCH_PFN_VALID? Arm64 can benefit from this optimization.
I guess this is a question for maintainers. I am really not sure about
arm(64) but if this function is correct at least for arm(64) with arch
pfn_valid(), which is likely, then I'd say it should be moved
somewhere to arch/arm{,64}/mm/ (init.c maybe?) and #ifdefed properly.
Ard?
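For illustration only, here is a rough sketch of the optimization being discussed, assuming memblock memory regions are modeled as sorted, non-overlapping [base, end) pfn ranges. All names below are hypothetical stand-ins, not the actual kernel implementation: the idea is simply to cache the index of the last matching region so that consecutive pfns inside the same region skip the binary search entirely.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for memblock memory regions: sorted,
 * non-overlapping [base, end) pfn ranges. */
struct region { unsigned long base, end; };

static const struct region regions[] = {
	{ 0x100, 0x200 },
	{ 0x400, 0x500 },
};
static const size_t nr_regions = sizeof(regions) / sizeof(regions[0]);

/* Index of the region that matched on the previous call. */
static size_t cached_idx;

/* Return the next pfn >= pfn that falls inside some region,
 * or (unsigned long)-1 if pfn is past the last region. */
static unsigned long next_valid_pfn(unsigned long pfn)
{
	size_t lo = 0, hi = nr_regions;

	/* Fast path: pfn is still in the cached region, so the
	 * binary search below is unnecessary (the pfn++ case). */
	if (pfn >= regions[cached_idx].base && pfn < regions[cached_idx].end)
		return pfn;

	/* Slow path: binary search for the first region whose end
	 * is above pfn. */
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (regions[mid].end <= pfn)
			lo = mid + 1;
		else
			hi = mid;
	}
	if (lo == nr_regions)
		return (unsigned long)-1;
	cached_idx = lo;
	return pfn < regions[lo].base ? regions[lo].base : pfn;
}
```

Note this sketch only shows the caching idea; it says nothing about the correctness concern raised above, namely that valid pfns can exist outside memblock memory regions on some configurations, in which case skipping them is wrong regardless of how fast the lookup is.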
> Cheers,
> Jia