Re: [PATCH v5 0/4] mm/sparse: Optimize memmap allocation during sparse_init()

From: Baoquan He
Date: Wed Jun 27 2018 - 19:39:49 EST


Hi Pavel,

On 06/27/18 at 01:47pm, Pavel Tatashin wrote:
> This work made me think why do we even have
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER ? This really should be the
> default behavior for all systems. Yet, it is enabled only on x86_64.
> We could clean up an already messy sparse.c if we removed this config,
> and enabled its path for all arches. We would not break anything
> because if we cannot allocate one large mem_map we still fall back to
> allocating a page at a time the same as what happens when
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=n.

Thanks for your idea.

It seems the common arches, such as x86, arm/64, power, s390 and mips,
all have ARCH_SPARSEMEM_ENABLE, while the others don't. For them,
removing CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER and making its path
the default makes sense.
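
For reference, the allocate-together-with-fallback behavior Pavel
describes boils down to the pattern below. This is only a self-contained
userspace sketch of the idea, not the real sparse.c code; the names and
sizes in it are made up for illustration.

#include <stdio.h>
#include <stdlib.h>

#define NR_SECTIONS      16
#define SECTION_MAP_SIZE 4096	/* stand-in for one section's memmap size */

int main(void)
{
	void *maps[NR_SECTIONS] = { NULL };

	/* First try: one contiguous buffer covering every section's memmap. */
	char *big = malloc((size_t)NR_SECTIONS * SECTION_MAP_SIZE);
	if (big) {
		for (int i = 0; i < NR_SECTIONS; i++)
			maps[i] = big + (size_t)i * SECTION_MAP_SIZE;
		printf("allocated all memmaps in one chunk\n");
	} else {
		/* Fallback: allocate each section's memmap separately. */
		for (int i = 0; i < NR_SECTIONS; i++)
			maps[i] = malloc(SECTION_MAP_SIZE);
		printf("fell back to per-section allocations\n");
	}

	printf("section 0 memmap at %p\n", maps[0]);
	/* Allocations are intentionally leaked; process exit reclaims them. */
	return 0;
}

The real sparse.c works per node with memblock allocations, but the
"try one big allocation, otherwise fall back to smaller ones" structure
is the same, which is why enabling that path everywhere should not
change behavior when the big allocation fails.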

I will make a cleanup patch to do this, though I can only test it on
x86. If the test robot or anyone else reports an issue with the cleanup
patch, Andrew can pick up only the current 4 patches after they are
updated, and we can continue discussing the cleanup separately. Judging
from the current code, it should be fine on all arches.

Thanks
Baoquan