Re: [RFC PATCH] mm/sparse: remove sparse_buffer
From: Muchun Song
Date: Fri Apr 10 2026 - 02:06:27 EST
> On Apr 10, 2026, at 11:07, Muchun Song <muchun.song@xxxxxxxxx> wrote:
>
>
>
>> On Apr 9, 2026, at 23:10, Mike Rapoport <rppt@xxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> On Thu, Apr 09, 2026 at 02:29:38PM +0200, David Hildenbrand (Arm) wrote:
>>> On 4/9/26 13:40, Muchun Song wrote:
>>>>
>>>>
>>>>> On Apr 8, 2026, at 21:40, David Hildenbrand (Arm) <david@xxxxxxxxxx> wrote:
>>>>>
>>>>> On 4/7/26 10:39, Muchun Song wrote:
>>>>>> The sparse_buffer was originally introduced in commit 9bdac9142407
>>>>>> ("sparsemem: Put mem map for one node together.") to allocate a
>>>>>> contiguous block of memory for all memmaps of a NUMA node.
>>>>>>
>>>>>> However, the original commit message did not clearly state the actual
>>>>>> benefits or the necessity of keeping all memmap areas strictly
>>>>>> contiguous for a given node.
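(For readers following along: the preallocation pattern in question looks
roughly like the sketch below, condensed from mm/sparse.c. Error handling,
sparse_buffer_fini() and the freeing of alignment padding are omitted, so
treat it as a simplified illustration rather than the current source.)

static void *sparsemap_buf __meminitdata;
static void *sparsemap_buf_end __meminitdata;

/* One large memblock allocation up front, sized for all memmaps of the node. */
static void __init sparse_buffer_init(unsigned long size, int nid)
{
	sparsemap_buf = memmap_alloc(size, section_map_size(),
				     __pa(MAX_DMA_ADDRESS), nid, true);
	sparsemap_buf_end = sparsemap_buf + size;
}

/* Per-section callers then carve their memmap out of that one buffer. */
void * __meminit sparse_buffer_alloc(unsigned long size)
{
	void *ptr = NULL;

	if (sparsemap_buf) {
		ptr = (void *)roundup((unsigned long)sparsemap_buf, size);
		if (ptr + size > sparsemap_buf_end)
			return NULL;
		sparsemap_buf = ptr + size;
	}
	return ptr;
}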
>>>>>
>>>>> We don't want the memmap to be scattered around, given that it is one of
>>>>> the biggest allocations during boot.
>>>>>
>>>>> It's related to not turning too many memory blocks/sections
>>>>> un-offlinable I think.
>>>>>
>>>>> I always imagined that memblock would still keep these allocations close
>>>>> to each other. Can you verify if that is indeed true?
>>>>
>>>> You raised a very interesting point about whether memblock keeps
>>>> these allocations close to each other. I've done a thorough test
>>>> on a 16GB VM by printing the actual physical allocations.
>>
>> memblock always allocates in order, so if there are no other memblock
>> allocations between the calls to memmap_alloc(), all these allocations will
>> be together and they all will be coalesced to a single region in
>> memblock.reserved.
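(A minimal illustration of what that coalescing means; the physical
addresses in the comments are only examples, mirroring the descending
2MB chunks in the log further down:)

/*
 * Two early allocations back to back, with no other memblock activity
 * in between.  With the default top-down policy the second block lands
 * immediately below the first, and the two reserved ranges are merged
 * into a single entry in memblock.reserved.
 */
void *a = memblock_alloc(PMD_SIZE, PMD_SIZE);   /* e.g. phys ...bfc00000 */
void *b = memblock_alloc(PMD_SIZE, PMD_SIZE);   /* e.g. phys ...bfa00000 */

/* memblock.reserved now holds one region spanning ...bfa00000-...bfdfffff */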
>>
>>>> I enabled the existing debug logs in arch/x86/mm/init_64.c to
>>>> trace the vmemmap_set_pmd allocations. Here is what really happens:
>>>>
>>>> When using vmemmap_alloc_block without sparse_buffer, the
>>>> memblock allocator allocates 2MB chunks. Because memblock
>>>> allocates top-down by default, the physical allocations look
>>>> like this:
>>>>
>>>> [ffe6475cc0000000-ffe6475cc01fffff] PMD -> [ff3cb082bfc00000-ff3cb082bfdfffff] on node 0
>>>> [ffe6475cc0200000-ffe6475cc03fffff] PMD -> [ff3cb082bfa00000-ff3cb082bfbfffff] on node 0
>>>> [ffe6475cc0400000-ffe6475cc05fffff] PMD -> [ff3cb082bf800000-ff3cb082bf9fffff] on node 0
>>
>> ...
>>
>>>> Notice that the physical chunks are strictly adjacent to each
>>>> other, but in descending order!
>>>>
>>>> So, they are NOT "scattered around" the whole node randomly.
>>>> Instead, they are packed densely back-to-back in a single
>>>> contiguous physical range (just allocated top-down in 2MB pieces).
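(The early-boot path being exercised here is roughly the following,
condensed from mm/sparse-vmemmap.c; the post-boot page-allocator branch
is left out:)

static void * __ref __earlyonly_bootmem_alloc(int node, unsigned long size,
					      unsigned long align,
					      unsigned long goal)
{
	return memblock_alloc_try_nid_raw(size, align, goal,
					  MEMBLOCK_ALLOC_ACCESSIBLE, node);
}

void * __meminit vmemmap_alloc_block(unsigned long size, int node)
{
	/*
	 * During early boot the buddy allocator is not up yet, so each
	 * 2MB vmemmap chunk comes straight from memblock, which hands
	 * out addresses top-down by default, hence the descending
	 * physical ranges in the log above.
	 */
	if (!slab_is_available())
		return __earlyonly_bootmem_alloc(node, size, size,
						 __pa(MAX_DMA_ADDRESS));

	/* slab is up: regular page allocator path, not relevant here */
	return NULL;
}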
>>>>
>>>> Because they are packed tightly together within the same
>>>> contiguous physical memory range, they consume or pollute at most
>>>> the same number of memory blocks as a single contiguous
>>>> allocation (like sparse_buffer did). Therefore, this will NOT
>>>> turn any additional memory blocks/sections into an
>>>> "un-offlinable" state.
>>>>
>>>> It seems we can safely remove the sparse buffer preallocation
>>>> mechanism, don't you think?
>>>
>>> Yes, that's what I suspected. Is there a performance implication when doing
>>> many individual memmap_alloc(), for example, on a larger system with
>>> many sections?
>>
>> memmap_alloc() will be slower than sparse_buffer_alloc(), since allocating
>> from memblock is more involved, but without measurements it's hard to tell
>> how much it'll affect overall sparse_init().
>
> I ran a test on a 256GB VM, and the results are as follows:
>
> With patch: 741,292 ns
> Without patch: 199,555 ns
>
> That is approximately 3.7x slower with the patch applied.
I also tested with 512GB, and the result was roughly twice that of the
256GB case (about 1.5 ms), so for a 1TB machine the memmap allocation
time should still only be a few milliseconds. It seems we don't need to
worry about the 3.7x slowdown.
>
> Thanks,
> Muchun
>
>>
>>> --
>>> Cheers,
>>>
>>> David
>>
>> --
>> Sincerely yours,
>> Mike.