Re: [PATCH v4 1/3] mm/page_alloc: Optimize free_contig_range()
From: Muhammad Usama Anjum
Date: Mon Mar 30 2026 - 12:42:21 EST
On 30/03/2026 3:30 pm, Vlastimil Babka (SUSE) wrote:
> On 3/27/26 13:57, Muhammad Usama Anjum wrote:
>> From: Ryan Roberts <ryan.roberts@xxxxxxx>
>>
>> Decompose the range of order-0 pages to be freed into the set of largest
>> possible power-of-2 size and aligned chunks and free them to the pcp or
>> buddy. This improves on the previous approach which freed each order-0
>> page individually in a loop. Testing shows performance to be improved by
>> more than 10x in some cases.
>>
>> Since each page is order-0, we must decrement each page's reference
>> count individually and only consider the page for freeing as part of a
>> high order chunk if the reference count goes to zero. Additionally,
>> free_pages_prepare() must be called for each individual order-0 page so
>> that the struct page state and global accounting state can be
>> appropriately managed. Once this is done, the resulting high order
>> chunks can be freed as a unit to the pcp or buddy.
>>
>> This significantly speeds up the free operation but also has the side
>> benefit that high order blocks are added to the pcp instead of each page
>> ending up on the pcp order-0 list; memory remains more readily available
>> in high orders.
>>
>> vmalloc will shortly become a user of this new optimized
>> free_contig_range() since it aggressively allocates high order
>> non-compound pages, but then calls split_page() to end up with
>> contiguous order-0 pages. These can now be freed much more efficiently.
>>
>> The execution time of the following function was measured in a server
>> class arm64 machine:
>>
>> static int page_alloc_high_order_test(void)
>> {
>> 	unsigned int order = HPAGE_PMD_ORDER;
>> 	struct page *page;
>> 	int i;
>>
>> 	for (i = 0; i < 100000; i++) {
>> 		page = alloc_pages(GFP_KERNEL, order);
>> 		if (!page)
>> 			return -1;
>> 		split_page(page, order);
>> 		free_contig_range(page_to_pfn(page), 1UL << order);
>> 	}
>>
>> 	return 0;
>> }
>>
>> Execution time before: 4097358 usec
>> Execution time after: 729831 usec
>>
>> Perf trace before:
>>
>> 99.63% 0.00% kthreadd [kernel.kallsyms] [.] kthread
>> |
>> ---kthread
>> 0xffffb33c12a26af8
>> |
>> |--98.13%--0xffffb33c12a26060
>> | |
>> | |--97.37%--free_contig_range
>> | | |
>> | | |--94.93%--___free_pages
>> | | | |
>> | | | |--55.42%--__free_frozen_pages
>> | | | | |
>> | | | | --43.20%--free_frozen_page_commit
>> | | | | |
>> | | | | --35.37%--_raw_spin_unlock_irqrestore
>> | | | |
>> | | | |--11.53%--_raw_spin_trylock
>> | | | |
>> | | | |--8.19%--__preempt_count_dec_and_test
>> | | | |
>> | | | |--5.64%--_raw_spin_unlock
>> | | | |
>> | | | |--2.37%--__get_pfnblock_flags_mask.isra.0
>> | | | |
>> | | | --1.07%--free_frozen_page_commit
>> | | |
>> | | --1.54%--__free_frozen_pages
>> | |
>> | --0.77%--___free_pages
>> |
>> --0.98%--0xffffb33c12a26078
>> alloc_pages_noprof
>>
>> Perf trace after:
>>
>> 8.42% 2.90% kthreadd [kernel.kallsyms] [k] __free_contig_range
>> |
>> |--5.52%--__free_contig_range
>> | |
>> | |--5.00%--free_prepared_contig_range
>> | | |
>> | | |--1.43%--__free_frozen_pages
>> | | | |
>> | | | --0.51%--free_frozen_page_commit
>> | | |
>> | | |--1.08%--_raw_spin_trylock
>> | | |
>> | | --0.89%--_raw_spin_unlock
>> | |
>> | --0.52%--free_pages_prepare
>> |
>> --2.90%--ret_from_fork
>> kthread
>> 0xffffae1c12abeaf8
>> 0xffffae1c12abe7a0
>> |
>> --2.69%--vfree
>> __free_contig_range
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>> Co-developed-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
>> ---
>> Changes since v3:
>> - Move __free_contig_range() to a more generic __free_contig_range_common()
>>   which will be used to free frozen pages as well
>> - Simplify the loop in __free_contig_range_common()
>> - Rewrite the comment
>>
>> Changes since v2:
>> - Handle different possible section boundaries in __free_contig_range()
>> - Drop the TODO
>> - Remove return value from __free_contig_range()
>> - Remove non-functional change from __free_pages_ok()
>>
>> Changes since v1:
>> - Rebase on mm-new
>> - Move FPI_PREPARED check inside __free_pages_prepare() now that
>> fpi_flags are already being passed.
>> - Add todo (Zi Yan)
>> - Rerun benchmarks
>> - Convert VM_BUG_ON_PAGE() to VM_WARN_ON_ONCE()
>> - Rework order calculation in free_prepared_contig_range() and use
>> MAX_PAGE_ORDER as high limit instead of pageblock_order as it must
>> be up to internal __free_frozen_pages() how it frees them
>> ---
>> include/linux/gfp.h | 2 +
>> mm/page_alloc.c | 103 +++++++++++++++++++++++++++++++++++++++++++-
>> 2 files changed, 103 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index f82d74a77cad8..7c1f9da7c8e56 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -467,6 +467,8 @@ void free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages);
>> void free_contig_range(unsigned long pfn, unsigned long nr_pages);
>> #endif
>>
>> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages);
>> +
>> DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
>>
>> #endif /* __LINUX_GFP_H */
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 75ee81445640b..18a96b51aa0be 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -91,6 +91,9 @@ typedef int __bitwise fpi_t;
>> /* Free the page without taking locks. Rely on trylock only. */
>> #define FPI_TRYLOCK ((__force fpi_t)BIT(2))
>>
>> +/* free_pages_prepare() has already been called for page(s) being freed. */
>> +#define FPI_PREPARED ((__force fpi_t)BIT(3))
>> +
>> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>> static DEFINE_MUTEX(pcp_batch_high_lock);
>> #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
>> @@ -1310,6 +1313,9 @@ __always_inline bool __free_pages_prepare(struct page *page,
>
> Hm I noticed the function isn't static, but it should be, and this is a good
> oportunity to make it so.
I'll make it static in v5.
>
>> bool compound = PageCompound(page);
>> struct folio *folio = page_folio(page);
>>
>> + if (fpi_flags & FPI_PREPARED)
>> + return true;
>> +
>> VM_BUG_ON_PAGE(PageTail(page), page);
>>
>> trace_mm_page_free(page, order);
>
> ...
>
>> +/**
>> + * __free_contig_range - Free contiguous range of order-0 pages.
>> + * @pfn: Page frame number of the first page in the range.
>> + * @nr_pages: Number of pages to free.
>> + *
>> + * For each order-0 struct page in the physically contiguous range, put a
>> + * reference. Free any page whose reference count falls to zero. The
>> + * implementation is functionally equivalent to, but significantly faster
>> + * than, calling __free_page() for each struct page in a loop.
>> + *
>> + * Memory allocated with alloc_pages(order >= 1) and subsequently split to
>> + * order-0 with split_page() is an example of contiguous pages that can be
>> + * freed with this API.
>> + *
>> + * Context: May be called in interrupt context or while holding a normal
>> + * spinlock, but not in NMI context or while holding a raw spinlock.
>> + */
>> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages)
>> +{
>> + __free_contig_range_common(pfn, nr_pages, false);
>> +}
>> +EXPORT_SYMBOL(__free_contig_range);
>
> I don't think the export is necessary for anything? Please drop.
You're right, it's not needed. I'll drop it in v5.
>
>> +
>> #ifdef CONFIG_CONTIG_ALLOC
>> /* Usage: See admin-guide/dynamic-debug-howto.rst */
>> static void alloc_contig_dump_pages(struct list_head *page_list)
>> @@ -7330,8 +7430,7 @@ void free_contig_range(unsigned long pfn, unsigned long nr_pages)
>> if (WARN_ON_ONCE(PageHead(pfn_to_page(pfn))))
>> return;
>>
>> - for (; nr_pages--; pfn++)
>> - __free_page(pfn_to_page(pfn));
>> + __free_contig_range(pfn, nr_pages);
>> }
>> EXPORT_SYMBOL(free_contig_range);
>> #endif /* CONFIG_CONTIG_ALLOC */
>