Re: [PATCH v3 1/3] mm/page_alloc: Optimize free_contig_range()
From: Muhammad Usama Anjum
Date: Wed Mar 25 2026 - 10:20:51 EST
On 24/03/2026 8:56 pm, David Hildenbrand (Arm) wrote:
>
>> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages)
>> +{
>> +	struct page *page = pfn_to_page(pfn);
>> +	struct page *start = NULL;
>> +	unsigned long start_sec;
>> +	unsigned long i;
>> +	bool can_free;
>> +
>> +	/*
>> +	 * Chunk the range into contiguous runs of pages for which the refcount
>> +	 * went to zero and for which free_pages_prepare() succeeded. If
>> +	 * free_pages_prepare() fails we consider the page to have been freed;
>> +	 * deliberately leak it.
>> +	 *
>> +	 * Code assumes contiguous PFNs have contiguous struct pages, but not
>> +	 * vice versa. Break batches at section boundaries since pages from
>> +	 * different sections must not be coalesced into a single high-order
>> +	 * block.
>
> The comment is not completely accurate: section boundary only applies to
> some kernel configs.
>
> Maybe rewrite the whole paragraph into
>
> "Contiguous PFNs might not have contiguous "struct pages" in some
> kernel configs. Therefore, check memdesc_section(), and stop batching
> once it changes, see num_pages_contiguous()."
Agreed, I'll update.
>
>> +	 */
>> +	for (i = 0; i < nr_pages; i++, page++) {
>> +		VM_WARN_ON_ONCE(PageHead(page));
>> +		VM_WARN_ON_ONCE(PageTail(page));
>> +
>> +		can_free = put_page_testzero(page);
>> +		if (can_free && !free_pages_prepare(page, 0))
>> +			can_free = false;
>> +
>> +		if (can_free && start &&
>> +		    memdesc_section(page->flags) != start_sec) {
>> +			free_prepared_contig_range(start, page - start);
>> +			start = page;
>> +			start_sec = memdesc_section(page->flags);
>> +		} else if (!can_free && start) {
>> +			free_prepared_contig_range(start, page - start);
>> +			start = NULL;
>> +		} else if (can_free && !start) {
>> +			start = page;
>> +			start_sec = memdesc_section(page->flags);
>> +		}
>> +	}
>
> Simplification as proposed by Zi makes sense to me!
I've added it.
Thanks,
Usama