Re: [PATCH v3 1/3] mm/page_alloc: Optimize free_contig_range()
From: David Hildenbrand (Arm)
Date: Tue Mar 24 2026 - 16:56:59 EST
> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages)
> +{
> + struct page *page = pfn_to_page(pfn);
> + struct page *start = NULL;
> + unsigned long start_sec;
> + unsigned long i;
> + bool can_free;
> +
> + /*
> + * Chunk the range into contiguous runs of pages for which the refcount
> + * went to zero and for which free_pages_prepare() succeeded. If
> + * free_pages_prepare() fails we consider the page to have been freed;
> + * deliberately leak it.
> + *
> + * Code assumes contiguous PFNs have contiguous struct pages, but not
> + * vice versa. Break batches at section boundaries since pages from
> + * different sections must not be coalesced into a single high-order
> + * block.
The comment is not completely accurate: the section-boundary restriction only
applies to some kernel configs.
Maybe rewrite the whole paragraph into
"Contiguous PFNs might not have contiguous "struct page"s in some
kernel configs. Therefore, check memdesc_section(), and stop batching
once it changes, see num_pages_contiguous()."
> + */
> + for (i = 0; i < nr_pages; i++, page++) {
> + VM_WARN_ON_ONCE(PageHead(page));
> + VM_WARN_ON_ONCE(PageTail(page));
> +
> + can_free = put_page_testzero(page);
> + if (can_free && !free_pages_prepare(page, 0))
> + can_free = false;
> +
> + if (can_free && start &&
> + memdesc_section(page->flags) != start_sec) {
> + free_prepared_contig_range(start, page - start);
> + start = page;
> + start_sec = memdesc_section(page->flags);
> + } else if (!can_free && start) {
> + free_prepared_contig_range(start, page - start);
> + start = NULL;
> + } else if (can_free && !start) {
> + start = page;
> + start_sec = memdesc_section(page->flags);
> + }
> + }
The simplification proposed by Zi makes sense to me!
--
Cheers,
David