Re: [PATCH v3 2/3] vmalloc: Optimize vfree
From: Uladzislau Rezki
Date: Wed Mar 25 2026 - 12:27:30 EST
On Wed, Mar 25, 2026 at 03:02:14PM +0000, Muhammad Usama Anjum wrote:
> On 25/03/2026 8:56 am, Uladzislau Rezki wrote:
> > On Tue, Mar 24, 2026 at 10:55:55AM -0400, Zi Yan wrote:
> >> On 24 Mar 2026, at 9:35, Muhammad Usama Anjum wrote:
> >>
> >>> From: Ryan Roberts <ryan.roberts@xxxxxxx>
> >>>
> >>> Whenever vmalloc allocates high order pages (e.g. for a huge mapping), it
> >>> must immediately split_page() them to order-0 so that it remains
> >>> compatible with users that want to access the underlying struct page.
> >>> Commit a06157804399 ("mm/vmalloc: request large order pages from buddy
> >>> allocator") recently made it much more likely for vmalloc to allocate
> >>> high order pages which are subsequently split to order-0.
> >>>
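(For context, the allocate-then-split pattern described above looks roughly
like the sketch below. This is only a simplified illustration, not the actual
mm/vmalloc.c code; alloc_and_split() and the parameter names are made up.)

<snip>
/*
 * Illustrative sketch only: allocate one physically contiguous
 * high-order block from the buddy and expose it to the caller as
 * 2^order independent order-0 pages.
 */
static unsigned int alloc_and_split(struct page **pages, unsigned int order,
				    gfp_t gfp)
{
	struct page *page = alloc_pages(gfp, order);
	unsigned int i;

	if (!page)
		return 0;

	/* split_page() requires a non-compound allocation (no __GFP_COMP) */
	split_page(page, order);
	for (i = 0; i < (1U << order); i++)
		pages[i] = page + i;

	return 1U << order;
}
<snip>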
> >>> Unfortunately this had the side effect of causing performance
> >>> regressions for tight vmalloc/vfree loops (e.g. the test_vmalloc.ko
> >>> benchmarks); see the Closes: tag. This happens because the high order
> >>> pages must be allocated from the buddy allocator, but since they are
> >>> split to order-0 they are later freed to the order-0 pcp lists.
> >>> Previously the allocations were order-0, so the pages were recycled
> >>> straight from the pcp lists.
> >>>
> >>> It would be preferable if, when vmalloc allocates an (e.g.) order-3
> >>> page, it also freed that order-3 page to the order-3 pcp list; that
> >>> would remove the regression.
> >>>
> >>> So let's do exactly that; use the new __free_contig_range() API to
> >>> batch-free contiguous ranges of pfns. This not only removes the
> >>> regression, but significantly improves performance of vfree beyond the
> >>> baseline.
> >>>
> >>> A selection of test_vmalloc benchmarks run on an arm64 server class
> >>> system. mm-new is the baseline. Commit a06157804399 ("mm/vmalloc: request
> >>> large order pages from buddy allocator") was added in v6.19-rc1, which is
> >>> where we see the regressions; with this change performance is much better. (>0
> >>> is faster, <0 is slower, (R)/(I) = statistically significant
> >>> Regression/Improvement):
> >>>
> >>> +-----------------+----------------------------------------------------------+-------------------+--------------------+
> >>> | Benchmark | Result Class | mm-new | this series |
> >>> +=================+==========================================================+===================+====================+
> >>> | micromm/vmalloc | fix_align_alloc_test: p:1, h:0, l:500000 (usec) | 1331843.33 | (I) 67.17% |
> >>> | | fix_size_alloc_test: p:1, h:0, l:500000 (usec) | 415907.33 | -5.14% |
> >>> | | fix_size_alloc_test: p:4, h:0, l:500000 (usec) | 755448.00 | (I) 53.55% |
> >>> | | fix_size_alloc_test: p:16, h:0, l:500000 (usec) | 1591331.33 | (I) 57.26% |
> >>> | | fix_size_alloc_test: p:16, h:1, l:500000 (usec) | 1594345.67 | (I) 68.46% |
> >>> | | fix_size_alloc_test: p:64, h:0, l:100000 (usec) | 1071826.00 | (I) 79.27% |
> >>> | | fix_size_alloc_test: p:64, h:1, l:100000 (usec) | 1018385.00 | (I) 84.17% |
> >>> | | fix_size_alloc_test: p:256, h:0, l:100000 (usec) | 3970899.67 | (I) 77.01% |
> >>> | | fix_size_alloc_test: p:256, h:1, l:100000 (usec) | 3821788.67 | (I) 89.44% |
> >>> | | fix_size_alloc_test: p:512, h:0, l:100000 (usec) | 7795968.00 | (I) 82.67% |
> >>> | | fix_size_alloc_test: p:512, h:1, l:100000 (usec) | 6530169.67 | (I) 118.09% |
> >>> | | full_fit_alloc_test: p:1, h:0, l:500000 (usec) | 626808.33 | -0.98% |
> >>> | | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | 532145.67 | -1.68% |
> >>> | | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | 537032.67 | -0.96% |
> >>> | | long_busy_list_alloc_test: p:1, h:0, l:500000 (usec) | 8805069.00 | (I) 74.58% |
> >>> | | pcpu_alloc_test: p:1, h:0, l:500000 (usec) | 500824.67 | 4.35% |
> >>> | | random_size_align_alloc_test: p:1, h:0, l:500000 (usec) | 1637554.67 | (I) 76.99% |
> >>> | | random_size_alloc_test: p:1, h:0, l:500000 (usec) | 4556288.67 | (I) 72.23% |
> >>> | | vm_map_ram_test: p:1, h:0, l:500000 (usec) | 107371.00 | -0.70% |
> >>> +-----------------+----------------------------------------------------------+-------------------+--------------------+
> >>>
> >>> Fixes: a06157804399 ("mm/vmalloc: request large order pages from buddy allocator")
> >>> Closes: https://lore.kernel.org/all/66919a28-bc81-49c9-b68f-dd7c73395a0d@xxxxxxx/
> >>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
> >>> Co-developed-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
> >>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
> >>> ---
> >>> Changes since v2:
> >>> - Remove the BUG_ON in favour of the simpler implementation; it has
> >>>   never been observed to trigger in practice
> >>> - Move the free loop to separate function, free_pages_bulk()
> >>> - Update stats, lruvec_stat in separate loop
> >>>
> >>> Changes since v1:
> >>> - Rebase on mm-new
> >>> - Rerun benchmarks
> >>>
> >>> Made-with: Cursor
> >>> ---
> >>> include/linux/gfp.h | 2 ++
> >>> mm/page_alloc.c | 23 +++++++++++++++++++++++
> >>> mm/vmalloc.c | 16 +++++-----------
> >>> 3 files changed, 30 insertions(+), 11 deletions(-)
> >>>
> >>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> >>> index 7c1f9da7c8e56..71f9097ab99a0 100644
> >>> --- a/include/linux/gfp.h
> >>> +++ b/include/linux/gfp.h
> >>> @@ -239,6 +239,8 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> >>> struct page **page_array);
> >>> #define __alloc_pages_bulk(...) alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
> >>>
> >>> +void free_pages_bulk(struct page **page_array, unsigned long nr_pages);
> >>> +
> >>> unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
> >>> unsigned long nr_pages,
> >>> struct page **page_array);
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index eedce9a30eb7e..250cc07e547b8 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -5175,6 +5175,29 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> >>> }
> >>> EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
> >>>
> >>> +void free_pages_bulk(struct page **page_array, unsigned long nr_pages)
> >>> +{
> >>> + unsigned long start_pfn = 0, pfn;
> >>> + unsigned long i, nr_contig = 0;
> >>> +
> >>> + for (i = 0; i < nr_pages; i++) {
> >>> + pfn = page_to_pfn(page_array[i]);
> >>> + if (!nr_contig) {
> >>> + start_pfn = pfn;
> >>> + nr_contig = 1;
> >>> + } else if (start_pfn + nr_contig != pfn) {
> >>> + __free_contig_range(start_pfn, nr_contig);
> >>> + start_pfn = pfn;
> >>> + nr_contig = 1;
> >>> + cond_resched();
> >>
> > It can cause scheduling while atomic. Have you checked whether
> > __free_contig_range() can also sleep? If so then we are aligned; if not,
> > we should probably remove it.
> Sorry, I didn't get it. How does having cond_resched() in this function
> affect __free_contig_range()?
>
It does not. What I am asking about is this:
<snip>
spin_lock();
free_pages_bulk()
...
<snip>
so this is not allowed, because of the cond_resched() call. We could
remove it and make it possible to invoke free_pages_bulk() under a
spin-lock, __but__ only if, for example, the other calls in the path do
not sleep either:
__free_contig_range()
memdesc_section()
free_prepared_contig_range()
...
>
> The only current user of this function is vfree(), which is sleepable.
>
I know. But this function may be used by other callers sooner or later.
Another option is to add a comment saying that it is only for sleepable
contexts.
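For example, something along these lines (just a sketch of the idea; the
exact wording and placement of the comment is of course up to the patch
authors):

<snip>
/*
 * free_pages_bulk - free an array of order-0 pages, batching physically
 * contiguous runs into single calls to __free_contig_range().
 *
 * May sleep: must not be called under a spinlock or from any other
 * atomic context, since cond_resched() is called between runs.
 */
void free_pages_bulk(struct page **page_array, unsigned long nr_pages)
{
	might_sleep();
	...
}
<snip>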
--
Uladzislau Rezki