Re: [PATCH v5] mm/slub: defer freelist construction until after bulk allocation from a new slab
From: hu.shengming
Date: Mon Apr 13 2026 - 01:16:06 EST
Harry wrote:
> On Thu, Apr 09, 2026 at 08:43:52PM +0800, hu.shengming@xxxxxxxxxx wrote:
> > From: Shengming Hu <hu.shengming@xxxxxxxxxx>
> >
> > Allocations from a fresh slab can consume all of its objects, and the
> > freelist built during slab allocation is discarded immediately as a result.
> >
> > Instead of special-casing the whole-slab bulk refill case, defer freelist
> > construction until after objects are emitted from a fresh slab.
> > new_slab() now only allocates the slab and initializes its metadata.
> > refill_objects() then obtains a fresh slab and lets alloc_from_new_slab()
> > emit objects directly, building a freelist only for the objects left
> > unallocated; the same change is applied to alloc_single_from_new_slab().
> >
> > To keep CONFIG_SLAB_FREELIST_RANDOM=y/n on the same path, introduce a
> > small iterator abstraction for walking free objects in allocation order.
> > The iterator is used both for filling the sheaf and for building the
> > freelist of the remaining objects.
> >
> > Also mark setup_object() inline. After this optimization, the compiler no
> > longer consistently inlines this helper in the hot path, which can hurt
> > performance. Explicitly marking it inline restores the expected code
> > generation.
> >
> > This reduces per-object overhead when allocating from a fresh slab.
> > The most direct benefit is in the paths that allocate objects first and
> > only build a freelist for the remainder afterward: bulk allocation from
> > a new slab in refill_objects(), single-object allocation from a new slab
> > in ___slab_alloc(), and the corresponding early-boot paths that now use
> > the same deferred-freelist scheme. Since refill_objects() is also used to
> > refill sheaves, the optimization is not limited to the small set of
> > kmem_cache_alloc_bulk()/kmem_cache_free_bulk() users; regular allocation
> > workloads may benefit as well when they refill from a fresh slab.
> >
> > In slub_bulk_bench, the time per object drops by about 32% to 71% with
> > CONFIG_SLAB_FREELIST_RANDOM=n, and by about 52% to 70% with
> > CONFIG_SLAB_FREELIST_RANDOM=y. This benchmark is intended to isolate the
> > cost removed by this change: each iteration allocates exactly
> > slab->objects from a fresh slab. That makes it a near best-case scenario
> > for deferred freelist construction, because the old path still built a
> > full freelist even when no objects remained, while the new path avoids
> > that work. Realistic workloads may see smaller end-to-end gains depending
> > on how often allocations reach this fresh-slab refill path.
> >
> > Benchmark results (slub_bulk_bench):
> > Machine: qemu-system-x86 -m 1024M -smp 8 -enable-kvm -cpu host
> > Kernel: Linux 7.0.0-rc7-next-20260407
> > Config: x86_64_defconfig
> > CPU: 0
> > Rounds: 20
> > Total: 256MB
>
> [...]
>
> Hi Shengming, it's been great to see how this patch has been improved
> since v1 to where it is now. Thanks for taking the feedback and steadily
> improving things along the way.
>
Hi Harry,
Thank you very much for your helpful reviews and suggestions from v1 through v5.
I really appreciate your patience and professionalism throughout the review process,
and I have learned a lot from your feedback.
> I think this is getting pretty close to being ready for mainline,
> with just one little thing to fix in the code.
>
> Other reviewers/maintainers may also take a look and leave comments
> when they get a chance.
>
I am also looking forward to any further comments or suggestions from
other reviewers and maintainers.
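To make that easier, here is the rough shape of the change in heavily
simplified pseudo-C (the freelist_iter_*() helper names below are
placeholders for illustration, not the names used in the patch; locking,
debug and early-boot handling are omitted):

	/* new_slab() now only allocates pages and initializes slab
	 * metadata; no freelist is built at this point. */
	slab = new_slab(s, gfpflags, node);

	/* Walk free objects in allocation order. The iterator hides the
	 * CONFIG_SLAB_FREELIST_RANDOM=y/n difference, so both configs
	 * share one path. */
	freelist_iter_init(&it, s, slab);

	/* Emit objects straight to the caller; these are never linked. */
	for (i = 0; i < count && freelist_iter_has_next(&it); i++)
		p[i] = setup_object(s, freelist_iter_next(&it));

	/* Link only the leftover objects. If the caller consumed the
	 * whole slab (the bulk-refill case), no freelist is built. */
	build_slab_freelist(s, slab, &it);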
> > Link: https://github.com/HSM6236/slub_bulk_test.git
> > Signed-off-by: Shengming Hu <hu.shengming@xxxxxxxxxx>
> > ---
>
> If you think it's appropriate, please feel free to add:
> Suggested-by: Harry Yoo (Oracle) <harry@xxxxxxxxxx>
>
Sure, I will add:
Suggested-by: Harry Yoo (Oracle) <harry@xxxxxxxxxx>
Thanks again for your continued review and guidance.
> In case this was assisted by AI or other tools, please disclose that
> according to the process documents:
>
> https://docs.kernel.org/process/generated-content.html
> https://docs.kernel.org/process/coding-assistants.html
>
> Not that I think this was assisted by AI, just mentioning because
> sometimes people using tools to develop the kernel are not aware that
> they need to disclose the fact. It wouldn't hurt to remind people :-)
>
Regarding AI disclosure: I only used an AI tool to polish the English wording
of the commit message, since I am not fully confident in my English writing. :-)
As I understand it, the documentation says that "spelling and grammar fix ups,
like rephrasing to imperative voice" are out of scope, so I believe an
Assisted-by tag is not needed in this case. Please let me know if you think otherwise.
> > Changes in v5:
> > - Call build_slab_freelist() unconditionally, and remove the redundant "slab->freelist = NULL" initialization in allocate_slab().
> > - Check the return value of alloc_from_new_slab() to prevent a potential use-after-free bug.
> > - Refine the commit message with more precise test coverage descriptions.
> > - Link to v4: https://lore.kernel.org/all/2026040823281824773ybHpC3kgUhR9OE1rGTl@xxxxxxxxxx/
> >
> > ---
> >  mm/slab.h |  10 ++
> >  mm/slub.c | 279 +++++++++++++++++++++++++++---------------------------
> >  2 files changed, 147 insertions(+), 142 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 4927407c9699..9ff8af8c2f73 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3696,22 +3686,30 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
> >  		 * corruption in theory could cause that.
> >  		 * Leak memory of allocated slab.
> >  		 */
> > -		if (!allow_spin)
> > -			spin_unlock_irqrestore(&n->list_lock, flags);
> >  		return NULL;
> >  	}
> > 
> > -	if (allow_spin)
> > +	n = get_node(s, slab_nid(slab));
> > +	if (allow_spin) {
> >  		spin_lock_irqsave(&n->list_lock, flags);
> > +	} else if (!spin_trylock_irqsave(&n->list_lock, flags)) {
> > +		/*
> > +		 * Unlucky, discard newly allocated slab.
> > +		 * The slab is not fully free, but it's fine as
> > +		 * objects are not allocated to users.
> > +		 */
> > +		free_new_slab_nolock(s, slab);
> > +		return NULL;
> > +	}
> > 
> > -	if (slab->inuse == slab->objects)
> > -		add_full(s, n, slab);
> > -	else
> > +	if (needs_add_partial)
> >  		add_partial(n, slab, ADD_TO_HEAD);
> > +	else
> > +		add_full(s, n, slab);
> > 
> > -	inc_slabs_node(s, nid, slab->objects);
> >  	spin_unlock_irqrestore(&n->list_lock, flags);
> > 
> > +	inc_slabs_node(s, slab_nid(slab), slab->objects);
>
> Ouch, I didn't catch this when it was added in v4. When the slab debugging
> feature is enabled for the cache, inc_slabs_node() should be done within
> the spinlock to avoid race conditions with slab validation.
>
> Perhaps it's worth adding a comment mentioning this :)
>
> See commit c7323a5ad078 ("mm/slub: restrict sysfs validation to debug
> caches and make it safe") for more details.
>
> With this fixed, please feel free to add:
> Reviewed-by: Harry Yoo (Oracle) <harry@xxxxxxxxxx>
>
You are right about the inc_slabs_node() placement; I missed this when the
call was moved in v4. Thank you very much for catching it.
After reading commit c7323a5ad078 ("mm/slub: restrict sysfs validation to debug
caches and make it safe"), my understanding is that for debug caches
inc_slabs_node() must be called under n->list_lock, so that validation cannot
observe inconsistent state during list transitions. I will move the call back
under the lock in the next revision and add a comment along these lines.
Would a comment like the following look good? :-)
/*
 * Debug caches require nr_slabs updates under n->list_lock so validation
 * cannot race with list transitions and observe inconsistent state.
 */
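And the resulting placement in alloc_single_from_new_slab() would be roughly
the following (a sketch against the quoted hunk; the surrounding lines stay
as in v5):

	if (needs_add_partial)
		add_partial(n, slab, ADD_TO_HEAD);
	else
		add_full(s, n, slab);

	/* Debug caches: update nr_slabs under n->list_lock. */
	inc_slabs_node(s, slab_nid(slab), slab->objects);

	spin_unlock_irqrestore(&n->list_lock, flags);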
Thank you again for the careful review.
--
Cheers,
Shengming
> --
> Cheers,
> Harry / Hyeonggon