Re: [Regression] mm:slab/sheaves: severe performance regression in cross-CPU slab allocation

From: Harry Yoo

Date: Wed Feb 25 2026 - 00:25:59 EST


On Tue, Feb 24, 2026 at 09:27:40PM +0100, Vlastimil Babka wrote:
> On 2/24/26 3:52 AM, Ming Lei wrote:
> > Hello Vlastimil and MM guys,
> >
> > The SLUB "sheaves" series merged via 815c8e35511d ("Merge branch
> > 'slab/for-7.0/sheaves' into slab/for-next") introduces a severe
> > performance regression for workloads with persistent cross-CPU
> > alloc/free patterns. The ublk null target benchmark drops from ~36M
> > IOPS on v6.19 to ~13M IOPS (a ~64% drop).
> >
> > Bisecting within the sheaves series is blocked by a kernel panic at
> > 17c38c88294d ("slab: remove cpu (partial) slabs usage from allocation
> > paths"), so the exact first bad commit could not be identified.
> >
> > Reproducer
> > ==========
> >
> > Hardware: NUMA machine with >= 32 CPUs
> > Kernel: v7.0-rc (with slab/for-7.0/sheaves merged)
> >
> > # build kublk selftest
> > make -C tools/testing/selftests/ublk/
> >
> > # create ublk null target device with 16 queues
> > tools/testing/selftests/ublk/kublk add -t null -q 16
> >
> > # run fio/t/io_uring benchmark: 16 jobs, 20 seconds, non-polled
> > taskset -c 0-31 fio/t/io_uring -p0 -n 16 -r 20 /dev/ublkb0
> >
> > # cleanup
> > tools/testing/selftests/ublk/kublk del -n 0
> >
> > Good: v6.19 (and 41f1a08645ab, the mainline parent of the slab merge)
> > Bad: 815c8e35511d (Merge branch 'slab/for-7.0/sheaves' into slab/for-next)
> >
> > perf profile (bad kernel)
> > =========================
> >
> > ~47% of CPU time is spent in bio allocation hitting the SLUB slow path,
> > with massive spinlock contention on the node partial list lock:
> >
> > + 47.65% 1.21% io_uring [k] bio_alloc_bioset
> > - 44.87% 0.45% io_uring [k] kmem_cache_alloc_noprof
> > - 44.41% kmem_cache_alloc_noprof
> > - 43.89% ___slab_alloc
> > + 41.16% get_from_any_partial
>
> So this function is not used in the sheaf refill path, but in the
> fallback slowpath when the alloc_from_pcs() fastpath fails.

Good point. So the contention in the profile is all coming from the
fallback path, not from sheaf refill itself.

> > 0.91% get_from_partial_node
> > + 0.87% alloc_from_new_slab
> > + 0.65% allocate_slab
> > - 44.70% 0.21% io_uring [k] mempool_alloc_noprof
> > - 44.49% mempool_alloc_noprof
> > - 44.43% kmem_cache_alloc_noprof
>
> And I'd guess alloc_from_pcs() fails because gfpflags_allow_blocking()
> is false in __pcs_replace_empty_main(), since mempool_alloc_noprof()
> makes its first attempt without __GFP_DIRECT_RECLAIM. That attempt will
> still succeed, but we end up relying on the slowpath every time and
> performance drops.

That's a very good point. I was missing that aspect.
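
To spell the interaction out for the record, here is a minimal
userspace sketch (not kernel code): the flag bit values are
illustrative, bio_alloc_bioset() reaching mempool_alloc() with GFP_NOIO
is an assumption for the example, and sheaf_refill_allowed() is my
guess at the gfpflags_allow_blocking() gate in
__pcs_replace_empty_main():

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_IO		0x40u	/* bit values illustrative only */
#define __GFP_DIRECT_RECLAIM	0x400u
#define __GFP_KSWAPD_RECLAIM	0x800u
#define GFP_NOIO		(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

/* mirrors gfpflags_allow_blocking() in include/linux/gfp.h */
static bool gfpflags_allow_blocking(gfp_t gfp)
{
	return !!(gfp & __GFP_DIRECT_RECLAIM);
}

/* hypothetical: my guess at the refill gate in __pcs_replace_empty_main() */
static bool sheaf_refill_allowed(gfp_t gfp)
{
	return gfpflags_allow_blocking(gfp);
}

int main(void)
{
	/* bio_alloc_bioset() roughly ends up in mempool_alloc();
	 * assume GFP_NOIO for illustration */
	gfp_t gfp_mask = GFP_NOIO;

	/* mempool_alloc()'s first attempt strips direct reclaim and IO */
	gfp_t gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_IO);

	/* prints 0: the refill bails every time, so each allocation
	 * takes the ___slab_alloc() slowpath and contends on the node
	 * partial list lock, matching the perf profile above */
	printf("refill allowed: %d\n", sheaf_refill_allowed(gfp_temp));
	return 0;
}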

> It made sense to me not to refill sheaves when we can't reclaim, but I
> didn't anticipate this interaction with mempools.

Me neither :)

> We could change them, but there might be other callers using a similar
> pattern.

Probably, yes.

> Maybe it would be for the best to just drop that heuristic from
> __pcs_replace_empty_main()

Sounds fair.

> (but carefully, as some deadlock avoidance depends on it; we might need
> to e.g. replace it with gfpflags_allow_spinning()). I'll send a patch
> tomorrow to test this theory, unless someone beats me to it (feel free to).

I think your point is valid. Let's give it a try.
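
For the deadlock-avoidance side, gfpflags_allow_spinning() only returns
false when the caller passes neither reclaim flag (the try_alloc_pages()
style of caller), while mempool's first attempt keeps
__GFP_KSWAPD_RECLAIM set. A quick sketch of the difference, with the
same caveats as above (flag bits illustrative, predicates mirror my
reading of include/linux/gfp.h):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_IO		0x40u	/* bit values illustrative only */
#define __GFP_DIRECT_RECLAIM	0x400u
#define __GFP_KSWAPD_RECLAIM	0x800u
#define __GFP_RECLAIM		(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
#define GFP_NOIO		__GFP_RECLAIM

static bool gfpflags_allow_blocking(gfp_t gfp)
{
	return !!(gfp & __GFP_DIRECT_RECLAIM);
}

/* false only when no reclaim flag at all is set, i.e. the caller
 * can't even spin on locks */
static bool gfpflags_allow_spinning(gfp_t gfp)
{
	return !!(gfp & __GFP_RECLAIM);
}

int main(void)
{
	/* mempool_alloc() first attempt: __GFP_KSWAPD_RECLAIM survives */
	gfp_t mempool_first = GFP_NOIO & ~(__GFP_DIRECT_RECLAIM | __GFP_IO);
	/* a truly lock-unsafe caller passes no reclaim bits at all */
	gfp_t no_reclaim = 0;

	/* prints blocking=0 spinning=1: switching the gate would let
	 * mempool callers refill sheaves on the first attempt */
	printf("mempool first try: blocking=%d spinning=%d\n",
	       gfpflags_allow_blocking(mempool_first),
	       gfpflags_allow_spinning(mempool_first));
	/* prints 0: the lock-unsafe caller would still skip the refill */
	printf("no-reclaim caller: spinning=%d\n",
	       gfpflags_allow_spinning(no_reclaim));
	return 0;
}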

> Until then IMHO we can dismiss the AI explanation and also the
> insufficient sheaf capacity theories.

Yeah :) let's first see how it performs after addressing your point.

--
Cheers,
Harry / Hyeonggon