Re: [PATCH mm-stable v3] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook
From: teawater
Date: Wed Apr 01 2026 - 08:40:20 EST
>
> On Tue, Mar 31, 2026 at 08:32:30AM -0700, Shakeel Butt wrote:
>
> >
> > On Tue, Mar 31, 2026 at 05:17:07PM +0800, Hui Zhu wrote:
> > From: Hui Zhu <zhuhui@xxxxxxxxxx>
> >
> > When kmem_cache_alloc_bulk() allocates multiple objects, the post-alloc
> > hook __memcg_slab_post_alloc_hook() previously charged memcg one object
> > at a time, even though consecutive objects may reside on slabs backed by
> > the same pgdat node.
> >
> > Batch the memcg charging by scanning ahead from the current position to
> > find a contiguous run of objects whose slabs share the same pgdat, then
> > issue a single __obj_cgroup_charge() / __consume_obj_stock() call for
> > the entire run. The per-object obj_ext assignment loop is preserved as-is
> > since it cannot be further collapsed.
> >
> > This implements the TODO comment left in commit bc730030f956 ("memcg:
> > combine slab obj stock charging and accounting").
> >
> > The existing error-recovery contract is unchanged: if size == 1 then
> > memcg_alloc_abort_single() will free the sole object, and for larger
> > bulk allocations kmem_cache_free_bulk() will uncharge any objects that
> > were already charged before the failure.
> >
> > Benchmark using kmem_cache_alloc_bulk() with SLAB_ACCOUNT
> > (iters=100000):
> >
> > bulk=32 before: 215 ns/object after: 174 ns/object (-19%)
> > bulk=1 before: 344 ns/object after: 335 ns/object ( ~)
> >
> > No measurable regression for bulk=1, as expected.
> >
> > Signed-off-by: Hui Zhu <zhuhui@xxxxxxxxxx>
> >
> > Do we have an actual user of kmem_cache_alloc_bulk(GFP_ACCOUNT) in the kernel?
> >
Hi Harry and Shakeel,
> Apparently we have a SLAB_ACCOUNT user in io_uring.c.
> (perhaps it's the only user?)
Looks like __io_alloc_req_refill() is the only user that calls
kmem_cache_alloc_bulk() with SLAB_ACCOUNT.
I am working on a benchmark for it.
Best,
Hui
>
> >
> > If yes, can you please benchmark that usage? Otherwise can we please wait for
> > an actual user before adding more complexity? Or you can look for opportunities
> > for kmem_cache_alloc_bulk(GFP_ACCOUNT) users and add the optimization along with
> > the user.
> >
> Good point. I was also wondering what use cases beyond the
> microbenchmark would benefit from this.
>
> >
> > Have you looked at the bulk free side? I think we already have rcu freeing in
> > bulk as a user. Did you find any opportunities in optimizing the
> > __memcg_slab_free_hook() from bulk free?
> >
> Probably a bit out of scope, but one thing to note on the slab side:
> kfree_bulk() (called by kfree_rcu batching) doesn't specify slab cache,
> and it builds a detached freelist which contains objects from the same slab.
>
> On the other hand, kmem_cache_free_bulk() with a non-NULL slab cache
> simply calls free_to_pcs_bulk(), which passes objects one by one to
> __memcg_slab_free_hook() since the objects may not come from the same slab.
>
> Now that we have sheaves enabled for (almost) all slab caches, it might
> be worth revisiting - e.g. sort objects by slab cache and
> pass them to free_to_pcs_bulk() instead of building a detached freelist.
>
> And let __memcg_slab_free_hook() handle objects from the same cache but
> from different slabs.
>
> --
> Cheers,
> Harry / Hyeonggon
>