Re: [PATCH] mm/slab: fix an incorrect check in obj_exts_alloc_size()
From: vbabka
Date: Mon Mar 09 2026 - 10:00:32 EST
On 3/9/26 08:22, Harry Yoo wrote:
> obj_exts_alloc_size() prevents recursive allocation of the slabobj_ext
> array from the same cache, to avoid creating slabs that are never freed.
>
> However, it mistakenly returns the original size early when memory
> allocation profiling is disabled. The assumption was that
> memcg-triggered slabobj_ext allocation is always served from
> KMALLOC_CGROUP type. But this is wrong [1]: when the caller specifies
> both __GFP_RECLAIMABLE and __GFP_ACCOUNT with SLUB_TINY enabled, the
> allocation is served from normal kmalloc. This is because kmalloc_type()
> prioritizes __GFP_RECLAIMABLE over __GFP_ACCOUNT, and SLUB_TINY aliases
> KMALLOC_RECLAIM with KMALLOC_NORMAL.
Hm, that's suboptimal (it leads to sparsely used obj_exts in normal kmalloc
slabs). Maybe, separately from this hotfix, we could make sure that with
SLUB_TINY, __GFP_ACCOUNT is preferred going forward?
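For illustration, the priority described above can be modeled with a rough userspace sketch. This is not the kernel's kmalloc_type() implementation; the flag values, the `slub_tiny` parameter, and the function name are simplified stand-ins for the behavior the changelog describes:

```c
/* Hedged model of the cache-type selection described above: the
 * __GFP_RECLAIMABLE check wins over __GFP_ACCOUNT, and with SLUB_TINY
 * the reclaim type aliases the normal type. Flag bits are stand-ins,
 * not kernel ABI. */
#include <assert.h>
#include <stdbool.h>

#define GFP_RECLAIMABLE (1u << 0)  /* stands in for __GFP_RECLAIMABLE */
#define GFP_ACCOUNT     (1u << 1)  /* stands in for __GFP_ACCOUNT */

enum kmalloc_cache_type { KMALLOC_NORMAL, KMALLOC_RECLAIM, KMALLOC_CGROUP };

static enum kmalloc_cache_type
kmalloc_type_model(unsigned int flags, bool slub_tiny)
{
	/* Reclaimable is checked first, so __GFP_ACCOUNT loses ... */
	if (flags & GFP_RECLAIMABLE)
		/* ... and SLUB_TINY aliases KMALLOC_RECLAIM to NORMAL. */
		return slub_tiny ? KMALLOC_NORMAL : KMALLOC_RECLAIM;
	if (flags & GFP_ACCOUNT)
		return KMALLOC_CGROUP;
	return KMALLOC_NORMAL;
}
```

With both flags set and SLUB_TINY enabled, the model lands on KMALLOC_NORMAL, which is exactly the case where the memcg-triggered slabobj_ext allocation comes from a normal kmalloc cache and the recursion guard must still apply.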
> As a result, the recursion guard is bypassed and the problematic slabs
> can be created. Fix this by removing the mem_alloc_profiling_enabled()
> check entirely. The remaining is_kmalloc_normal() check is still
> sufficient to detect whether the cache is of KMALLOC_NORMAL type and
> avoid bumping the size if it's not.
>
> Without SLUB_TINY, no functional change intended.
> With SLUB_TINY, allocations with __GFP_ACCOUNT|__GFP_RECLAIMABLE
> now allocate a larger array when the sizes are equal.
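To make the effect of the deletion concrete, here is a rough userspace model of the guard after the fix. This is not the real obj_exts_alloc_size(); the `cache_is_kmalloc_normal` parameter and the equal-size bump condition are simplified stand-ins for the actual cache lookup and comparison:

```c
/* Hedged model of the recursion guard after removing the early
 * mem_alloc_profiling_enabled() return. Parameter names and the bump
 * condition are simplified stand-ins, not the kernel code. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static size_t
obj_exts_alloc_size_model(size_t sz, size_t obj_exts_cache_size,
			  bool cache_is_kmalloc_normal)
{
	/* The removed check used to sit here:
	 *     if (!mem_alloc_profiling_enabled())
	 *             return sz;
	 * bypassing the guard below for memcg-only allocations. */

	/* Only KMALLOC_NORMAL caches can self-recurse here. */
	if (!cache_is_kmalloc_normal)
		return sz;

	/* If the array would be served from this very cache, bump the
	 * size so it comes from a different (larger) cache instead. */
	if (sz == obj_exts_cache_size)
		sz = obj_exts_cache_size + 1;

	return sz;
}
```

In this model the bump now happens regardless of whether allocation profiling is enabled, which is the behavior change the changelog describes for the SLUB_TINY case.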
>
> Reported-by: Zw Tang <shicenci@xxxxxxxxx>
> Fixes: 280ea9c3154b ("mm/slab: avoid allocating slabobj_ext array from its own slab")
> Closes: https://lore.kernel.org/linux-mm/CAPHJ_VKuMKSke8b11AZQw1PTSFN4n2C0gFxC6xGOG0ZLHgPmnA@xxxxxxxxxxxxxx [1]
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Harry Yoo <harry.yoo@xxxxxxxxxx>
Added to slab/for-next-fixes, thanks!
> ---
>
> Zw Tang, could you please confirm that the warning disappears
> in your test environment with this patch applied?
>
> mm/slub.c | 7 -------
> 1 file changed, 7 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 20cb4f3b636d..6371838d2352 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2119,13 +2119,6 @@ static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
> size_t sz = sizeof(struct slabobj_ext) * slab->objects;
> struct kmem_cache *obj_exts_cache;
>
> - /*
> - * slabobj_ext array for KMALLOC_CGROUP allocations
> - * are served from KMALLOC_NORMAL caches.
> - */
> - if (!mem_alloc_profiling_enabled())
> - return sz;
> -
> if (sz > KMALLOC_MAX_CACHE_SIZE)
> return sz;
>
>
> base-commit: 6432f15c818cb30eec7c4ca378ecdebd9796f741