Re: [PATCH 5.10 518/530] mm, slub: enable slub_debug static key when creating cache with explicit debug flags
From: Vlastimil Babka
Date: Wed May 12 2021 - 12:19:45 EST
On 5/12/21 4:50 PM, Greg Kroah-Hartman wrote:
> From: Vlastimil Babka <vbabka@xxxxxxx>
>
> [ Upstream commit 1f0723a4c0df36cbdffc6fac82cd3c5d57e06d66 ]
>
> Commit ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
> introduced a static key to optimize the case where no debugging is
> enabled for any cache. The static key is enabled when the slub_debug boot
> parameter is passed, or when CONFIG_SLUB_DEBUG_ON is enabled.
>
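To recap what that key does (paraphrasing, not quoting the exact upstream
code): it only gates whether the per-cache debug flags in s->flags are
consulted at all, roughly

	/* fast path: skip all debug processing unless some cache wants it */
	if (static_branch_unlikely(&slub_debug_enabled))
		return s->flags & flags;
	return 0;

so if the key is never enabled, per-cache debug flags are silently ignored.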
> However, some caches might be created with one or more debugging flags
> explicitly passed to kmem_cache_create(), and the commit missed this.
> Thus the debugging functionality would not actually be performed for
> these caches unless the static key is enabled by the boot param or config.
>
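"Explicitly passed" means a caller requesting debugging for its own cache,
e.g. something along these lines (cache and struct names made up here):

	cache = kmem_cache_create("example_cache", sizeof(struct example),
				  0, SLAB_STORE_USER, NULL);

Such a cache gets the flag recorded in s->flags, but with the static key
still off the corresponding checks never actually run.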
> This patch fixes it by checking for debugging flags passed to
> kmem_cache_create() and enabling the static key accordingly.
>
> Note that such explicit debugging flags should not be used outside of
> debugging and testing as they will now enable the static key globally.
> btrfs_init_cachep() creates a cache with SLAB_RED_ZONE but that's a
> mistake that's being corrected [1]. rcu_torture_stats() creates a cache
> with SLAB_STORE_USER, but that is a testing module so it's OK and will
> start working as intended after this patch.
>
> Also note that in case of backports to kernels before v5.12 that don't
> have 59450bbc12be ("mm, slab, slub: stop taking cpu hotplug lock"),
> static_branch_enable_cpuslocked() should be used.
>
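Per that note, the new check in kmem_cache_open() would become something
like this in a pre-5.12 backport (sketch only, not a tested hunk):

	#ifdef CONFIG_SLUB_DEBUG
		if (flags & SLAB_DEBUG_FLAGS)
			static_branch_enable_cpuslocked(&slub_debug_enabled);
	#endif

since cache creation on those kernels still runs under the cpu hotplug lock.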
> [1] https://lore.kernel.org/linux-btrfs/20210315141824.26099-1-dsterba@xxxxxxxx/
>
> Link: https://lkml.kernel.org/r/20210315153415.24404-1-vbabka@xxxxxxx
> Fixes: ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> Reported-by: Oliver Glitta <glittao@xxxxxxxxx>
> Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
Uh, I'd rather not release this to stable without the followup fix:
https://lore.kernel.org/linux-mm/20210504120019.26791-1-vbabka@xxxxxxx/
> ---
> mm/slub.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 05a501b67cd5..e4f7978d43c2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3779,6 +3779,15 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>
> static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> {
> +#ifdef CONFIG_SLUB_DEBUG
> + /*
> + * If no slub_debug was enabled globally, the static key is not yet
> + * enabled by setup_slub_debug(). Enable it if the cache is being
> + * created with any of the debugging flags passed explicitly.
> + */
> + if (flags & SLAB_DEBUG_FLAGS)
> + static_branch_enable(&slub_debug_enabled);
> +#endif
> s->flags = kmem_cache_flags(s->size, flags, s->name);
> #ifdef CONFIG_SLAB_FREELIST_HARDENED
> s->random = get_random_long();
>