Re: [PATCH v3] mm/slub: initialize stack depot in boot process
From: Hyeonggon Yoo
Date: Tue Mar 01 2022 - 04:30:03 EST
On Tue, Mar 01, 2022 at 10:14:30AM +0100, Vlastimil Babka wrote:
> On 3/1/22 09:51, Hyeonggon Yoo wrote:
> > commit ba10d4b46655 ("mm/slub: use stackdepot to save stack trace in
> > objects") initializes stack depot in cache creation if SLAB_STORE_USER
> > flag is set.
>
> As pointed out, this is not a stable commit; the series was just posted for
> review and there will be a v2. So instead of "this fixes the commit..." I
> suggest writing the patch assuming it's a preparation for the patch
> "mm/slub: use stackdepot"... and I can then make it part of the series.
> So it should instead explain that for slub_debug we will need a way to
> trigger stack_depot_early_init() based on boot options, and so this patch
> introduces it...
>
Agreed.
> > This can make the kernel crash because a cache can be created in various
> > contexts. For example, if a user sets slub_debug=U, the kernel crashes
> > because create_boot_cache() calls stack_depot_init(), which tries to
> > allocate the hash table using memblock_alloc() if slab is not yet
> > available. But memblock is not available at that time either.
> >
> > This patch solves the problem by initializing the stack depot early
> > in the boot process if the SLAB_STORE_USER debug flag is set globally
> > or for at least one cache.
> >
> > [ elver@xxxxxxxxxx: initialize stack depot depending on slub_debug
> > parameter instead of allowing stack_depot_init() to be called
> > during kmem_cache_init() for simplicity. ]
> >
> > [ vbabka@xxxxxxx: parse slub_debug parameter in setup_slub_debug()
> > and initialize stack depot in stack_depot_early_init(). ]
> >
> > [ lkp@xxxxxxxxx: Fix build error. ]
> >
> > Link: https://lore.kernel.org/all/YhyeaP8lrzKgKm5A@xxxxxxxxxxxxxxxxxxx-northeast-1.compute.internal/
> > Fixes: ba10d4b46655 ("mm/slub: use stackdepot to save stack trace in objects")
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> > ---
> > include/linux/slab.h | 1 +
> > include/linux/stackdepot.h | 3 ++-
> > mm/slab.c | 5 +++++
> > mm/slob.c | 5 +++++
> > mm/slub.c | 19 ++++++++++++++++---
> > 5 files changed, 29 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 37bde99b74af..d2b0f8f9e5e6 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -762,6 +762,7 @@ extern void kvfree_sensitive(const void *addr, size_t len);
> >
> > unsigned int kmem_cache_size(struct kmem_cache *s);
> > void __init kmem_cache_init_late(void);
> > +int __init slab_stack_depot_init(void);
> >
> > #if defined(CONFIG_SMP) && defined(CONFIG_SLAB)
> > int slab_prepare_cpu(unsigned int cpu);
> > diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
> > index 17f992fe6355..a813a2673c48 100644
> > --- a/include/linux/stackdepot.h
> > +++ b/include/linux/stackdepot.h
> > @@ -12,6 +12,7 @@
> > #define _LINUX_STACKDEPOT_H
> >
> > #include <linux/gfp.h>
> > +#include <linux/slab.h>
> >
> > typedef u32 depot_stack_handle_t;
> >
> > @@ -32,7 +33,7 @@ int stack_depot_init(void);
> > #ifdef CONFIG_STACKDEPOT_ALWAYS_INIT
> > static inline int stack_depot_early_init(void) { return stack_depot_init(); }
> > #else
> > -static inline int stack_depot_early_init(void) { return 0; }
> > +static inline int stack_depot_early_init(void) { return slab_stack_depot_init(); }
> > #endif
>
> I think the approach should be generic for stackdepot, not tied to a
> function that belongs to slab with 3 different implementations.
> E.g. in stackdepot.h declare a variable, e.g. "stack_depot_want_early_init",
> that is checked in stack_depot_early_init() above to call stack_depot_init().
Hmm, yeah, defining it in stack depot would be nice.
I just didn't want to expose a global variable that is specific to slub.
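Something like the below, then? Just a very rough sketch of how I read your
suggestion -- the exact placement, the __initdata annotation and the
!CONFIG_STACKDEPOT case are my guesses, not necessarily what v2 will do:

/* include/linux/stackdepot.h (sketch only) */
extern bool stack_depot_want_early_init;

#ifdef CONFIG_STACKDEPOT_ALWAYS_INIT
static inline int stack_depot_early_init(void) { return stack_depot_init(); }
#else
static inline int stack_depot_early_init(void)
{
	/* Allocate the hash table early only if someone asked for it. */
	return stack_depot_want_early_init ? stack_depot_init() : 0;
}
#endif

/* lib/stackdepot.c (sketch only) */
bool stack_depot_want_early_init __initdata;

That way slub would only have to set the flag from setup_slub_debug(), and
nothing slab-specific would leak into stackdepot.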
> > depot_stack_handle_t stack_depot_save(unsigned long *entries,
> > diff --git a/mm/slab.c b/mm/slab.c
> > index ddf5737c63d9..c7f929665fbe 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -1196,6 +1196,11 @@ static void __init set_up_node(struct kmem_cache *cachep, int index)
> > }
> > }
> >
> > +int __init slab_stack_depot_init(void)
> > +{
> > + return 0;
> > +}
> > +
> > /*
> > * Initialisation. Called after the page allocator have been initialised and
> > * before smp_init().
> > diff --git a/mm/slob.c b/mm/slob.c
> > index 60c5842215f1..7597c219f061 100644
> > --- a/mm/slob.c
> > +++ b/mm/slob.c
> > @@ -725,3 +725,8 @@ void __init kmem_cache_init_late(void)
> > {
> > slab_state = FULL;
> > }
> > +
> > +int __init slab_stack_depot_init(void)
> > +{
> > + return 0;
> > +}
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a74afe59a403..8f130f917977 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -646,6 +646,16 @@ static slab_flags_t slub_debug;
> >
> > static char *slub_debug_string;
> > static int disable_higher_order_debug;
> > +static bool __initdata init_stack_depot;
> > +
> > +int __init slab_stack_depot_init(void)
> > +{
> > +#ifdef CONFIG_STACKDEPOT
> > + if (init_stack_depot)
> > + stack_depot_init();
> > +#endif
> > + return 0;
> > +}
>
>
>
> > /*
> > * slub is about to manipulate internal object metadata. This memory lies
> > @@ -1531,6 +1541,8 @@ static int __init setup_slub_debug(char *str)
> > global_slub_debug_changed = true;
> > } else {
> > slab_list_specified = true;
> > + if (flags & SLAB_STORE_USER)
> > + init_stack_depot = true;
> > }
> > }
> >
> > @@ -1546,6 +1558,10 @@ static int __init setup_slub_debug(char *str)
> > global_flags = slub_debug;
> > slub_debug_string = saved_str;
> > }
> > +
> > + if (global_flags & SLAB_STORE_USER)
> > + init_stack_depot = true;
>
> This looks good; it would just set the "stack_depot_want_early_init"
> variable instead. But logically it should be part of "mm/slub: use
> stackdepot...", not of the patch that introduces the variable. So if
> you don't mind, I would move it there with credit.
>
Oh I don't mind that.
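Just to check I understand it: the two hunks in setup_slub_debug() would
then roughly become something like this (again only a sketch, using the
variable name from your suggestion):

	} else {
		slab_list_specified = true;
		if (flags & SLAB_STORE_USER)
			stack_depot_want_early_init = true;
	}

	...

	if (global_flags & SLAB_STORE_USER)
		stack_depot_want_early_init = true;

and, if I understand correctly, the slab_stack_depot_init() stubs in
mm/slab.c and mm/slob.c would not be needed at all.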
Then I will expect this to be handled in the next series...
I'm looking forward to the next version. Thanks!
--
Thank you, You are awesome!
Hyeonggon :-)
> > out:
> > slub_debug = global_flags;
> > if (slub_debug != 0 || slub_debug_string)
> > @@ -4221,9 +4237,6 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> > s->remote_node_defrag_ratio = 1000;
> > #endif
> >
> > - if (s->flags & SLAB_STORE_USER && IS_ENABLED(CONFIG_STACKDEPOT))
> > - stack_depot_init();
> > -
> > /* Initialize the pre-computed randomized freelist if slab is up */
> > if (slab_state >= UP) {
> > if (init_cache_random_seq(s))
>