Re: [PATCH] mm/slub: Avoid recursive loop with kmemleak

From: Andrew Morton
Date: Thu Apr 25 2024 - 19:49:25 EST


On Thu, 25 Apr 2024 14:30:55 -0700 Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:

> > > --- a/mm/kmemleak.c
> > > +++ b/mm/kmemleak.c
> > > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> > >
> > > /* try the slab allocator first */
> > > if (object_cache) {
> > > - object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > > + object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
> >
> > What do these get accounted to, or does this now pop a warning with
> > CONFIG_MEM_ALLOC_PROFILING_DEBUG?
>
> Thanks for the fix, Kees!
> I'll look into this recursion more closely to see if there is a better
> way to break it. As a stopgap measure it seems ok to me. I also think
> it's unlikely that one would use both tracking mechanisms on the same
> system.

I'd really like to start building mm-stable without having to route
around memprofiling. How about I include Kees's patch in that for now?
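
For anyone following along, here is a minimal userspace sketch of the
re-entrancy problem under discussion: two allocation trackers that each
allocate their own metadata from a hooked allocator can re-enter each other
without bound, and exempting one tracker's internal allocations (the role
kmem_cache_alloc_noprof() plays in the hunk quoted above) breaks the cycle.
All names below (tracked_alloc, track_metadata, metadata_alloc_untracked)
are hypothetical illustration, not kernel API.

/*
 * Simplified userspace model of the kmemleak <-> allocation-profiling
 * recursion. Hypothetical names; not kernel code. The "untracked" path
 * stands in for kmem_cache_alloc_noprof() in the patch above.
 */
#include <stdio.h>
#include <stdlib.h>

static int depth;                  /* recursion depth, for demonstration */
static int use_untracked_metadata; /* 1 = break the cycle, 0 = recurse   */

static void *metadata_alloc_untracked(size_t size)
{
	/* Internal metadata allocation that the tracker does NOT hook. */
	return malloc(size);
}

static void track_metadata(void *ptr);

/* Every "tracked" allocation notifies the tracker, like a slab hook. */
static void *tracked_alloc(size_t size)
{
	void *p = malloc(size);

	track_metadata(p);
	return p;
}

/* The tracker itself needs memory to record each allocation it sees. */
static void track_metadata(void *ptr)
{
	void *record;

	if (++depth > 10) {            /* stand-in for stack exhaustion */
		fprintf(stderr, "recursion detected at depth %d\n", depth);
		exit(1);
	}

	if (use_untracked_metadata)
		record = metadata_alloc_untracked(32); /* breaks the loop */
	else
		record = tracked_alloc(32);            /* re-enters the hook */

	(void)ptr;
	free(record);
	depth--;
}

int main(void)
{
	use_untracked_metadata = 1;    /* flip to 0 to see the runaway loop */
	free(tracked_alloc(64));
	puts("no recursion: metadata came from the untracked path");
	return 0;
}

This is only a model of the general shape of the problem; the actual patch
breaks the loop by switching kmemleak's object_cache allocation to
kmem_cache_alloc_noprof(), so kmemleak's own bookkeeping is not fed back
into the allocation-profiling machinery.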