Re: [RFC] mm: kmemleak: replace __GFP_NOFAIL to GFP_NOWAIT in gfp_kmemleak_mask

From: Chunyu Hu
Date: Wed Apr 25 2018 - 10:33:55 EST




----- Original Message -----
> From: "Catalin Marinas" <catalin.marinas@xxxxxxx>
> To: "Chunyu Hu" <chuhu@xxxxxxxxxx>
> Cc: "Michal Hocko" <mhocko@xxxxxxxxxx>, "Chunyu Hu" <chuhu.ncepu@xxxxxxxxx>, "Dmitry Vyukov" <dvyukov@xxxxxxxxxx>,
> "LKML" <linux-kernel@xxxxxxxxxxxxxxx>, "Linux-MM" <linux-mm@xxxxxxxxx>
> Sent: Wednesday, April 25, 2018 8:51:55 PM
> Subject: Re: [RFC] mm: kmemleak: replace __GFP_NOFAIL to GFP_NOWAIT in gfp_kmemleak_mask
>
> On Wed, Apr 25, 2018 at 05:50:41AM -0400, Chunyu Hu wrote:
> > ----- Original Message -----
> > > From: "Catalin Marinas" <catalin.marinas@xxxxxxx>
> > > On Tue, Apr 24, 2018 at 07:20:57AM -0600, Michal Hocko wrote:
> > > > On Mon 23-04-18 12:17:32, Chunyu Hu wrote:
> > > > [...]
> > > > > So if there is a new flag, it would be the 25th bits.
> > > >
> > > > No new flags please. Can you simply store a simple bool into
> > > > fail_page_alloc
> > > > and have save/restore api for that?
> > >
> > > For kmemleak, we probably first hit failslab. Something like below may
> > > do the trick:
> > >
> > > diff --git a/mm/failslab.c b/mm/failslab.c
> > > index 1f2f248e3601..63f13da5cb47 100644
> > > --- a/mm/failslab.c
> > > +++ b/mm/failslab.c
> > > @@ -29,6 +29,9 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> > >  	if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB))
> > >  		return false;
> > >
> > > +	if (s->flags & SLAB_NOLEAKTRACE)
> > > +		return false;
> > > +
> > >  	return should_fail(&failslab.attr, s->object_size);
> > >  }
> >
> > This may be an easy enough way to skip fault injection for the
> > kmemleak slab objects.
>
> This flag was added to avoid kmemleak tracing itself, so it could be used
> for other kmemleak-related cases as well.
>
> > > Can we get a second should_fail() via should_fail_alloc_page() if a new
> > > slab page is allocated?
> >
> > Looking at the code path below, what do you mean by getting a second
> > should_fail() via should_fail_alloc_page()?
>
> Kmemleak calls kmem_cache_alloc() on a cache with SLAB_NOLEAKTRACE, so the
> first point of failure injection is __should_failslab() which we can
> handle with the slab flag. The slab allocator itself ends up calling
> alloc_pages() to allocate a slab page (and __GFP_NOFAIL is explicitly
> cleared). Here we have the second potential failure injection via

Indeed.

> should_fail_alloc_page(). That's unless order < fail_page_alloc.min_order,
> which I think is the default case (min_order = 1, while the slab page
> allocation for kmemleak would need an order of 0). It's not ideal but we
> may get away with it.

On my workstation, I checked and the value shown is order=2:

[mm]# cat /sys/kernel/slab/kmemleak_object/order
2
[mm]# uname -r
4.17.0-rc1.syzcaller+


If the order is 2, we do not take that branch, so false is not returned and
the injection is not skipped:
static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
{
	if (order < fail_page_alloc.min_order)
		return false;
	...

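For reference, the defaults in mm/page_alloc.c look roughly like this (quoting
from memory, so my tree may differ slightly); with min_order = 1, an order-2
slab page allocation is still eligible for injection:

static struct {
	struct fault_attr attr;
	bool ignore_gfp_highmem;
	bool ignore_gfp_reclaim;
	u32 min_order;
} fail_page_alloc = {
	.attr = FAULT_ATTR_INITIALIZER,
	.ignore_gfp_reclaim = true,
	.ignore_gfp_highmem = true,
	.min_order = 1,
};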

>
> > It seems we would need to insert the flag between alloc_slab_page() and
> > alloc_pages()? Without a GFP flag, it's difficult to pass the info to
> > should_fail_alloc_page() and keep things simple at the same time.
>
> Indeed.
>
> > Or, as Michal suggested, completely disable page alloc fault injection
> > when kmemleak is enabled, and enable it again when kmemleak is off.
>
> Dmitry's point was that kmemleak is still useful to detect leaks on the
> error path where errors are actually introduced by the fault injection.
> Kmemleak cannot cope with allocation failures as it needs a pretty
> precise tracking of the allocated objects.

Understood.

>
> An alternative could be to not free the early_log buffer in kmemleak and
> use that memory in an emergency when allocation fails (though I don't
> particularly like this).
>
> Yet another option is to use NOFAIL and remove NORETRY in kmemleak when
> fault injection is enabled.

I'm going to try it this way and see whether any warnings show up at runtime.
This would be the best option if it works.
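As a rough, untested sketch of how I read the suggestion (whether to key this
off the config symbols or a runtime knob is one of the things I still need to
check, and the #else branch is only meant to restate the current definition):

/* Untested: drop __GFP_NORETRY only when fault injection is built in. */
#if defined(CONFIG_FAILSLAB) || defined(CONFIG_FAIL_PAGE_ALLOC)
#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
				 __GFP_NOMEMALLOC | __GFP_NOWARN | \
				 __GFP_NOFAIL)
#else
#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
				 __GFP_NORETRY | __GFP_NOMEMALLOC | \
				 __GFP_NOWARN | __GFP_NOFAIL)
#endif

The idea being that with fault injection built in, the kmemleak metadata
allocation is allowed to retry instead of giving up early, while a normal
build keeps today's behaviour.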

>
> --
> Catalin
>

--
Regards,
Chunyu Hu