Re: [PATCH 9/9] mm, slab/slub: move and improve cache_from_obj()

From: Kees Cook
Date: Thu Jun 18 2020 - 15:59:13 EST


On Thu, Jun 18, 2020 at 12:10:38PM +0200, Vlastimil Babka wrote:
>
> On 6/17/20 7:49 PM, Kees Cook wrote:
> > On Wed, Jun 10, 2020 at 06:31:35PM +0200, Vlastimil Babka wrote:
> >> The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b:
> >> always get the cache from its page in kmem_cache_free()") to support kmemcg,
> >> where a per-memcg cache can be different from the root one, so we can't use
> >> the kmem_cache pointer given to kmem_cache_free().
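(For context: the consumer of this is kmem_cache_free() in each allocator; on the
SLUB side it looks roughly like the simplified sketch below, not the literal tree
contents, where the cache used for the actual free comes from cache_from_obj()
rather than from the caller:)

void kmem_cache_free(struct kmem_cache *s, void *x)
{
	/* Resolve the cache from the object's page; under kmemcg this may
	 * differ from the caller-supplied pointer. */
	s = cache_from_obj(s, x);
	if (!s)
		return;
	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
	trace_kmem_cache_free(_RET_IP_, x);
}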
> >>
> >> Prior to that commit, SLUB already had debugging check+warning that could be
> >> enabled to compare the given kmem_cache pointer to one referenced by the slab
> >> page where the object-to-be-freed resides. This check was moved to
> >> cache_from_obj(). Later the check was also enabled for SLAB_FREELIST_HARDENED
> >> configs by commit 598a0717a816 ("mm/slab: validate cache membership under
> >> freelist hardening").
> >>
> >> These checks and warnings can be especially useful for debugging, and the
> >> reporting can be improved. Commit 598a0717a816 changed the pr_err() plus
> >> WARN_ON_ONCE() combination to WARN_ONCE(), so only the first hit is now
> >> reported and subsequent ones are silent. This patch changes it to WARN() so
> >> that all errors are reported.
> >>
> >> It's also useful to print SLUB allocation/free tracking info for the offending
> >> object, if tracking is enabled. We could export the SLUB print_tracking()
> >> function and provide an empty one for SLAB, or realize that both the debugging
> >> and hardening cases in cache_from_obj() are only supported by SLUB anyway. So
> >> this patch moves cache_from_obj() from slab.h to separate instances in slab.c
> >> and slub.c, where the SLAB version only does the kmemcg lookup and even could
> >
> > Oops. I made a mistake when I applied CONFIG_SLAB_FREELIST_HARDENED
> > here: I was thinking of SLAB_FREELIST_RANDOM's coverage (SLUB and SLAB),
> > and I see now that I never updated CONFIG_SLAB_FREELIST_HARDENED to
> > cover SLAB and SLOB.
> >
> > The point being: I still want the sanity check for the SLAB case under
> > hardening. This needs to stay a common function. The whole point is
> > to catch corruption from the wrong kmem_cache * being associated with
> > an object, and that's agnostic of slab/slub/slob.
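To make the failure mode concrete, the kind of bug this catches looks roughly like
the sketch below (foo_cache/bar_cache are made-up names for illustration):

	struct foo *p = kmem_cache_alloc(foo_cache, GFP_KERNEL);
	/* ... type confusion, UAF reallocation, or plain programming error ... */
	kmem_cache_free(bar_cache, p);	/* wrong cache for this object */

With the check in place, virt_to_cache(p) won't match bar_cache, so we WARN and
the free proceeds against the cache the slab page actually belongs to.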
> >
> > So, I'll send a follow-up to this patch to actually do what I had
> > originally intended for 598a0717a816 ("mm/slab: validate cache membership
> > under freelist hardening"), which wasn't intended to be SLUB-specific.
>
> To prevent the churn of your patch moving cache_from_obj() back to slab.h, I
> think it's best if we modify my patch. The patch below should be squashed into
> the current version in mmots, with the commit log used for the whole result.
>
> This will cause conflicts while reapplying Roman's
> mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-allocations.patch, which
> can be fixed by
> a) throwing away the conflicting hunks for cache_from_obj() in slab.c and slub.c
> b) applying this hunk instead:
>
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -455,12 +455,11 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>  	struct kmem_cache *cachep;
> 
>  	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> -	    !memcg_kmem_enabled() &&
>  	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
>  		return s;
> 
>  	cachep = virt_to_cache(x);
> -	if (WARN(cachep && !slab_equal_or_root(cachep, s),
> +	if (WARN(cachep && cachep != s,
>  		 "%s: Wrong slab cache. %s but object is from %s\n",
>  		 __func__, s->name, cachep->name))
>  		print_tracking(cachep, x);
>
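(For reference, my understanding is that with this fixup plus Roman's series on
top, the helper would end up looking roughly like this:)

static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
	struct kmem_cache *cachep;

	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
		return s;

	cachep = virt_to_cache(x);
	if (WARN(cachep && cachep != s,
		 "%s: Wrong slab cache. %s but object is from %s\n",
		 __func__, s->name, cachep->name))
		print_tracking(cachep, x);
	return cachep;
}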
> The fixup patch itself:
> ----8<----
> From b8df607d92b37e5329ce7bda62b2b364cc249893 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@xxxxxxx>
> Date: Thu, 18 Jun 2020 11:52:03 +0200
> Subject: [PATCH] mm, slab/slub: improve error reporting and overhead of
> cache_from_obj()
>
> The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b:
> always get the cache from its page in kmem_cache_free()") to support
> kmemcg, where a per-memcg cache can be different from the root one, so we
> can't use the kmem_cache pointer given to kmem_cache_free().
>
> Prior to that commit, SLUB already had debugging check+warning that could
> be enabled to compare the given kmem_cache pointer to one referenced by
> the slab page where the object-to-be-freed resides. This check was moved
> to cache_from_obj(). Later the check was also enabled for
> SLAB_FREELIST_HARDENED configs by commit 598a0717a816 ("mm/slab: validate
> cache membership under freelist hardening").
>
> These checks and warnings can be especially useful for debugging, and the
> reporting can be improved. Commit 598a0717a816 changed the pr_err() plus
> WARN_ON_ONCE() combination to WARN_ONCE(), so only the first hit is now
> reported and subsequent ones are silent. This patch changes it to WARN() so
> that all errors are reported.
>
> It's also useful to print SLUB allocation/free tracking info for the offending
> object, if tracking is enabled. Thus, export the SLUB print_tracking() function
> and provide an empty one for SLAB.
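(Presumably the export amounts to something like this in mm/slab.h -- sketch:)

#ifdef CONFIG_SLUB_DEBUG
void print_tracking(struct kmem_cache *s, void *object);
#else
static inline void print_tracking(struct kmem_cache *s, void *object) { }
#endif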
>
> For SLUB we can also benefit from the static key check in
> kmem_cache_debug_flags(), but we need to move this function to slab.h and
> declare the static key there.
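(I.e. something along these lines once moved to mm/slab.h, assuming the
slub_debug_enabled static key introduced earlier in the series -- a sketch, not
the literal patch contents:)

#ifdef CONFIG_SLUB_DEBUG
#ifdef CONFIG_SLUB_DEBUG_ON
DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
#else
DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
#endif
#endif

/*
 * Returns true if any of the specified slub_debug flags is enabled for the
 * cache; the static key keeps the common (no debugging) case cheap.
 */
static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
{
#ifdef CONFIG_SLUB_DEBUG
	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
	if (static_branch_unlikely(&slub_debug_enabled))
		return s->flags & flags;
#endif
	return false;
}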
>
> [1] https://lore.kernel.org/r/20200608230654.828134-18-guro@xxxxxx
>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>

Acked-by: Kees Cook <keescook@xxxxxxxxxxxx>

I will rebase my fix for SLAB_FREELIST_HARDENED coverage on this.
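
The Kconfig side of that follow-up would be roughly the sketch below (exact
allocator coverage still to be decided):

config SLAB_FREELIST_HARDENED
	bool "Harden slab freelist metadata"
	depends on SLAB || SLUB

i.e. dropping the current SLUB-only dependency so the common check in slab.h is
compiled in for SLAB as well.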

Thanks!

--
Kees Cook