Re: [PATCH 9/9] mm, slab/slub: move and improve cache_from_obj()
From: Kees Cook
Date: Wed Jun 17 2020 - 13:49:34 EST
On Wed, Jun 10, 2020 at 06:31:35PM +0200, Vlastimil Babka wrote:
> The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b:
> always get the cache from its page in kmem_cache_free()") to support kmemcg,
> where the per-memcg cache can differ from the root one, so we can't use the
> kmem_cache pointer given to kmem_cache_free().
>
> Prior to that commit, SLUB already had a debugging check and warning that
> could be enabled to compare the given kmem_cache pointer to the one referenced
> by the slab page where the object to be freed resides. This check was moved to
> cache_from_obj(). Later the check was also enabled for SLAB_FREELIST_HARDENED
> configs by commit 598a0717a816 ("mm/slab: validate cache membership under
> freelist hardening").
>
> These checks and warnings can be useful, especially for debugging, and they
> can be improved. Commit 598a0717a816 replaced the pr_err() + WARN_ON_ONCE()
> combination with WARN_ONCE(), so only the first hit is reported and later
> ones are silent. This patch changes it to WARN() so that all errors are
> reported.
>
> It's also useful to print SLUB allocation/free tracking info for the offending
> object, if tracking is enabled. We could export the SLUB print_tracking()
> function and provide an empty one for SLAB, or realize that both the debugging
> and hardening cases in cache_from_obj() are only supported by SLUB anyway. So
> this patch moves cache_from_obj() from slab.h to separate instances in slab.c
> and slub.c, where the SLAB version only does the kmemcg lookup and even could

Oops. I made a mistake when I applied CONFIG_SLAB_FREELIST_HARDENED here:
I was thinking of CONFIG_SLAB_FREELIST_RANDOM's coverage (SLUB and SLAB),
and I see now that I never updated CONFIG_SLAB_FREELIST_HARDENED to cover
SLAB and SLOB.

The point being: I still want the sanity check for the SLAB case under
hardening. This needs to stay a common function. The whole point is
to catch corruption from the wrong kmem_cache * being associated with
an object, and that's agnostic of slab/slub/slob.
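
For reference, a minimal sketch of what that common check can look like,
loosely modeled on the pre-patch helper in mm/slab.h (virt_to_cache() and
slab_equal_or_root() are the helpers that lived there at the time; the body
below is illustrative, not the merged code):

static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
	struct kmem_cache *cachep;

	/* Fast path: skip the lookup unless hardening/debugging/kmemcg needs it. */
	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
	    !memcg_kmem_enabled() &&
	    !(s->flags & SLAB_CONSISTENCY_CHECKS))
		return s;

	/* Which cache does the page backing this object actually belong to? */
	cachep = virt_to_cache(x);
	WARN(cachep && !slab_equal_or_root(cachep, s),
	     "%s: Wrong slab cache. %s but object is from %s\n",
	     __func__, s->name, cachep->name);
	return cachep;
}

Nothing in that page lookup is SLUB-specific, which is why the check can
stay in common code.
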
So, I'll send a follow-up to this patch to do what I had originally intended
for 598a0717a816 ("mm/slab: validate cache membership under freelist
hardening"), which was never meant to be SLUB-specific.
--
Kees Cook