Re: [PATCH v5 1/2] kasan: catch invalid free before SLUB reinitializes the object

From: Andrey Konovalov
Date: Wed Jul 31 2024 - 20:50:25 EST


On Tue, Jul 30, 2024 at 1:06 PM Jann Horn <jannh@xxxxxxxxxx> wrote:
>
> Currently, when KASAN is combined with init-on-free behavior, the
> initialization happens before KASAN's "invalid free" checks.
>
> More importantly, a subsequent commit will want to RCU-delay the actual
> SLUB freeing of an object, and we'd like KASAN to still validate
> synchronously that freeing the object is permitted. (Otherwise this
> change will make the existing testcase kmem_cache_invalid_free fail.)
>
> So add a new KASAN hook that allows KASAN to pre-validate a
> kmem_cache_free() operation before SLUB actually starts modifying the
> object or its metadata.

A few more minor comments below. With that:

Reviewed-by: Andrey Konovalov <andreyknvl@xxxxxxxxx>

Thank you!
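
As an aside for anyone following the thread: the point of the split is
that the free path gains a synchronous validation step that runs before
anything modifies the object, separate from the poisoning/quarantine
step that patch 2/2 can then RCU-delay. Roughly, as a sketch rather than
the literal SLUB code (s, x, and init as in the mm/slub.c hunk below):

        /* Step 1: report double-/invalid-free before the object is touched. */
        if (kasan_slab_pre_free(s, x))
                return false;   /* buggy free; tell SLUB not to free it */

        /* Step 2: poison and maybe quarantine; this part can be delayed. */
        if (kasan_slab_free(s, x, init))
                return false;   /* KASAN took ownership of the object */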

> Inside KASAN, this:
>
> - moves checks from poison_slab_object() into check_slab_free()
> - moves kasan_arch_is_ready() up into callers of poison_slab_object()
> - removes "ip" argument of poison_slab_object() and __kasan_slab_free()
> (since those functions no longer do any reporting)

> - renames check_slab_free() to check_slab_allocation()

check_slab_allocation() is introduced in this patch, so technically
you don't rename anything.

> Acked-by: Vlastimil Babka <vbabka@xxxxxxx> #slub
> Signed-off-by: Jann Horn <jannh@xxxxxxxxxx>
> ---
>  include/linux/kasan.h | 43 ++++++++++++++++++++++++++++++++++---
>  mm/kasan/common.c     | 59 +++++++++++++++++++++++++++++++--------------------
>  mm/slub.c             |  7 ++++++
>  3 files changed, 83 insertions(+), 26 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 70d6a8f6e25d..34cb7a25aacb 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -172,19 +172,50 @@ static __always_inline void * __must_check kasan_init_slab_obj(
> {
>         if (kasan_enabled())
>                 return __kasan_init_slab_obj(cache, object);
>         return (void *)object;
> }
>
> -bool __kasan_slab_free(struct kmem_cache *s, void *object,
> -                       unsigned long ip, bool init);
> +bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
> +                           unsigned long ip);
> +/**
> + * kasan_slab_pre_free - Validate a slab object freeing request.
> + * @object: Object to free.
> + *
> + * This function checks whether freeing the given object might be permitted; it
> + * checks things like whether the given object is properly aligned and not
> + * already freed.
> + *
> + * This function is only intended for use by the slab allocator.
> + *
> + * @Return true if freeing the object is known to be invalid; false otherwise.
> + */

Let's reword this to:

kasan_slab_pre_free - Check whether freeing a slab object is safe.
@object: Object to be freed.

This function checks whether freeing the given object is safe. It
performs checks to detect double-free and invalid-free bugs and
reports them.

This function is intended only for use by the slab allocator.

@Return true if freeing the object is not safe; false otherwise.
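
(With that wording, the caller-side contract is simply, as a sketch:

        if (kasan_slab_pre_free(s, object))
                return;         /* double-free or invalid-free was reported */

i.e. on a true return the caller must not free the object.)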

> +static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
> +                                                void *object)
> +{
> +        if (kasan_enabled())
> +                return __kasan_slab_pre_free(s, object, _RET_IP_);
> +        return false;
> +}
> +
> +bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init);
> +/**
> + * kasan_slab_free - Possibly handle slab object freeing.
> + * @object: Object to free.
> + *
> + * This hook is called from the slab allocator to give KASAN a chance to take
> + * ownership of the object and handle its freeing.
> + * kasan_slab_pre_free() must have already been called on the same object.
> + *
> + * @Return true if KASAN took ownership of the object; false otherwise.
> + */

kasan_slab_free - Poison, initialize, and quarantine a slab object.
@object: Object to be freed.
@init: Whether to initialize the object.

This function poisons a slab object and saves a free stack trace for
it, except for SLAB_TYPESAFE_BY_RCU caches.

For KASAN modes that have integrated memory initialization
(kasan_has_integrated_init() == true), this function also initializes
the object's memory. For other modes, the @init argument is ignored.

For the Generic mode, this function might also quarantine the object.
When this happens, KASAN will defer freeing the object to a later
stage and handle it internally then. The return value indicates
whether the object was quarantined.

This function is intended only for use by the slab allocator.

@Return true if KASAN quarantined the object; false otherwise.
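
(Callers are then expected to handle the return value along these
lines, sketched:

        if (kasan_slab_free(s, object, init))
                return;         /* quarantined; must not go onto the freelist */

which matches how slab_free_hook() uses it.)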

> static __always_inline bool kasan_slab_free(struct kmem_cache *s,
>                                             void *object, bool init)
> {
>         if (kasan_enabled())
> -                return __kasan_slab_free(s, object, _RET_IP_, init);
> +                return __kasan_slab_free(s, object, init);
>         return false;
> }
>
> void __kasan_kfree_large(void *ptr, unsigned long ip);
> static __always_inline void kasan_kfree_large(void *ptr)
> {
> @@ -368,12 +399,18 @@ static inline void kasan_poison_new_object(struct kmem_cache *cache,
>                                         void *object) {}
> static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
>                                 const void *object)
> {
>         return (void *)object;
> }
> +
> +static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
> +{
> +        return false;
> +}
> +
> static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
> {
>         return false;
> }
> static inline void kasan_kfree_large(void *ptr) {}
> static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 85e7c6b4575c..8cede1ce00e1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -205,59 +205,65 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
>         /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
>         object = set_tag(object, assign_tag(cache, object, true));
>
>         return (void *)object;
> }
>
> -static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
> -                                      unsigned long ip, bool init)
> +/* returns true for invalid request */

"Returns true when freeing the object is not safe."

> +static bool check_slab_allocation(struct kmem_cache *cache, void *object,
> +                                  unsigned long ip)
> {
> -        void *tagged_object;
> -
> -        if (!kasan_arch_is_ready())
> -                return false;
> +        void *tagged_object = object;
>
> -        tagged_object = object;
>         object = kasan_reset_tag(object);
>
>         if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
>                 kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_INVALID_FREE);
>                 return true;
>         }
>
> -        /* RCU slabs could be legally used after free within the RCU period. */
> -        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> -                return false;
> -
>         if (!kasan_byte_accessible(tagged_object)) {
>                 kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_DOUBLE_FREE);
>                 return true;
>         }
>
> +        return false;
> +}
> +
> +static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> +                                      bool init)
> +{
> +        void *tagged_object = object;
> +
> +        object = kasan_reset_tag(object);
> +
> +        /* RCU slabs could be legally used after free within the RCU period. */
> +        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> +                return;
> +
>         kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
>                         KASAN_SLAB_FREE, init);
>
>         if (kasan_stack_collection_enabled())
>                 kasan_save_free_info(cache, tagged_object);
> +}
>
> -        return false;
> +bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> +                           unsigned long ip)
> +{
> +        if (!kasan_arch_is_ready() || is_kfence_address(object))
> +                return false;
> +        return check_slab_allocation(cache, object, ip);
> }
>
> -bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> -                       unsigned long ip, bool init)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init)
> {
> -        if (is_kfence_address(object))
> +        if (!kasan_arch_is_ready() || is_kfence_address(object))
>                 return false;
>
> -        /*
> -         * If the object is buggy, do not let slab put the object onto the
> -         * freelist. The object will thus never be allocated again and its
> -         * metadata will never get released.
> -         */
> -        if (poison_slab_object(cache, object, ip, init))
> -                return true;
> +        poison_slab_object(cache, object, init);
>
>         /*
>          * If the object is put into quarantine, do not let slab put the object
>          * onto the freelist for now. The object's metadata is kept until the
>          * object gets evicted from quarantine.
>          */
> @@ -503,15 +509,22 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
>                 kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
>                 return true;
>         }
>
>         if (is_kfence_address(ptr))
>                 return false;
> +        if (!kasan_arch_is_ready())
> +                return true;

Hm, I think we had a bug here: the function should return true in both
cases, for KFENCE-managed objects just as when the arch is not ready.
This seems reasonable: if KASAN is not checking the object, the caller
can do whatever they want with it.
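
I.e., something like this (untested):

        if (is_kfence_address(ptr) || !kasan_arch_is_ready())
                return true;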

>         slab = folio_slab(folio);
> -        return !poison_slab_object(slab->slab_cache, ptr, ip, false);
> +
> +        if (check_slab_allocation(slab->slab_cache, ptr, ip))
> +                return false;
> +
> +        poison_slab_object(slab->slab_cache, ptr, false);
> +        return true;
> }
>
> void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
> {
>         struct slab *slab;
>         gfp_t flags = 0; /* Might be executing under a lock. */
> diff --git a/mm/slub.c b/mm/slub.c
> index 3520acaf9afa..0c98b6a2124f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2223,12 +2223,19 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>         __kcsan_check_access(x, s->object_size,
>                              KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
>
>         if (kfence_free(x))
>                 return false;
>
> +        /*
> +         * Give KASAN a chance to notice an invalid free operation before we
> +         * modify the object.
> +         */
> +        if (kasan_slab_pre_free(s, x))
> +                return false;
> +
>         /*
>          * As memory initialization might be integrated into KASAN,
>          * kasan_slab_free and initialization memset's must be
>          * kept together to avoid discrepancies in behavior.
>          *
>          * The initialization memset's clear the object and the metadata,
>
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>