Re: [PATCH mm] slub, kasan: improve interaction of KASAN and slub_debug poisoning
From: Vlastimil Babka
Date: Mon Nov 27 2023 - 06:44:49 EST
On 11/23/23 00:12, andrey.konovalov@xxxxxxxxx wrote:
> From: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
>
> When both KASAN and slub_debug are enabled and a free object is being
> prepared in setup_object, slub_debug poisons the object data before KASAN
> initializes its per-object metadata.
>
> Right now, in setup_object, KASAN only initializes the alloc metadata,
> which is always stored outside of the object. slub_debug is aware of
> this and it skips poisoning and checking that memory area.
>
> However, with the following patch in this series, KASAN also starts
> initializing its free metadata in setup_object. As this metadata might be
> stored within the object, this initialization might overwrite the
> slub_debug poisoning. This leads to slub_debug reports.
>
> Thus, skip checking slub_debug poisoning of the object data area that
> overlaps with the in-object KASAN free metadata.
>
> Also make slub_debug poisoning of tail kmalloc redzones more precise when
> KASAN is enabled: slub_debug can still poison and check the tail kmalloc
> allocation area that comes after the KASAN free metadata.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Thanks.
> ---
>
> Andrew, please put this patch right before "kasan: use stack_depot_put
> for Generic mode".
> ---
> mm/slub.c | 41 ++++++++++++++++++++++++++---------------
> 1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 63d281dfacdb..782bd8a6bd34 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s,
> void *object, unsigned int orig_size)
> {
> void *p = kasan_reset_tag(object);
> + unsigned int kasan_meta_size;
>
> if (!slub_debug_orig_size(s))
> return;
>
> -#ifdef CONFIG_KASAN_GENERIC
> /*
> - * KASAN could save its free meta data in object's data area at
> - * offset 0, if the size is larger than 'orig_size', it will
> - * overlap the data redzone in [orig_size+1, object_size], and
> - * the check should be skipped.
> + * KASAN can save its free meta data inside of the object at offset 0.
> + * If this meta data size is larger than 'orig_size', it will overlap
> + * the data redzone in [orig_size+1, object_size]. Thus, we adjust
> + * 'orig_size' to be at least as big as KASAN's meta data.
> */
> - if (kasan_metadata_size(s, true) > orig_size)
> - orig_size = s->object_size;
> -#endif
> + kasan_meta_size = kasan_metadata_size(s, true);
> + if (kasan_meta_size > orig_size)
> + orig_size = kasan_meta_size;
>
> p += get_info_end(s);
> p += sizeof(struct track) * 2;
> @@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
> {
> u8 *p = object;
> u8 *endobject = object + s->object_size;
> - unsigned int orig_size;
> + unsigned int orig_size, kasan_meta_size;
>
> if (s->flags & SLAB_RED_ZONE) {
> if (!check_bytes_and_report(s, slab, object, "Left Redzone",
> @@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
> }
>
> if (s->flags & SLAB_POISON) {
> - if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
> - (!check_bytes_and_report(s, slab, p, "Poison", p,
> - POISON_FREE, s->object_size - 1) ||
> - !check_bytes_and_report(s, slab, p, "End Poison",
> - p + s->object_size - 1, POISON_END, 1)))
> - return 0;
> + if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) {
> + /*
> + * KASAN can save its free meta data inside of the
> + * object at offset 0. Thus, skip checking the part of
> + * the redzone that overlaps with the meta data.
> + */
> + kasan_meta_size = kasan_metadata_size(s, true);
> + if (kasan_meta_size < s->object_size - 1 &&
> + !check_bytes_and_report(s, slab, p, "Poison",
> + p + kasan_meta_size, POISON_FREE,
> + s->object_size - kasan_meta_size - 1))
> + return 0;
> + if (kasan_meta_size < s->object_size &&
> + !check_bytes_and_report(s, slab, p, "End Poison",
> + p + s->object_size - 1, POISON_END, 1))
> + return 0;
> + }
> /*
> * check_pad_bytes cleans up on its own.
> */