Re: [PATCH v3 06/10] kfence, kasan: make KFENCE compatible with KASAN

From: Andrey Konovalov
Date: Tue Sep 29 2020 - 08:21:22 EST


On Mon, Sep 21, 2020 at 3:26 PM Marco Elver <elver@xxxxxxxxxx> wrote:
>
> From: Alexander Potapenko <glider@xxxxxxxxxx>
>
> We make KFENCE compatible with KASAN for testing KFENCE itself. In
> particular, KASAN helps to catch any potential corruptions to KFENCE
> state, or other corruptions that may be a result of freepointer
> corruptions in the main allocators.
>
> To indicate that the combination of the two is generally discouraged,
> CONFIG_EXPERT=y should be set. It also gives us the nice property that
> KFENCE will be build-tested by allyesconfig builds.
>
> Reviewed-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> Co-developed-by: Marco Elver <elver@xxxxxxxxxx>
> Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
> Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
> ---
> lib/Kconfig.kfence | 2 +-
> mm/kasan/common.c | 7 +++++++
> 2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
> index 4c2ea1c722de..6825c1c07a10 100644
> --- a/lib/Kconfig.kfence
> +++ b/lib/Kconfig.kfence
> @@ -10,7 +10,7 @@ config HAVE_ARCH_KFENCE_STATIC_POOL
>
> menuconfig KFENCE
> bool "KFENCE: low-overhead sampling-based memory safety error detector"
> - depends on HAVE_ARCH_KFENCE && !KASAN && (SLAB || SLUB)
> + depends on HAVE_ARCH_KFENCE && (!KASAN || EXPERT) && (SLAB || SLUB)
> depends on JUMP_LABEL # To ensure performance, require jump labels
> select STACKTRACE
> help
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 950fd372a07e..f5c49f0fdeff 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -18,6 +18,7 @@
> #include <linux/init.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> +#include <linux/kfence.h>
> #include <linux/kmemleak.h>
> #include <linux/linkage.h>
> #include <linux/memblock.h>
> @@ -396,6 +397,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> 	tagged_object = object;
> 	object = reset_tag(object);
>
> +	if (is_kfence_address(object))
> +		return false;
> +
> 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> 	    object)) {
> 		kasan_report_invalid_free(tagged_object, ip);
> @@ -444,6 +448,9 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> 	if (unlikely(object == NULL))
> 		return NULL;
>
> +	if (is_kfence_address(object))
> +		return (void *)object;
> +
> 	redzone_start = round_up((unsigned long)(object + size),
> 				KASAN_SHADOW_SCALE_SIZE);
> 	redzone_end = round_up((unsigned long)object + cache->object_size,
> --
> 2.28.0.681.g6f77f65b4e-goog
>

With both KFENCE and KASAN enabled, we need to bail out in all KASAN
hooks that get called from the allocator, right? Do I understand
correctly that these two are the only hooks that get called for
KFENCE-allocated objects, given the way KFENCE is integrated into the
allocator?
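
For readers following the thread, here is a minimal sketch of the
bail-out condition these hunks rely on, assuming the single contiguous
__kfence_pool / KFENCE_POOL_SIZE layout introduced earlier in the
series. The real helper is is_kfence_address() from <linux/kfence.h>;
the name below is only illustrative, not the series' implementation:

#include <linux/compiler.h>	/* unlikely(), __always_inline */
#include <linux/types.h>	/* bool */
#include <linux/kfence.h>	/* assumed: __kfence_pool, KFENCE_POOL_SIZE */

/*
 * Sketch only: KFENCE objects live in one contiguous pool, so a single
 * unsigned comparison identifies them. For addresses below the pool the
 * subtraction wraps around and compares as out of range, so both bounds
 * are covered by one check.
 */
static __always_inline bool is_kfence_address_sketch(const void *addr)
{
	return unlikely((unsigned long)addr - (unsigned long)__kfence_pool <
			KFENCE_POOL_SIZE);
}

With a check like this at the top of a hook, a KFENCE-allocated object
is returned to the caller untouched, since it has no KASAN metadata
(shadow, redzones, quarantine) for the hook to operate on.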