Re: [PATCH v10 01/13] kasan: sw_tags: Use arithmetic shift for shadow computation
From: Andrey Ryabinin
Date: Thu Mar 05 2026 - 14:06:01 EST
Maciej Wieczor-Retman <m.wieczorretman@xxxxx> writes:
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -558,6 +558,13 @@ static inline bool kasan_arch_is_ready(void) { return true; }
> #error kasan_arch_is_ready only works in KASAN generic outline mode!
> #endif
>
> +#ifndef arch_kasan_non_canonical_hook
> +static inline bool arch_kasan_non_canonical_hook(unsigned long addr)
> +{
> + return false;
> +}
> +#endif
> +
> #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>
> void kasan_kunit_test_suite_start(void);
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 62c01b4527eb..53152d148deb 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -642,10 +642,19 @@ void kasan_non_canonical_hook(unsigned long addr)
> const char *bug_type;
>
> /*
> - * All addresses that came as a result of the memory-to-shadow mapping
> - * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
> + * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
> + * and never overflows with the chosen KASAN_SHADOW_OFFSET values. Thus,
> + * the possible shadow addresses (even for bogus pointers) belong to a
> + * single contiguous region that is the result of kasan_mem_to_shadow()
> + * applied to the whole address space.
> */
> - if (addr < KASAN_SHADOW_OFFSET)
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
> + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
> + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
> + return;
> + }
> +
> + if (arch_kasan_non_canonical_hook(addr))
> return;
>
I've noticed that we currently classify bugs incorrectly in SW_TAGS
mode. I've sent a fix for it [1]:
[1] https://lkml.kernel.org/r/20260305185659.20807-1-ryabinin.a.a@xxxxxxxxx
While at it, I was wondering whether we can make the logic above more
arch/mode agnostic and avoid per-arch hooks, so I ended up with the
following patch (it applies on top of the fix in [1]).
I think it should work with any arch or mode, and with both signed and
unsigned shifting.
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e804b1e1f886..1e4521b5ef14 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -640,12 +640,20 @@ void kasan_non_canonical_hook(unsigned long addr)
{
unsigned long orig_addr, user_orig_addr;
const char *bug_type;
+ void *tagged_null = set_tag(NULL, KASAN_TAG_KERNEL);
+ void *tagged_addr = set_tag((void *)addr, KASAN_TAG_KERNEL);
/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * Filter out addresses that cannot be shadow memory accesses generated
+ * by the compiler.
+ *
+ * In SW_TAGS mode, when computing a shadow address, the compiler always
+ * sets the kernel tag (some top bits) on the pointer *before* computing
+ * the memory-to-shadow mapping. As a result, valid shadow addresses
+ * are derived from tagged kernel pointers.
*/
- if (addr < KASAN_SHADOW_OFFSET)
+ if (tagged_addr < kasan_mem_to_shadow(tagged_null) ||
+ tagged_addr > kasan_mem_to_shadow((void *)(~0ULL)))
return;
orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
@@ -670,7 +678,7 @@ void kasan_non_canonical_hook(unsigned long addr)
} else if (user_orig_addr < TASK_SIZE) {
bug_type = "probably user-memory-access";
orig_addr = user_orig_addr;
- } else if (addr_in_shadow((void *)addr))
+ } else if (addr_in_shadow(tagged_addr))
bug_type = "probably wild-memory-access";
else
bug_type = "maybe wild-memory-access";
--
2.52.0