Re: [PATCH v1 1/2] kasan: Unpoison pcpu chunks with base address tag
From: Lorenzo Stoakes
Date: Wed Nov 05 2025 - 17:01:06 EST
Hi,
This patch is breaking the build for mm-new with KASAN enabled:
mm/kasan/common.c:587:6: error: no previous prototype for ‘__kasan_unpoison_vmap_areas’ [-Werror=missing-prototypes]
587 | void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
Looks to be because CONFIG_KASAN_VMALLOC is not set in my configuration: the
declaration in include/linux/kasan.h sits inside the CONFIG_KASAN_VMALLOC
branch, but the definition in mm/kasan/common.c is built unconditionally, so
-Wmissing-prototypes fires. You probably need to do:
#ifdef CONFIG_KASAN_VMALLOC
void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
{
int area;
for (area = 0 ; area < nr_vms ; area++) {
kasan_poison(vms[area]->addr, vms[area]->size,
arch_kasan_get_tag(vms[area]->addr), false);
}
}
#endif
That fixes the build for me.
Andrew - can we maybe apply this as a workaround to fix the build until
Maciej has a chance to see if he agrees with this fix?
Thanks, Lorenzo
On Tue, Nov 04, 2025 at 02:49:08PM +0000, Maciej Wieczor-Retman wrote:
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@xxxxxxxxx>
>
> A KASAN tag mismatch, possibly causing a kernel panic, can be observed
> on systems with a tag-based KASAN enabled and with multiple NUMA nodes.
> It was reported on arm64 and reproduced on x86. It can be explained in
> the following points:
>
> 1. There can be more than one virtual memory chunk.
> 2. Chunk's base address has a tag.
> 3. The base address points at the first chunk and thus inherits
> the tag of the first chunk.
> 4. The subsequent chunks will be accessed with the tag from the
> first chunk.
> 5. Thus, the subsequent chunks need to have their tag set to
> match that of the first chunk.
>
> Refactor code by moving it into a helper in preparation for the actual
> fix.
>
> Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
> Cc: <stable@xxxxxxxxxxxxxxx> # 6.1+
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@xxxxxxxxx>
> Tested-by: Baoquan He <bhe@xxxxxxxxxx>
> ---
> Changelog v1 (after splitting off from the KASAN series):
> - Rewrite first paragraph of the patch message to point at the user
> impact of the issue.
> - Move helper to common.c so it can be compiled in all KASAN modes.
>
> include/linux/kasan.h | 10 ++++++++++
> mm/kasan/common.c | 11 +++++++++++
> mm/vmalloc.c | 4 +---
> 3 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index d12e1a5f5a9a..b00849ea8ffd 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -614,6 +614,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
> __kasan_poison_vmalloc(start, size);
> }
>
> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
> +static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
> +{
> + if (kasan_enabled())
> + __kasan_unpoison_vmap_areas(vms, nr_vms);
> +}
> +
> #else /* CONFIG_KASAN_VMALLOC */
>
> static inline void kasan_populate_early_vm_area_shadow(void *start,
> @@ -638,6 +645,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
> static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
> { }
>
> +static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
> +{ }
> +
> #endif /* CONFIG_KASAN_VMALLOC */
>
> #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index d4c14359feaf..c63544a98c24 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -28,6 +28,7 @@
> #include <linux/string.h>
> #include <linux/types.h>
> #include <linux/bug.h>
> +#include <linux/vmalloc.h>
>
> #include "kasan.h"
> #include "../slab.h"
> @@ -582,3 +583,13 @@ bool __kasan_check_byte(const void *address, unsigned long ip)
> }
> return true;
> }
> +
> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
> +{
> + int area;
> +
> + for (area = 0 ; area < nr_vms ; area++) {
> + kasan_poison(vms[area]->addr, vms[area]->size,
> + arch_kasan_get_tag(vms[area]->addr), false);
> + }
> +}
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 798b2ed21e46..934c8bfbcebf 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4870,9 +4870,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
> * With hardware tag-based KASAN, marking is skipped for
> * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
> */
> - for (area = 0; area < nr_vms; area++)
> - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
> - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
> + kasan_unpoison_vmap_areas(vms, nr_vms);
>
> kfree(vas);
> return vms;
> --
> 2.51.0
>