Re: [PATCH RESEND] mm, kasan: don't poison boot memory
From: Catalin Marinas
Date: Fri Feb 19 2021 - 11:36:12 EST
On Thu, Feb 18, 2021 at 09:24:49PM +0100, Andrey Konovalov wrote:
> On Thu, Feb 18, 2021 at 11:46 AM Catalin Marinas
> <catalin.marinas@xxxxxxx> wrote:
> >
> > The approach looks fine to me. If you don't like the trade-off, I
> > think you could still leave the kasan poisoning in when
> > CONFIG_DEBUG_KERNEL is enabled.
>
> This won't work; Android enables CONFIG_DEBUG_KERNEL in GKI, as it
> turns out :)
And does this option go into production kernels?
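To put the trade-off in code, the gate would be a one-liner in the free
path, along these lines (a sketch only; kasan_free_pages() is the hook
at the time of writing, but the exact call site depends on where the
boot memory poisoning ends up):

	/*
	 * Sketch: keep poisoning freed boot memory on debug builds only.
	 * Call site and placement are illustrative, not the actual patch.
	 */
	if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
		kasan_free_pages(page, order);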
> > For MTE, we could look at optimising the poisoning code for the
> > page-size case to use STGM or DC GZVA, but I don't think we can make
> > it unnoticeable on large systems (especially with DC GZVA, which is
> > like zeroing the whole of RAM at boot).
>
> https://bugzilla.kernel.org/show_bug.cgi?id=211817
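For concreteness, DC GZVA zeroes and tags a whole DCZID_EL0 block per
operation, so tagging all memory with it at boot really does amount to
zeroing the whole of RAM. Such a loop would mirror the non-zeroing DC
GVA hack below (sketch only, the helper name is made up):

	static inline void __mte_zero_and_tag_page(u64 curr, u64 end)
	{
		/* DCZID_EL0.BS: log2 of the (G)ZVA block size in words */
		u64 bs = 4 << (read_cpuid(DCZID_EL0) & 0xf);

		do {
			/* zero one block and set its allocation tags */
			asm volatile(__MTE_PREAMBLE "dc gzva, %0"
				     :
				     : "r" (curr)
				     : "memory");
			curr += bs;
		} while (curr != end);
	}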
Here's a quick hack, if you can give it a try. It could be optimised
further, maybe by calling __mte_set_mem_tag_page() directly from kasan:
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 7ab500e2ad17..b9b9ca1976eb 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -48,6 +48,22 @@ static inline u8 mte_get_random_tag(void)
 	return mte_get_ptr_tag(addr);
 }
 
+static inline void __mte_set_mem_tag_page(u64 curr, u64 end)
+{
+	/* DCZID_EL0.BS encodes log2 of the DC (G)ZVA block size in words */
+	u64 bs = 4 << (read_cpuid(DCZID_EL0) & 0xf);
+
+	do {
+		/* set the allocation tags of one block; data is untouched */
+		asm volatile(__MTE_PREAMBLE "dc gva, %0"
+			     :
+			     : "r" (curr)
+			     : "memory");
+
+		curr += bs;
+	} while (curr != end);
+}
+
 /*
  * Assign allocation tags for a region of memory based on the pointer tag.
  * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
@@ -63,6 +79,11 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	curr = (u64)__tag_set(addr, tag);
 	end = curr + size;
 
+	if (IS_ALIGNED((unsigned long)addr, PAGE_SIZE) && size == PAGE_SIZE) {
+		__mte_set_mem_tag_page(curr, end);
+		return;
+	}
+
 	do {
 		/*
 		 * 'asm volatile' is required to prevent the compiler to move