Re: [PATCH v1 3/3] x86: call instrumentation hooks from copy_mc.c

From: Tetsuo Handa
Date: Wed Mar 20 2024 - 06:40:03 EST


On 2024/03/20 18:29, Alexander Potapenko wrote:
> But for KASAN/KCSAN we can afford more aggressive checks.
> First, if we postpone them after the actual memory accesses happen,
> the kernel may panic on the invalid access without a decent error
> report.
> Second, even if in a particular case only `len-ret` bytes were copied,
> the caller probably expected both `src` and `dst` to have `len`
> addressable bytes.
> Checking for the whole length in this case is more likely to detect a
> real error than produce a false positive.

So KASAN/KCSAN care about whether the requested address range is accessible,
but do not care about whether the requested address range was actually accessed?

By the way, we have the same problem for copy_page() and I was thinking about
https://lkml.kernel.org/r/1a817eb5-7cd8-44d6-b409-b3bc3f377cb9@xxxxxxxxxxxxxxxxxxx .
But given that instrument_memcpy_{before,after} are being added,
how do we want to use them for copy_page()?
Should we rename the assembly version of copy_page() so that we don't need
tricky wrapping like below?

diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index cc6b8e087192..b9b794656880 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -9,6 +9,7 @@
#include <asm/alternative.h>

#include <linux/kmsan-checks.h>
+#include <linux/instrumented.h>

/* duplicated to the one in bootmem.h */
extern unsigned long max_pfn;
@@ -59,6 +60,13 @@ static inline void clear_page(void *page)
}

void copy_page(void *to, void *from);
+#define copy_page(to, from) do { \
+ void *_to = (to); \
+ void *_from = (from); \
+ instrument_memcpy_before(_to, _from, PAGE_SIZE); \
+ copy_page(_to, _from); \
+ instrument_memcpy_after(_to, _from, PAGE_SIZE, 0); \
+} while (0)

#ifdef CONFIG_X86_5LEVEL
/*