Re: [PATCH] kcsan: fix alignment_fault when reading unaligned instrumented memory

From: Marco Elver
Date: Wed Mar 08 2023 - 02:54:38 EST


On Wed, 8 Mar 2023 at 03:21, 'Haibo Li' via kasan-dev
<kasan-dev@xxxxxxxxxxxxxxxx> wrote:
>
> After enabling KCSAN on arm64 + linux-5.15, it reports an alignment
> fault when accessing an unaligned address.

Is this KCSAN's fault or the fault of the code being instrumented?
I.e. if you disable KCSAN, is there still an alignment fault reported?

Because as-is, I don't understand how the instrumentation alone would
cause an alignment fault: every instrumented access corresponds to a
real memory access at the same address and size, which suggests that
the real access would also be unaligned. Note that the compiler
inserts the instrumentation _before_ the actual access, so if there is
a problem, it manifests inside KCSAN first.
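
For reference, the generated code looks roughly like this (a sketch of
what the compiler emits under KCSAN, not actual compiler output;
example_load() is just a made-up name):

  /* Sketch only; kernel context assumed. */
  #include <linux/types.h>

  void __tsan_read2(void *ptr);        /* KCSAN's instrumentation hook */

  static u16 example_load(const u16 *p)
  {
          __tsan_read2((void *)p);     /* the check runs first, on the same pointer */
          return *p;                   /* then the real 2-byte access */
  }

The check and the real access use the same pointer and size, which is
why an unaligned pointer in the check implies the real access would be
unaligned as well.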

Can you provide more information about what's going on (type of
access, size of access, etc.)?

> Here is the oops log:
> "
> Trying to unpack rootfs image as initramfs...
> Unable to handle kernel paging request at virtual address
> ffffff802a0d8d71
> Mem abort info:
> ESR = 0x96000021
> EC = 0x25: DABT (current EL), IL = 32 bits
> SET = 0, FnV = 0
> EA = 0, S1PTW = 0
> FSC = 0x21: alignment fault
> Data abort info:
> ISV = 0, ISS = 0x00000021
> CM = 0, WnR = 0
> swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
> [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
> pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
> Internal error: Oops: 96000021 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
> 5.15.78-android13-8-g63561175bbda-dirty #1
> ...
> pc : kcsan_setup_watchpoint+0x26c/0x6bc
> lr : kcsan_setup_watchpoint+0x88/0x6bc
> sp : ffffffc00ab4b7f0
> x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
> x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
> x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
> x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
> x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
> x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
> x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
> x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
> x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
> x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> Call trace:
> kcsan_setup_watchpoint+0x26c/0x6bc
> __tsan_read2+0x1f0/0x234
> inflate_fast+0x498/0x750

^^ is it possible that an access in "inflate_fast" is unaligned?
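
If so, the problematic pattern would be something like the following
(a hypothetical sketch to show the shape of such an access, not the
actual zlib code, which lives under lib/zlib_inflate/):

  /* Hypothetical sketch: a 2-byte load through a byte pointer that may
   * point at an odd address -- exactly the kind of access that reaches
   * __tsan_read2() with an unaligned pointer. */
  static unsigned short peek16(const unsigned char *in)
  {
          return *(const unsigned short *)in;
  }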

> zlib_inflate+0x1304/0x2384
> __gunzip+0x3a0/0x45c
> gunzip+0x20/0x30
> unpack_to_rootfs+0x2a8/0x3fc
> do_populate_rootfs+0xe8/0x11c
> async_run_entry_fn+0x58/0x1bc
> process_one_work+0x3ec/0x738
> worker_thread+0x4c4/0x838
> kthread+0x20c/0x258
> ret_from_fork+0x10/0x20
> Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
> ---[ end trace 613a943cb0a572b6 ]---
> "
>
> After checking linux 6.3-rc1 on QEMU arm64, it is still possible to
> read an unaligned address in read_instrumented_memory() (QEMU cannot
> emulate the alignment fault).
>
> To fix the alignment fault and read the value of the instrumented
> memory more effectively, bypass unaligned accesses in
> read_instrumented_memory().
>
> Signed-off-by: Haibo Li <haibo.li@xxxxxxxxxxxx>
> ---
> kernel/kcsan/core.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> index 54d077e1a2dc..88e75d7d85d2 100644
> --- a/kernel/kcsan/core.c
> +++ b/kernel/kcsan/core.c
> @@ -337,6 +337,11 @@ static void delay_access(int type)
> */
> static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
> {
> + bool aligned_read = (size == 1) || IS_ALIGNED((unsigned long)ptr, size);

The (size == 1) check is redundant because IS_ALIGNED(..., 1) always
returns true.

And this will also penalize other architectures that can do unaligned
accesses, so this check probably wants to be guarded by
"IS_ENABLED(CONFIG_ARM64)" or something.

> + if (!aligned_read)
> + return 0;
> +
> switch (size) {
> case 1: return READ_ONCE(*(const u8 *)ptr);
> case 2: return READ_ONCE(*(const u16 *)ptr);