Re: [RFC/PATCH v2 02/10] x86_64: add KASan support
From: Andrey Ryabinin
Date: Wed Sep 10 2014 - 16:30:35 EST
2014-09-10 19:46 GMT+04:00 Dave Hansen <dave.hansen@xxxxxxxxx>:
> Overall, the approach here looks pretty sane. As you noted, it would be
> nice to keep PAGE_OFFSET in one place, but it's not a deal breaker for
> me. The use of the vmemmap code looks to be a nice fit.
>
> Few nits below.
>
> On 09/10/2014 07:31 AM, Andrey Ryabinin wrote:
>> 16TB of virtual address space is used for shadow memory.
>> It's located in range [0xffff800000000000 - 0xffff900000000000]
>> Therefore PAGE_OFFSET has to be changed from 0xffff880000000000
>> to 0xffff900000000000.
> ...
>> It would be nice to not have a different PAGE_OFFSET with and without CONFIG_KASAN.
>> We have a big enough hole between vmemmap and the esp fixup stacks.
>> So how about moving the direct mapping, vmalloc and vmemmap 8TB up without
>> hiding it under CONFIG_KASAN?
>
> Is there a reason this has to be _below_ the linear map? Couldn't we
> just carve some space out of the vmalloc() area for the kasan area?
>
Yes, there is a reason for this. For inline instrumentation we need to
catch accesses to userspace without any additional check.
That means we would need a shadow of 1 << 61 bytes, and we don't have
that many addresses available. However, we could use the
hole between userspace and kernelspace for that. For any address in
[0 - 0xffff800000000000], the shadow address will fall
in this hole, so checking the shadow value will produce a general protection
fault (GPF). We could even try to handle the GPF in a special way
and print a more user-friendly report (under CONFIG_KASAN, of course).
But now I realize that even if we put the shadow in vmalloc, the shadow
addresses corresponding to userspace addresses
will still fall between userspace and kernelspace, so we would get a GPF there too.
The only problem I see with such an approach is this: suppose that,
because of some bug, the kernel tries to access
memory slightly below 0xffff800000000000. In that case kasan will
check a shadow address which is in fact not a shadow byte at all.
It's not a big deal though; the kernel will crash anyway. It only means
that debugging such problems could be a little more complex
than without kasan.
>
>> arch/x86/Kconfig | 1 +
>> arch/x86/boot/Makefile | 2 ++
>> arch/x86/boot/compressed/Makefile | 2 ++
>> arch/x86/include/asm/kasan.h | 20 ++++++++++++
>> arch/x86/include/asm/page_64_types.h | 4 +++
>> arch/x86/include/asm/pgtable.h | 7 ++++-
>> arch/x86/kernel/Makefile | 2 ++
>> arch/x86/kernel/dumpstack.c | 5 ++-
>> arch/x86/kernel/head64.c | 6 ++++
>> arch/x86/kernel/head_64.S | 16 ++++++++++
>> arch/x86/mm/Makefile | 3 ++
>> arch/x86/mm/init.c | 3 ++
>> arch/x86/mm/kasan_init_64.c | 59 ++++++++++++++++++++++++++++++++++++
>> arch/x86/realmode/Makefile | 2 +-
>> arch/x86/realmode/rm/Makefile | 1 +
>> arch/x86/vdso/Makefile | 1 +
>> include/linux/kasan.h | 3 ++
>> lib/Kconfig.kasan | 1 +
>> 18 files changed, 135 insertions(+), 3 deletions(-)
>> create mode 100644 arch/x86/include/asm/kasan.h
>> create mode 100644 arch/x86/mm/kasan_init_64.c
>
> This probably deserves an update of Documentation/x86/x86_64/mm.txt, too.
>
Sure. I didn't do it yet, in case the memory layout in this patch is not final.
>> +void __init kasan_map_shadow(void)
>> +{
>> + int i;
>> +
>> + memcpy(early_level4_pgt, init_level4_pgt, 4096);
>> + load_cr3(early_level4_pgt);
>> +
>> + clear_zero_shadow_mapping(kasan_mem_to_shadow(PAGE_OFFSET),
>> + kasan_mem_to_shadow(0xffffc80000000000UL));
>
> This 0xffffc80000000000UL could be PAGE_OFFSET+MAXMEM.
>
>
>
--
Best regards,
Andrey Ryabinin