Re: linux/master crashes on boot with KASAN=y
From: Andrey Ryabinin
Date: Tue Dec 26 2017 - 06:55:26 EST
On 12/24/2017 04:48 AM, Andy Lutomirski wrote:
> On Sat, Dec 23, 2017 at 4:41 AM, Andrey Ryabinin
> <aryabinin@xxxxxxxxxxxxx> wrote:
>> On 12/23/2017 11:01 AM, Jakub Kicinski wrote:
>>> Hi!
>>>
>>> I bisected a crash on boot to this:
>>>
>>> commit 21506525fb8ddb0342f2a2370812d47f6a1f3833 (HEAD, refs/bisect/bad)
>>> Author: Andy Lutomirski <luto@xxxxxxxxxx>
>>> Date: Mon Dec 4 15:07:16 2017 +0100
>>>
>>> x86/kasan/64: Teach KASAN about the cpu_entry_area
>>
>>
>> Thanks.
>> There is nothing wrong with this patch; it just uncovered an older bug.
>> The actual problem comes from f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size"),
>> which made kasan_mem_to_shadow(MODULES_END) potentially unaligned to a page boundary.
>> When kasan_populate_zero_shadow() is fed an unaligned address, it doesn't do the right thing.
>>
>> Could you tell me if the fix below works for you?
>>
>> ---
>> arch/x86/include/asm/kasan.h | 8 ++++++++
>> arch/x86/include/asm/pgtable_64_types.h | 4 +++-
>> 2 files changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
>> index b577dd0916aa..0c580e4b2ccc 100644
>> --- a/arch/x86/include/asm/kasan.h
>> +++ b/arch/x86/include/asm/kasan.h
>> @@ -5,6 +5,14 @@
>> #include <linux/const.h>
>> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>>
>> +#ifndef KASAN_SHADOW_SCALE_SHIFT
>> +# ifdef CONFIG_KASAN
>> +# define KASAN_SHADOW_SCALE_SHIFT 3
>> +# else
>> +# define KASAN_SHADOW_SCALE_SHIFT 0
>> +# endif
>> +#endif
>> +
>> /*
>> * Compiler uses shadow offset assuming that addresses start
>> * from 0. Kernel addresses don't start from 0, so shadow
>> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
>> index 6d5f45dcd4a1..d34a90d6c374 100644
>> --- a/arch/x86/include/asm/pgtable_64_types.h
>> +++ b/arch/x86/include/asm/pgtable_64_types.h
>> @@ -6,6 +6,7 @@
>>
>> #ifndef __ASSEMBLY__
>> #include <linux/types.h>
>> +#include <asm/kasan.h>
>> #include <asm/kaslr.h>
>>
>> /*
>> @@ -96,7 +97,8 @@ typedef struct { pteval_t pte; } pte_t;
>> #define VMALLOC_END (VMALLOC_START + _AC((VMALLOC_SIZE_TB << 40) - 1, UL))
>> #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
>> /* The module sections ends with the start of the fixmap */
>> -#define MODULES_END __fix_to_virt(__end_of_fixed_addresses + 1)
>> +#define MODULES_END (__fix_to_virt(__end_of_fixed_addresses + 1) & \
>> + ~((PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) - 1))
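
To spell out why the alignment has to be PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT:
kasan_mem_to_shadow() is just a shift plus an offset, roughly (quoting
include/linux/kasan.h from memory):

/* include/linux/kasan.h, approximately */
static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}

With KASAN_SHADOW_SCALE_SHIFT == 3 and 4K pages, one shadow page covers
8 * PAGE_SIZE == 32K of address space, so the shadow of MODULES_END is
page-aligned only if MODULES_END itself is 32K-aligned. That's what the
mask above enforces.
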
>
> Could this be #define MODULES_END KASAN_ROUND_DOWN(__fix_to_virt(...)) instead?
>
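That would just wrap the same mask, e.g. (untested sketch; there is no
KASAN_ROUND_DOWN macro yet, it would have to be added somewhere like
arch/x86/include/asm/kasan.h):

/* Hypothetical helper, same mask as in the patch above: */
#define KASAN_ROUND_DOWN(x) \
	((x) & ~((PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) - 1))
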
Actually, we could simply go back to a fixed MODULES_END, as it was before f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size").
AFAICS, the whole point of f06bdd4001c2 was to move MODULES_END down when NR_CPUS is big.
But since 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap"), the cpu_entry_area is not in the fixmap anymore,
so it should be fine to use a fixed MODULES_END again.
The only concern I have is 4.14-stable, where 21506525f ("x86/kasan/64: Teach KASAN about the cpu_entry_area") was backported.
Is 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap") also a candidate for stable?
If so, a fixed MODULES_END seems like the better choice.
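
I.e., in pgtable_64_types.h, something along these lines instead of the
__fix_to_virt()-based definition (untested; the value is the pre-f06bdd4001c2
one, from memory):

#define MODULES_END	_AC(0xffffffffff000000, UL)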