Re: 1808d65b55 ("asm-generic/tlb: Remove arch_tlb*_mmu()"): BUG: KASAN: stack-out-of-bounds in __change_page_attr_set_clr

From: Linus Torvalds
Date: Fri Apr 12 2019 - 11:33:01 EST


On Fri, Apr 12, 2019 at 3:56 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -728,7 +728,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> {
> int cpu;
>
> - struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
> + struct flush_tlb_info info = {
> .mm = mm,
> .stride_shift = stride_shift,
> .freed_tables = freed_tables,
>

Ack.

We should never have stack alignment bigger than 16 bytes. And
preferably not even that. Trying to align the stack at a cacheline
boundary is wrong - if you *really* need things to be that aligned, do
something else (regular kmalloc, percpu temp area, static allocation -
whatever).

Linus