[PATCH 4.19 97/99] x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info"
From: Greg Kroah-Hartman
Date: Mon May 06 2019 - 10:59:36 EST
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
commit 780e0106d468a2962b16b52fdf42898f2639e0a0 upstream.
Revert the following commit:
515ab7c41306: ("x86/mm: Align TLB invalidation info")
I found out (the hard way) that with some .config options (notably
L1_CACHE_SHIFT=7) and compiler combinations, this on-stack alignment
leads to 320 bytes of stack usage, which then triggers a KASAN stack
warning elsewhere.
Using 320 bytes of stack space for a 40-byte structure is ludicrous and
clearly not right.
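For illustration (not part of the patch), a minimal standalone C sketch
of the effect being reverted. The struct below is a rough stand-in for
the kernel's flush_tlb_info, not its real layout; with SMP_CACHE_BYTES
taken as 128 (L1_CACHE_SHIFT=7), the aligned() attribute forces the
compiler to pad and realign the on-stack object far beyond its ~40-byte
payload:

	/* sketch.c -- hypothetical demonstration, not kernel code */
	#include <stdio.h>

	#define SMP_CACHE_BYTES 128	/* i.e. L1_CACHE_SHIFT=7 */

	/* rough stand-in for struct flush_tlb_info (~40 bytes on x86-64) */
	struct info {
		void *mm;
		unsigned long start, end;
		unsigned long long new_tlb_gen;
		unsigned int stride_shift;
	};

	int main(void)
	{
		struct info plain = { 0 };
		struct info over_aligned
			__attribute__((aligned(SMP_CACHE_BYTES))) = { 0 };

		printf("sizeof  : %zu\n", sizeof(plain));		/* 40 */
		printf("alignof : %zu\n", __alignof__(over_aligned));	/* 128 */
		return 0;
	}

Building with gcc -fstack-usage and comparing the generated .su file
with and without the attribute is one way to observe the frame growth
the changelog describes; the exact figure (320 bytes in the report)
depends on the compiler version and options.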
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Acked-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Acked-by: Nadav Amit <namit@xxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Fixes: 515ab7c41306 ("x86/mm: Align TLB invalidation info")
Link: http://lkml.kernel.org/r/20190416080335.GM7905@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[ Minor changelog edits. ]
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/x86/mm/tlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -694,7 +694,7 @@ void flush_tlb_mm_range(struct mm_struct
{
int cpu;
- struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
+ struct flush_tlb_info info = {
.mm = mm,
};