[PATCH 4.4 77/86] x86: Fix early boot crash on gcc-10, third try
From: Greg Kroah-Hartman
Date: Mon May 18 2020 - 13:41:28 EST
From: Borislav Petkov <bp@xxxxxxx>
commit a9a3ed1eff3601b63aea4fb462d8b3b92c7c1e7e upstream.
... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.
The whole story started with Sergei reporting a boot crash with a kernel
built with gcc-10:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
Call Trace:
dump_stack
panic
? start_secondary
__stack_chk_fail
start_secondary
secondary_startup_64
---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.
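Purely for illustration, and not part of the patch: the mismatch can be
reproduced in user space with the sketch below. It assumes an x86-64
Linux/glibc target, where the reference canary sits at %fs:0x28, and all
file and function names are made up. The failing check here is main()'s
ordinary epilogue check rather than a pre-tail-call check, but the
mismatch between the canary saved in the prologue and the rewritten
reference value is the same one start_secondary() hits.

  /* canary-mismatch.c: build and run with
   *   gcc -O2 -fstack-protector-strong canary-mismatch.c && ./a.out
   * Expected to abort with "stack smashing detected": main() saved the
   * old reference canary in its prologue, but the epilogue check compares
   * against the rewritten value.
   */
  #include <stdio.h>

  /* Stand-in for boot_init_stack_canary(): overwrite the per-thread
   * reference canary which the epilogue check reads back. */
  static __attribute__((noinline)) void change_reference_canary(void)
  {
          unsigned long v = 0x1122334455667788UL;

          __asm__ volatile("movq %0, %%fs:0x28" : : "r"(v));
  }

  static __attribute__((noinline)) void last_call(void)
  {
          puts("still running");
  }

  int main(void)
  {
          /* the local array makes -fstack-protector-strong instrument main() */
          char buf[64];

          snprintf(buf, sizeof(buf), "hello");
          puts(buf);

          change_reference_canary();
          last_call();            /* the last call before the canary check */
          return 0;               /* the check fails here */
  }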
To fix that, the initial attempt was to mark the one function which
generates the stack canary with:
__attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
however, using the optimize attribute doesn't work cumulatively
as the attribute does not add to but rather replaces previously
supplied optimization options - roughly all -fxxx options.
The key one among them is -fno-omit-frame-pointer; losing it leads to a
missing frame pointer, which the kernel needs.
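Again only as an illustration, and with made-up file and function names:
compiling the snippet below with the same kind of flags and inspecting
the assembly shows the effect. On the gcc versions discussed here, the
attributed function usually comes out without the frame-pointer setup as
well as without the canary, because the attribute replaces the
command-line -f options rather than adding to them.

  /* optimize-attr.c: inspect with
   *   gcc -O2 -fno-omit-frame-pointer -fstack-protector-strong -S optimize-attr.c
   * In optimize-attr.s, plain() keeps the push %rbp/mov %rsp,%rbp frame
   * setup and the canary; attributed() tends to lose both.
   */
  void consume(char *buf);

  void plain(void)
  {
          char buf[64];

          consume(buf);
  }

  __attribute__((optimize("-fno-stack-protector")))
  void attributed(void)
  {
          char buf[64];

          consume(buf);
  }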
The next attempt to prevent compilers from tail-call optimizing
the last function call, cpu_startup_entry(), shy of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").
That solution was short and sweet and, reportedly, is supported by both
compilers, but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate it, which leads us
to the third attempt: having an actual memory barrier there which the
compiler cannot ignore or move around.
That should hold for a long time, but hey, we said that about the other
two solutions too, so...
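As a final illustration, and again not part of the patch: the three
shapes can be compared side by side with the sketch below (made-up
names, and a plain compiler barrier standing in for the kernel's mb()).
Looking at the -O2 assembly shows whether the call to entry() is emitted
as a tail call (jmp) or as a real call followed by the barrier.

  /* tail-call.c: inspect with   gcc -O2 -S tail-call.c   */
  void entry(void);

  void plain_tail(void)
  {
          entry();        /* typically emitted as "jmp entry" */
  }

  void empty_asm(void)
  {
          entry();        /* attempt two: kept as a real call today... */
          asm("");        /* ...but the worry above is that a future (LTO?)
                           * pass might do away with this */
  }

  void with_barrier(void)
  {
          entry();        /* attempt three: the barrier below must stay put,
                           * so this cannot become a tail call; the kernel's
                           * mb() is a real memory barrier, the "memory"
                           * clobber here is only a stand-in for the sketch */
          asm volatile("" : : : "memory");
  }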
Reported-by: Sergei Trofimovich <slyfox@xxxxxxxxxx>
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
Tested-by: Kalle Valo <kvalo@xxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@xxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/x86/include/asm/stackprotector.h | 7 ++++++-
arch/x86/kernel/smpboot.c | 8 ++++++++
arch/x86/xen/smp.c | 1 +
include/linux/compiler.h | 7 +++++++
init/main.c | 2 ++
5 files changed, 24 insertions(+), 1 deletion(-)
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -54,8 +54,13 @@
/*
* Initialize the stackprotector canary value.
*
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
* and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
*/
static __always_inline void boot_init_stack_canary(void)
{
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -243,6 +243,14 @@ static void notrace start_secondary(void
wmb();
cpu_startup_entry(CPUHP_ONLINE);
+
+ /*
+ * Prevent tail call to cpu_startup_entry() because the stack protector
+ * guard has been changed a couple of function calls up, in
+ * boot_init_stack_canary() and must not be checked before tail calling
+ * another function.
+ */
+ prevent_tail_call_optimization();
}
void __init smp_store_boot_cpu_info(void)
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -116,6 +116,7 @@ asmlinkage __visible void cpu_bringup_an
#endif
cpu_bringup();
cpu_startup_entry(CPUHP_ONLINE);
+ prevent_tail_call_optimization();
}
static void xen_smp_intr_free(unsigned int cpu)
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -556,4 +556,11 @@ static __always_inline void __write_once
# define __kprobes
# define nokprobe_inline inline
#endif
+
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
#endif /* __LINUX_COMPILER_H */
--- a/init/main.c
+++ b/init/main.c
@@ -683,6 +683,8 @@ asmlinkage __visible void __init start_k
/* Do the rest non-__init'ed, we're now alive */
rest_init();
+
+ prevent_tail_call_optimization();
}
/* Call all constructor functions linked into the kernel. */