Re: [tip: x86/bugs] x86/retpoline: Ensure default return thunk isn't used at runtime
From: Josh Poimboeuf
Date: Thu Oct 19 2023 - 11:20:59 EST
On Thu, Oct 19, 2023 at 04:39:51PM +0200, Borislav Petkov wrote:
> On Thu, Oct 19, 2023 at 02:21:40PM +0000, Kaplan, David wrote:
> > The return thunk is used for all functions though, including assembly
> > coded functions which may use non-standard calling conventions and
> > aren't visible to gcc. I think the only safe thing would be to
> > preserve all GPRs across the call to check_thunks. Something like
> > PUSH_REGS/call check_thunks/POP_REGS.
>
> That call nop will be inside the return thunk. I.e., something like
> this:
>
> SYM_CODE_START(__x86_return_thunk)
> 	UNWIND_HINT_FUNC
> 	ANNOTATE_NOENDBR
> 	ANNOTATE_UNRET_SAFE
> 	ALTERNATIVE CALL nop, check_thunks, X86_FEATURE_ALWAYS
> 	ret
> 	int3
> SYM_CODE_END(__x86_return_thunk)
> EXPORT_SYMBOL(__x86_return_thunk)
>
> I suspect that gcc doesn't know that there is a function call in the asm
> there, which is also what you hint at - I need to ask a compiler guy.
>
> But yeah, if it doesn't, then we'll need to push/pop regs as you
> suggest.

GCC doesn't read asm. Even if it did, that wouldn't fix things for
callers of custom-ABI return-thunk-using functions.
The below seems to work.
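
To David's point about preserving GPRs: warn_thunk_thunk below goes through
the existing THUNK macro in thunk_64.S, which pushes and pops the
caller-clobbered registers around the call, so __warn_thunk() can be reached
even from custom-ABI code without clobbering anything. Roughly, as a
from-memory sketch of the expansion (the exact register list is whatever the
THUNK macro actually does):

SYM_FUNC_START(warn_thunk_thunk)
	pushq	%rbp
	movq	%rsp, %rbp

	/* save the caller-clobbered GPRs */
	pushq	%rdi
	pushq	%rsi
	pushq	%rdx
	pushq	%rcx
	pushq	%rax
	pushq	%r8
	pushq	%r9
	pushq	%r10
	pushq	%r11

	call	__warn_thunk

	/* restore them in reverse order */
	popq	%r11
	popq	%r10
	popq	%r9
	popq	%r8
	popq	%rax
	popq	%rcx
	popq	%rdx
	popq	%rsi
	popq	%rdi
	popq	%rbp
	RET
SYM_FUNC_END(warn_thunk_thunk)
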
diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
index 27b5da2111ac..54c043e010f9 100644
--- a/arch/x86/entry/thunk_64.S
+++ b/arch/x86/entry/thunk_64.S
@@ -46,3 +46,5 @@ THUNK preempt_schedule_thunk, preempt_schedule
THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
EXPORT_SYMBOL(preempt_schedule_thunk)
EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
+
+THUNK warn_thunk_thunk, __warn_thunk
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 14cd3cd5f85a..315e3f9410b2 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -357,6 +357,8 @@ extern void entry_ibpb(void);

extern void (*x86_return_thunk)(void);

+extern void __warn_thunk(void);
+
#ifdef CONFIG_CALL_DEPTH_TRACKING
extern void call_depth_return_thunk(void);

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bb0ab8466b91..7d89fe7a2e69 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2849,3 +2849,8 @@ ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *bu
return cpu_show_common(dev, attr, buf, X86_BUG_GDS);
}
#endif
+
+void __warn_thunk(void)
+{
+ WARN_ONCE(1, "unpatched return thunk");
+}
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index fe05c139db48..389662b88e19 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -360,13 +360,14 @@ SYM_FUNC_END(call_depth_return_thunk)
* 'JMP __x86_return_thunk' sites are changed to something else by
* apply_returns().
*
- * This thunk is turned into a ud2 to ensure it is never used at runtime.
- * Alternative instructions are applied after apply_returns().
+ * The RET is replaced with a WARN_ONCE() to ensure it is never used at
+ * runtime. Alternative instructions are applied after apply_returns().
*/
SYM_CODE_START(__x86_return_thunk)
UNWIND_HINT_FUNC
ANNOTATE_NOENDBR
- ALTERNATIVE __stringify(ANNOTATE_UNRET_SAFE;ret),"ud2", X86_FEATURE_ALWAYS
+ ALTERNATIVE __stringify(ANNOTATE_UNRET_SAFE; ret), \
+ "jmp warn_thunk_thunk", X86_FEATURE_ALWAYS
int3
SYM_CODE_END(__x86_return_thunk)
EXPORT_SYMBOL(__x86_return_thunk)
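
For clarity, my reading of the ordering (not part of the patch):
apply_returns() rewrites all the 'JMP __x86_return_thunk' sites first, and
alternatives are applied only afterwards, so the default thunk keeps working
as a plain RET for as long as it can still be reached. Once
apply_alternatives() runs, X86_FEATURE_ALWAYS unconditionally patches the
body, and anything still returning through the default thunk gets a loud
warning instead of going unnoticed. Roughly:

	/* before apply_alternatives(): still a functional return */
	__x86_return_thunk:
		ret			/* ANNOTATE_UNRET_SAFE */
		int3

	/* after apply_alternatives() (X86_FEATURE_ALWAYS always patches): */
	__x86_return_thunk:
		jmp warn_thunk_thunk	/* WARN_ONCE("unpatched return thunk"), then return */
		int3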