[tip: x86/urgent] x86/bugs: Fix RSB clearing in indirect_branch_prediction_barrier()
From: tip-bot2 for Josh Poimboeuf
Date: Wed Apr 09 2025 - 07:45:51 EST
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: b1b19cfcf4656c75088dc06b7499f493e0dec3e5
Gitweb: https://git.kernel.org/tip/b1b19cfcf4656c75088dc06b7499f493e0dec3e5
Author: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
AuthorDate: Tue, 08 Apr 2025 14:47:32 -07:00
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Wed, 09 Apr 2025 12:41:30 +02:00
x86/bugs: Fix RSB clearing in indirect_branch_prediction_barrier()
IBPB is expected to clear the RSB. However, if X86_BUG_IBPB_NO_RET is
set, that doesn't happen. Make indirect_branch_prediction_barrier()
take that into account by calling write_ibpb(), which clears the RSB
when X86_BUG_IBPB_NO_RET is set:
	/* Make sure IBPB clears return stack preductions too. */
	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
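
For reference, after the previous patch write_ibpb() looks roughly like
this (a sketch reconstructed from the lines quoted in this message; the
exact entry.S body may differ in details such as annotations):

	SYM_FUNC_START(write_ibpb)
		ANNOTATE_NOENDBR
		/* MSR_IA32_PRED_CMD <- x86_pred_cmd (PRED_CMD_IBPB or PRED_CMD_SBPB) */
		movl	$MSR_IA32_PRED_CMD, %ecx
		movl	_ASM_RIP(x86_pred_cmd), %eax
		xorl	%edx, %edx
		wrmsr

		/* Make sure IBPB clears return stack preductions too. */
		FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
		RET
	SYM_FUNC_END(write_ibpb)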
Note that, as of the previous patch, write_ibpb() also reads
'x86_pred_cmd' in order to use SBPB when applicable:
	movl	_ASM_RIP(x86_pred_cmd), %eax
Therefore the existing SBPB behavior of
indirect_branch_prediction_barrier() is not lost.
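
As background (not part of this patch): IBPB vs. SBPB comes down to
which command bit gets written to the prediction-command MSR. A minimal
sketch of the definitions involved, assuming the usual msr-index.h
values:

	#define MSR_IA32_PRED_CMD	0x00000049	/* Prediction Command MSR */
	#define PRED_CMD_IBPB		BIT(0)		/* Indirect Branch Prediction Barrier */
	#define PRED_CMD_SBPB		BIT(7)		/* Selective Branch Prediction Barrier */

'x86_pred_cmd' defaults to PRED_CMD_IBPB and is switched to the cheaper
PRED_CMD_SBPB by the SRSO mitigation code when the selective barrier is
sufficient.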
Fixes: 50e4b3b94090 ("x86/entry: Have entry_ibpb() invalidate return predictions")
Signed-off-by: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Reviewed-by: Nikolay Borisov <nik.borisov@xxxxxxxx>
Link: https://lore.kernel.org/r/bba68888c511743d4cd65564d1fc41438907523f.1744148254.git.jpoimboe@xxxxxxxxxx
---
arch/x86/include/asm/nospec-branch.h | 6 +++---
arch/x86/kernel/cpu/bugs.c | 1 -
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 591d1db..5c43f14 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -514,11 +514,11 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
: "memory");
}
-extern u64 x86_pred_cmd;
-
static inline void indirect_branch_prediction_barrier(void)
{
- alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_IBPB);
+ asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
+ : ASM_CALL_CONSTRAINT
+ :: "rax", "rcx", "rdx", "memory");
}
/* The Intel SPEC CTRL MSR base value cache */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 608bbe6..9926509 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -59,7 +59,6 @@ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
-EXPORT_SYMBOL_GPL(x86_pred_cmd);
static u64 __ro_after_init x86_arch_cap_msr;
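
A usage note (illustration only, not part of the patch): callers of
indirect_branch_prediction_barrier() are unchanged by this fix. A
hypothetical minimal call site, loosely modeled on the context-switch
mitigation in arch/x86/mm/tlb.c; the function and parameter names here
are made up for illustration:

	static void example_switch_mm_barrier(bool prev_mm_untrusted)
	{
		/*
		 * When switching away from a potentially malicious task,
		 * flush branch predictor state.  With this fix, the
		 * barrier also clears the RSB on X86_BUG_IBPB_NO_RET CPUs.
		 */
		if (prev_mm_untrusted)
			indirect_branch_prediction_barrier();
	}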