Re: [PATCH v2 00/11] Fix up SRSO stuff
From: Josh Poimboeuf
Date: Mon Aug 14 2023 - 15:52:33 EST
On Mon, Aug 14, 2023 at 06:44:47PM +0200, Borislav Petkov wrote:
> On Mon, Aug 14, 2023 at 01:44:26PM +0200, Peter Zijlstra wrote:
> > The one open technical issue I have with the mitigation is the alignment of
> > the RET inside srso_safe_ret(). The details given for retbleed stated that RET
> > should be on a 64-byte boundary, which is not the case here.
>
> I have written this in the hope of making it clearer:
>
> /*
> * Some generic notes on the untraining sequences:
> *
> * They are interchangeable when it comes to flushing potentially wrong
> * RET predictions from the BTB.
> *
> * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
> * Retbleed sequence because the return sequence done there
> * (srso_safe_ret()) is longer and the return sequence must fully nest
> * (end before) the untraining sequence. Therefore, the untraining
> * sequence must fully overlap the return sequence.
> *
> * Regarding alignment - the instructions which need to be untrained
> * must all start at a cacheline boundary for Zen1/2 generations. That
> * is, the ret in zen_untrain_ret() and srso_safe_ret() in
> * srso_untrain_ret() must both be placed at the beginning of
> * a cacheline.
> */
It's a good comment, but the RET in srso_safe_ret() is still misaligned.
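If I'm counting bytes right: on 64-bit, "lea 8(%rsp), %rsp" (what the
%_ASM_SP form expands to) encodes in 5 bytes, so even with
srso_safe_ret() itself starting at a 64-byte boundary (which is what
the current .skip expression arranges), the RET lands 5 bytes into the
cacheline:

	48 8d 64 24 08		lea    0x8(%rsp),%rsp	<- srso_safe_ret, cacheline start
	c3			ret			<- 5 bytes in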
Don't we need something like so?
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 9bc19deacad1..373ac128a30a 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -251,13 +251,14 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  * thus a "safe" one to use.
  */
 	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+	.skip 64 - (.Lsrso_ret - srso_untrain_ret), 0xcc
 SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8

 SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
 	lea 8(%_ASM_SP), %_ASM_SP
+.Lsrso_ret:
 	ret
 	int3
 	int3
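
To spell out the padding arithmetic: .align 64 puts the location
counter on a cacheline boundary, and the .skip then pads with int3
(0xcc) so that the label named in its expression (rather than the
start of the padded block) lands exactly on the next 64-byte boundary.
A minimal standalone sketch of the same trick (labels are made up for
illustration, this is not the kernel code):

	.text
	.align 64
	/* pad so that 'inner', not 'outer', hits the 64-byte boundary */
	.skip 64 - (inner - outer), 0xcc
outer:
	nop			/* bytes that precede the aligned insn */
	nop
inner:
	ret			/* starts exactly at a cacheline boundary */

With the hunk above, .Lsrso_ret plays the role of 'inner', so it's the
RET itself that ends up cacheline-aligned.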