Re: [tip:x86/pti] x86/retpoline: Fill return stack buffer on vmexit
From: David Woodhouse
Date: Thu Jan 25 2018 - 07:21:21 EST
On Thu, 2018-01-25 at 13:07 +0100, Borislav Petkov wrote:
> On Fri, Jan 12, 2018 at 03:37:49AM -0800, tip-bot for David Woodhouse wrote:
> >
> > +/*
> > + * On VMEXIT we must ensure that no RSB predictions learned in the guest
> > + * can be followed in the host, by overwriting the RSB completely. Both
> > + * retpoline and IBRS mitigations for Spectre v2 need this; only on future
> > + * CPUs with IBRS_ATT *might* it be avoided.
> > + */
> > +static inline void vmexit_fill_RSB(void)
> > +{
> > +#ifdef CONFIG_RETPOLINE
> > + unsigned long loops = RSB_CLEAR_LOOPS / 2;
> > +
> > + asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
> > +	ALTERNATIVE("jmp 910f",
> > +		    __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
> > +		    X86_FEATURE_RETPOLINE)
> > +	"910:"
> > +	: "=&r" (loops), ASM_CALL_CONSTRAINT
> > +	: "r" (loops) : "memory" );
> > +#endif
> > +}
> Btw,
>
> this thing is a real pain to backport to older kernels without breaking
> kABI (I know, I know, the latter sucks, but it is what it is)
I haven't had lunch yet, so I don't feel queasy and I'm vaguely
interested... *why* does it break kABI?
> so I'm thinking we could simplify that thing regardless.
>
> As it is, 41 bytes get replaced currently:
>
> [    0.437005] apply_alternatives: feat: 7*32+12, old: (        (ptrval), len: 43), repl: (        (ptrval), len: 43), pad: 41
> [    0.438885]         (ptrval): old_insn: eb 29 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
> [    0.440001]         (ptrval): rpl_insn: 48 c7 c0 10 00 00 00 e8 07 00 00 00 f3 90 0f ae e8 eb f9 e8 07 00 00 00 f3 90 0f ae e8 eb f9 48 ff c8 75 e3 48 81 c4 00 01 00 00
> [    0.444002]         (ptrval): final_insn: 48 c7 c0 10 00 00 00 e8 07 00 00 00 f3 90 0f ae e8 eb f9 e8 07 00 00 00 f3 90 0f ae e8 eb f9 48 ff c8 75 e3 48 81 c4 00 01 00 00
>
> Not that it doesn't work, but the fewer bytes we replace, the better.
>
> So it could be made to simply call two functions. The replacing then
> turns into:
>
> [    0.438154] apply_alternatives: feat: 7*32+12, old: (ffffffff81060b60, len: 5), repl: (ffffffff82434692, len: 5), pad: 0
> [    0.440002] ffffffff81060b60: old_insn: e8 ab 73 01 00
> [    0.441003] ffffffff82434692: rpl_insn: e8 89 38 c4 fe
> [    0.441966] apply_alternatives: Fix CALL offset: 0x173bb, CALL 0xffffffff81077f20
> [    0.444002] ffffffff81060b60: final_insn: e8 bb 73 01 00
>
> The "old_insn" above is:
>
> ffffffff81060b60:       e8 ab 73 01 00          callq  ffffffff81077f10 <__fill_rsb_nop>
>
> and the "final_insn" is:
>
> ffffffff82434692:       e8 89 38 c4 fe          callq  ffffffff81077f20 <__fill_rsb>
>
> so both are CALLs with immediate operands, for which no branch-target
> speculation is involved.
>
> I had to add a new alternative_void_call() macro for that but that's
> straight-forward. Also, this causes objtool to segfault so Josh and I
> need to look at that first.
>
> Other than that, it gets a lot simpler this way:
>
> --
> diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
> index cf5961ca8677..a84863c1b7d3 100644
> --- a/arch/x86/include/asm/alternative.h
> +++ b/arch/x86/include/asm/alternative.h
> @@ -210,6 +210,11 @@ static inline int alternatives_text_reserved(void *start, void *end)
>	asm volatile (ALTERNATIVE("call %P[old]", "call %P[new]", feature) \
>		: output : [old] "i" (oldfunc), [new] "i" (newfunc), ## input)
>
> +/* Like alternative_io, but for replacing a direct call with another one. */
> +#define alternative_void_call(oldfunc, newfunc, feature, input...) \
> + asm volatile (ALTERNATIVE("call %P[old]", "call %P[new]", feature) \
> + : : [old] "i" (oldfunc), [new] "i" (newfunc), ## input)
But you aren't doing the call at all in the other case, and the
alternatives mechanism *always* handled the case where the first
'alternative' instruction is a branch, didn't it?
So couldn't it just be alternative(nop, call __fill_rsb_func)?
But I still don't understand why it matters.
> +void __fill_rsb(void)
> +{
> + unsigned long loops;
> +
> > + asm volatile (__stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1))
> > +	: "=r" (loops), ASM_CALL_CONSTRAINT
> > +	: : "memory" );
> +}
> +#endif
The out-of-line function should be __clear_rsb() if it's using
RSB_CLEAR_LOOPS, and __fill_rsb() if it's using RSB_FILL_LOOPS. I think
we're only using one right now but Andi at least is posting patches
which use the other, as part of the Skylake clusterfuck.