Re: [PATCH v2 2/3] kvm, vmx: move register clearing out of assembly path

From: Sean Christopherson
Date: Mon Oct 29 2018 - 13:26:10 EST


On Mon, Oct 29, 2018 at 04:40:43PM +0100, Julian Stecklina wrote:
> Split the security-related register clearing out of the large inline
> assembly VM entry path. This results in two slightly less complicated
> inline assembly statements, where it is clearer what each one does.
>
> Signed-off-by: Julian Stecklina <jsteckli@xxxxxxxxx>
> Reviewed-by: Jan H. Schönherr <jschoenh@xxxxxxxxx>
> Reviewed-by: Konrad Jan Miller <kjm@xxxxxxxxx>
> Reviewed-by: Jim Mattson <jmattson@xxxxxxxxxx>
> ---
> arch/x86/kvm/vmx.c | 45 ++++++++++++++++++++++++++++-----------------
> 1 file changed, 28 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index a6e5a5c..29a2ee7 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -11281,24 +11281,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
> "mov %%r13, %c[r13](%0) \n\t"
> "mov %%r14, %c[r14](%0) \n\t"
> "mov %%r15, %c[r15](%0) \n\t"
> - /*
> - * Clear host registers marked as clobbered to prevent
> - * speculative use.
> - */
> - "xor %%r8d, %%r8d \n\t"
> - "xor %%r9d, %%r9d \n\t"
> - "xor %%r10d, %%r10d \n\t"
> - "xor %%r11d, %%r11d \n\t"
> - "xor %%r12d, %%r12d \n\t"
> - "xor %%r13d, %%r13d \n\t"
> - "xor %%r14d, %%r14d \n\t"
> - "xor %%r15d, %%r15d \n\t"
> #endif
> -
> - "xor %%eax, %%eax \n\t"
> - "xor %%ebx, %%ebx \n\t"
> - "xor %%esi, %%esi \n\t"
> - "xor %%edi, %%edi \n\t"
> "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t"
> ".pushsection .rodata \n\t"
> ".global vmx_return \n\t"
> @@ -11336,6 +11319,34 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
> );
>
> /*
> + * Don't let guest register values survive. Registers that cannot
> + * contain guest values anymore are not touched.

I think it's a good idea to explicitly call out that clearing the GPRs
is done to prevent speculative use. Simply stating that we don't want
to let guest register values survive doesn't explain *why*.

What about:

/*
* Explicitly clear (in addition to marking them as clobbered) all GPRs
* that have not been loaded with host state to prevent speculatively
* using the guest's values.
*/
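i.e., sitting directly above the new asm statement, so the hunk would
start like this (your code with the comment swapped in; remaining xors
elided for brevity):

	/*
	 * Explicitly clear (in addition to marking them as clobbered) all GPRs
	 * that have not been loaded with host state to prevent speculatively
	 * using the guest's values.
	 */
	asm volatile (
		"xor %%eax, %%eax \n\t"
		...
	);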

> + */
> + asm volatile (
> + "xor %%eax, %%eax \n\t"
> + "xor %%ebx, %%ebx \n\t"
> + "xor %%esi, %%esi \n\t"
> + "xor %%edi, %%edi \n\t"
> +#ifdef CONFIG_X86_64
> + "xor %%r8d, %%r8d \n\t"
> + "xor %%r9d, %%r9d \n\t"
> + "xor %%r10d, %%r10d \n\t"
> + "xor %%r11d, %%r11d \n\t"
> + "xor %%r12d, %%r12d \n\t"
> + "xor %%r13d, %%r13d \n\t"
> + "xor %%r14d, %%r14d \n\t"
> + "xor %%r15d, %%r15d \n\t"
> +#endif
> + ::: "cc"
> +#ifdef CONFIG_X86_64
> + , "rax", "rbx", "rsi", "rdi"
> + , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15"
> +#else
> + , "eax", "ebx", "esi", "edi"
> +#endif
> + );
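As an aside, the 32-bit xors are still sufficient here: on x86-64,
writing a 32-bit register zero-extends into the full 64-bit register,
and the 32-bit form is never a longer encoding (it's shorter for the
legacy registers), e.g.:

	xor %r8d, %r8d		/* clears all 64 bits of %r8 */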
> +
> + /*
> * We do not use IBRS in the kernel. If this vCPU has used the
> * SPEC_CTRL MSR it may have left it on; save the value and
> * turn it off. This is much more efficient than blindly adding
> --
> 2.7.4
>
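
FWIW, the xor-plus-clobber pattern also compiles standalone; a quick
userspace sketch (x86-64, GCC/Clang assumed; not from the patch, just a
sanity check of the idiom):

	#include <stdio.h>

	/*
	 * Zero a few caller-saved GPRs and tell the compiler they are
	 * clobbered, mirroring the pattern in vmx_vcpu_run().
	 */
	static void clear_some_regs(void)
	{
		asm volatile (
			"xor %%r8d, %%r8d \n\t"
			"xor %%r9d, %%r9d \n\t"
			"xor %%r10d, %%r10d \n\t"
			"xor %%r11d, %%r11d \n\t"
			::: "cc", "r8", "r9", "r10", "r11"
		);
	}

	int main(void)
	{
		clear_some_regs();
		puts("ok");
		return 0;
	}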