Re: [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load

From: Sean Christopherson
Date: Mon Dec 21 2020 - 13:19:58 EST


On Fri, Dec 18, 2020, Nathan Chancellor wrote:
> When using LLVM's integrated assembler (LLVM_IAS=1) while building
> x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> error occurs:
>
> $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
> arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
> 	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> 	             ^
> arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
> #define __ex(x) __kvm_handle_fault_on_reboot(x)
>                 ^
> ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
> 	"666: \n\t" \
> 	        ^
> <inline asm>:2:2: note: instantiated into assembly here
> 	vmsave
> 	^
> 1 error generated.
>
> This happens because LLVM's integrated assembler does not currently
> support the vmsave mnemonic without an explicit fixed register operand
> (%rax for 64-bit, %eax for 32-bit). This will be fixed in LLVM 12, but
> the kernel currently supports LLVM 10.0.1 and newer, so the operand
> needs to be spelled out.
>
> Add the explicit register operand using the _ASM_AX macro, which
> matches the vmsave usage in vmenter.S.

There are also two instances in tools/testing/selftests/kvm/lib/x86_64/svm.c
that likely need to be fixed.
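
For illustration only, a minimal sketch of what those selftest fixes
might look like (hypothetical, not the actual diff; it assumes the
selftest asm holds the VMCB GPA in rAX, and the selftests build 64-bit
only, so %rax can be hardcoded):

	/*
	 * Hypothetical sketch: add the explicit rAX operand so LLVM's
	 * integrated assembler accepts the instruction.
	 */
	asm volatile("vmsave %%rax\n\t" : : "a" (vmcb_gpa) : "memory");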

> Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
> Link: https://reviews.llvm.org/D93524
> Link: https://github.com/ClangBuiltLinux/linux/issues/1216
> Signed-off-by: Nathan Chancellor <natechancellor@xxxxxxxxx>
> ---
> arch/x86/kvm/svm/sev.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index e57847ff8bd2..958370758ed0 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
>  	 * of which one step is to perform a VMLOAD. Since hardware does not
>  	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
>  	 */
> -	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> +	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");

I vote to add a helper in svm.h to encode VMSAVE, even if there is only the one
user. Between the rAX behavior (it _must_ be rAX) and taking the HPA of the
VMCB, the semantics of VMSAVE are just odd enough to cause a bit of head
scratching when reading the code for the first time. E.g. something like:

void vmsave(struct page *vmcb)
{
	/*
	 * VMSAVE takes the HPA of a VMCB in rAX (hardcoded by VMSAVE itself).
	 * The _ASM_AX operand is required to specify the address size, which
	 * means VMSAVE cannot consume a 64-bit address outside of 64-bit mode.
	 */
	hpa_t vmcb_pa = __sme_page_pa(vmcb);

	BUG_ON(!IS_ENABLED(CONFIG_X86_64) && (vmcb_pa >> 32));

	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (vmcb_pa) : "memory");
}
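
With such a helper, the call site in sev_es_vcpu_load() would then
collapse to something like (sketch, assuming the helper lands where
__sme_page_pa() is visible, e.g. svm.h):

	/* Sketch of the simplified call site. */
	vmsave(sd->save_area);

The BUG_ON covers the case the comment describes: outside of 64-bit
mode VMSAVE takes the address in eAX, so a save area above 4GiB cannot
be expressed.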

>
>  	/*
>  	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
> --
> 2.30.0.rc0
>