Re: [PATCH] KVM: SVM: Use 'unsigned long' for the physical address passed to VMSAVE

From: Sean Christopherson
Date: Tue Feb 02 2021 - 17:38:18 EST


On Tue, Feb 02, 2021, Sean Christopherson wrote:
> Take an 'unsigned long' instead of 'hpa_t' in the recently added vmsave()
> helper, as loading a 64-bit GPR isn't possible in 32-bit mode. This is
> properly reflected in the SVM ISA, which explicitly states that VMSAVE,
> VMLOAD, VMRUN, etc... consume rAX based on the effective address size.
>
> Don't bother with a WARN to detect breakage on 32-bit KVM, the VMCB PA is
> stored as an 'unsigned long', i.e. the bad address is long since gone.
> Not to mention that a 32-bit kernel is completely hosed if alloc_page()
> hands out pages in high memory.
>
> Reported-by: kernel test robot <lkp@xxxxxxxxx>
> Cc: Robert Hu <robert.hu@xxxxxxxxx>
> Cc: Farrah Chen <farrah.chen@xxxxxxxxx>
> Cc: Danmei Wei <danmei.wei@xxxxxxxxx>
> Cc: Tom Lendacky <Thomas.Lendacky@xxxxxxx>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>

Forgot the Fixes tag. Or just squash this.

Fixes: f84a54c04540 ("KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions")
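
For anyone tripping over the 32-bit angle: the "a" constraint pins the
operand to a single GPR (rAX/eAX), so a 64-bit 'hpa_t' can't be satisfied
on a 32-bit build, while 'unsigned long' matches the register width the
instruction actually consumes. Stripped-down sketch of the pattern, minus
the real helper's #UD fixup ('vmsave_sketch' is a made-up name, not the
kernel helper):

/*
 * VMSAVE implicitly reads the VMCB physical address from rAX, sized by
 * the effective address size (eAX for 32-bit code), hence 'unsigned long'.
 */
static inline void vmsave_sketch(unsigned long pa)
{
	asm volatile("vmsave %0" : : "a"(pa) : "memory");
}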

> ---
> arch/x86/kvm/svm/svm_ops.h | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..9f007bc8409a 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/svm_ops.h
> @@ -51,7 +51,12 @@ static inline void invlpga(unsigned long addr, u32 asid)
> 	svm_asm2(invlpga, "c"(asid), "a"(addr));
> }
>
> -static inline void vmsave(hpa_t pa)
> +/*
> + * Despite being a physical address, the portion of rAX that is consumed by
> + * VMSAVE, VMLOAD, etc... is still controlled by the effective address size,
> + * hence 'unsigned long' instead of 'hpa_t'.
> + */
> +static inline void vmsave(unsigned long pa)
> {
> 	svm_asm1(vmsave, "a" (pa), "memory");
> }
> --
> 2.30.0.365.g02bc693789-goog
>