Re: [PATCH 4.19 STABLE 2/2] KVM: VMX: Mark RCX, RDX and RSI as clobbered in vmx_vcpu_run()'s asm blob
From: Sean Christopherson
Date: Tue May 05 2020 - 02:27:34 EST
On Tue, May 05, 2020 at 08:15:02AM +0200, Greg Kroah-Hartman wrote:
> On Mon, May 04, 2020 at 06:23:48PM -0700, Sean Christopherson wrote:
> > Save RCX, RDX and RSI to fake outputs to coerce the compiler into
> > treating them as clobbered. RCX in particular is likely to be reused by
> > the compiler to dereference the 'struct vcpu_vmx' pointer, which will
> > result in a null pointer dereference now that RCX is zeroed by the asm
> > blob.
> >
> > Add ASM_CALL_CONSTRAINT to fudge around an issue where <something>
> > during modpost can't find vmx_return when specifying output constraints.
> >
> > Reported-by: Tobias Urdin <tobias.urdin@xxxxxxxxxx>
> > Fixes: b4be98039a92 ("KVM: VMX: Zero out *all* general purpose registers after VM-Exit")
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > ---
> > arch/x86/kvm/vmx.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index 5b06a98ffd4c..54c8b4dc750d 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -10882,7 +10882,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > ".global vmx_return \n\t"
> > "vmx_return: " _ASM_PTR " 2b \n\t"
> > ".popsection"
> > - : : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
> > + : ASM_CALL_CONSTRAINT, "=c"((int){0}), "=d"((int){0}), "=S"((int){0})
> > + : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
> > [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
> > [fail]"i"(offsetof(struct vcpu_vmx, fail)),
> > [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
> > --
> > 2.26.0
> >
>
> What is the git commit id of this patch in Linus's tree?
There is none. Upstream, at the time of the offending commit (b4be98039a92
in 4.19, 0e0ab73c9a024 upstream), the inline asm blob had already been
moved to a dedicated helper, __vmx_vcpu_run(), which was intentionally
placed in a separate compilation unit. Consuming the clobbered register
was therefore effectively impossible: %rcx is volatile in the calling
convention, and __vmx_vcpu_run() couldn't itself be inlined.
To make things more confusing, the inline asm blob was moved into a proper
asm subroutine shortly thereafter. Upstream really starts to diverge from
this code right around the time of this commit.