Re: [PATCH] KVM: VMX: Fix vmx->nested freeing when no SMI handler

From: Liran Alon
Date: Wed Nov 22 2017 - 04:09:10 EST

On 22/11/17 10:45, Liran Alon wrote:


On 22/11/17 09:56, Wanpeng Li wrote:
From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>

Reported by syzkaller:

------------[ cut here ]------------
WARNING: CPU: 5 PID: 2939 at arch/x86/kvm/vmx.c:3844 free_loaded_vmcs+0x77/0x80 [kvm_intel]
CPU: 5 PID: 2939 Comm: repro Not tainted 4.14.0+ #26
RIP: 0010:free_loaded_vmcs+0x77/0x80 [kvm_intel]
Call Trace:
vmx_free_vcpu+0xda/0x130 [kvm_intel]
kvm_arch_destroy_vm+0x192/0x290 [kvm]
kvm_put_kvm+0x262/0x560 [kvm]
kvm_vm_release+0x2c/0x30 [kvm]
__fput+0x190/0x370
task_work_run+0xa1/0xd0
do_exit+0x4d2/0x13e0
do_group_exit+0x89/0x140
get_signal+0x318/0xb80
do_signal+0x8c/0xb40
exit_to_usermode_loop+0xe4/0x140
syscall_return_slowpath+0x206/0x230
entry_SYSCALL_64_fastpath+0x98/0x9a

The syzkaller testcase executes the VMXON/VMLAUNCH instructions, so the
vmx->nested state is populated, and it also issues the KVM_SMI ioctl.
However, the testcase is just a simple C program and is not launched by
something like SeaBIOS, which implements an SMI handler. Commit
05cade71cf (KVM: nSVM: fix SMI injection in guest mode) gets out of
guest mode and sets nested.vmxon to false for the duration of SMM,
according to SDM 34.14.1 ("leave VMX operation" upon entering SMM). We
can't alloc/free the vmx->nested state each time we enter/exit SMM
since that would induce more overhead. So vmx_pre_enter_smm() marks
nested.vmxon false even though the vmx->nested state is still
populated. The expectation is that em_rsm() will mark nested.vmxon true
again. However, the SMI handler/RSM never executes since there is
nothing like SeaBIOS in this scenario. As a result, free_nested() fails
to free the vmx->nested state because vmx->nested.vmxon is false, which
results in the above warning.
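
For reference, the save/clear logic in vmx_pre_enter_smm() (as
introduced by 05cade71cf) looks roughly like this (an abridged sketch,
not a verbatim quote of the commit):

	static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* Forcibly leave guest mode before entering SMM. */
		vmx->nested.smm.guest_mode = is_guest_mode(vcpu);
		if (vmx->nested.smm.guest_mode)
			nested_vmx_vmexit(vcpu, -1, 0, 0);

		/*
		 * Save vmxon and clear it for the duration of SMM;
		 * em_rsm() is expected to restore it later.
		 */
		vmx->nested.smm.vmxon = vmx->nested.vmxon;
		vmx->nested.vmxon = false;
		return 0;
	}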

This patch fixes it by also considering the no-SMI-handler case.
Luckily, vmx->nested.smm.vmxon is set from the value of
vmx->nested.vmxon in vmx_pre_enter_smm(), so we can take advantage of
it and free the vmx->nested state when L1 goes down.

Reported-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Fixes: 05cade71cf (KVM: nSVM: fix SMI injection in guest mode)
Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
---
arch/x86/kvm/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index dccc0f7..ed22425 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7372,7 +7372,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
*/
static void free_nested(struct vcpu_vmx *vmx)
{
- if (!vmx->nested.vmxon)
+ if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
return;

vmx->nested.vmxon = false;

Funny bug. Great analysis.
Reviewed-by: Liran Alon <liran.alon@xxxxxxxxxx>
Actually, I would add one more thing to the patch:
I think we should also set "vmx->nested.smm.vmxon = false;" after
"vmx->nested.vmxon = false;" to correctly handle the case where VMXOFF
is executed from the SMI handler. Otherwise, when the SMI handler
executes RSM, we will reach vmx_pre_leave_smm(), which will set
"vmx->nested.vmxon = true;" again, which I think shouldn't happen.
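
I.e., on top of this patch, the suggested change to free_nested()
would look roughly like this (a sketch, with "+" marking the added
line):

	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
		return;

	vmx->nested.vmxon = false;
+	/*
+	 * Don't let a later RSM via vmx_pre_leave_smm() re-enter
+	 * VMX operation after the nested state has been freed.
+	 */
+	vmx->nested.smm.vmxon = false;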

Regards,
-Liran