2016-08-09 17:32+0800, Yang Zhang:
On 2016/8/9 2:16, Radim Krčmář wrote:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
@@ -6995,16 +6982,21 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
return 1;
}
+ if (cpu_has_vmx_msr_bitmap()) {
+ vmx->nested.msr_bitmap =
+ (unsigned long *)__get_free_page(GFP_KERNEL);
+ if (!vmx->nested.msr_bitmap)
+ goto out_msr_bitmap;
+ }
+
We export msr_bitmap to the guest even if it is not supported by the hardware,
so we need to allocate the msr_bitmap for L1 unconditionally.
We do emulate the feature, but vmx->nested.msr_bitmap is only used when the
hardware supports MSR bitmaps, in order to avoid some VM exits:
@@ -9957,10 +9938,10 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
}
if (cpu_has_vmx_msr_bitmap() &&
- exec_control & CPU_BASED_USE_MSR_BITMAPS) {
- nested_vmx_merge_msr_bitmap(vcpu, vmcs12);
- /* MSR_BITMAP will be set by following vmx_set_efer. */
- } else
+ exec_control & CPU_BASED_USE_MSR_BITMAPS &&
+ nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
+ ; /* MSR_BITMAP will be set by following vmx_set_efer. */
+ else
exec_control &= ~CPU_BASED_USE_MSR_BITMAPS;
The else branch is taken if !cpu_has_vmx_msr_bitmap() and disables MSR
bitmaps. There is a similar check in vmx_set_msr_bitmap(), so the NULL
pointer never even gets written to the VMCS.
KVM always uses L1's msr bitmaps when emulating the feature.
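To make the fallback concrete, here is a small stand-alone sketch (not the
actual vmx.c code; cpu_has_vmx_msr_bitmap() and nested_vmx_merge_msr_bitmap()
are stubbed, only the control bit value is the architectural one). It shows
why a NULL nested.msr_bitmap is harmless: if the hardware check or the merge
fails, CPU_BASED_USE_MSR_BITMAPS is cleared, every MSR access in L2 exits to
L0 and is emulated against L1's bitmap in software, and the bitmap address is
never consulted or written to the VMCS.

#include <stdbool.h>
#include <stdio.h>

/* "Use MSR bitmaps" is bit 28 of the primary processor-based controls. */
#define CPU_BASED_USE_MSR_BITMAPS (1u << 28)

/* Stub: does the host CPU support MSR bitmaps at all? */
static bool cpu_has_vmx_msr_bitmap(void) { return false; }

/* Stub: merging L0's and L1's bitmaps can also fail (e.g. a bad gpa). */
static bool nested_vmx_merge_msr_bitmap(void) { return true; }

int main(void)
{
	/* Pretend L1 asked for MSR bitmaps in vmcs12. */
	unsigned int exec_control = CPU_BASED_USE_MSR_BITMAPS;

	if (cpu_has_vmx_msr_bitmap() &&
	    (exec_control & CPU_BASED_USE_MSR_BITMAPS) &&
	    nested_vmx_merge_msr_bitmap())
		; /* keep the bit; MSR_BITMAP would point at the merged page */
	else
		exec_control &= ~CPU_BASED_USE_MSR_BITMAPS;

	/*
	 * With the bit cleared, RDMSR/WRMSR in L2 always exit, so the
	 * (possibly NULL) nested.msr_bitmap is never dereferenced.
	 */
	printf("use hardware MSR bitmap: %s\n",
	       (exec_control & CPU_BASED_USE_MSR_BITMAPS) ? "yes" : "no");
	return 0;
}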