ltykernel@xxxxxxxxx writes:
From: Lan Tianyu <Tianyu.Lan@xxxxxxxxxxxxx>
This patch initializes ept_pointer to INVALID_PAGE and checks it before
flushing the EPT TLB. If ept_pointer is invalid, the flush request is
bypassed.
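
(For readers following along: KVM's page-validity helpers, defined in
arch/x86/include/asm/kvm_host.h, are simply

	#define INVALID_PAGE	(~(hpa_t)0)
	#define VALID_PAGE(x)	((x) != INVALID_PAGE)

so a vCPU whose ept_pointer still holds INVALID_PAGE fails the
VALID_PAGE() check until a real EPT pointer is written.)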
Signed-off-by: Lan Tianyu <Tianyu.Lan@xxxxxxxxxxxxx>
---
arch/x86/kvm/vmx.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4555077d69ce..edbc96cb990a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1580,14 +1580,22 @@ static int vmx_hv_remote_flush_tlb(struct kvm *kvm)
/*
* FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs the address of the
* base of EPT PML4 table, strip off EPT configuration information.
+ * If ept_pointer is invalid, bypass the flush request.
*/
if (to_kvm_vmx(kvm)->ept_pointers_match != EPT_POINTERS_MATCH) {
- kvm_for_each_vcpu(i, vcpu, kvm)
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ if (!VALID_PAGE(to_vmx(vcpu)->ept_pointer))
+ return 0;
+
To be honest, I fail to understand the reason behind this patch: instead
of doing a single unneeded flush request with ept_pointer==0 (right after
a vCPU is initialized) we now perform the check on every flush. Could you
please elaborate on why this is needed?
ret |= hyperv_flush_guest_mapping(
- to_vmx(kvm_get_vcpu(kvm, i))->ept_pointer & PAGE_MASK);
+ to_vmx(vcpu)->ept_pointer & PAGE_MASK);
I would use a local variable for 'to_vmx(vcpu)->ept_pointer': check it
with VALID_PAGE() and apply PAGE_MASK only in the flush call - the lower
bits are just EPT configuration and irrelevant to the flush, while
VALID_PAGE() needs the unmasked value, since INVALID_PAGE is all-ones
and masking first would defeat the comparison.
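
A minimal sketch of what I mean (untested; 'tmp_eptp' is just an
illustrative name):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		u64 tmp_eptp = to_vmx(vcpu)->ept_pointer;

		/* Bypass the flush if this vCPU's EPT pointer is not set yet. */
		if (!VALID_PAGE(tmp_eptp))
			return 0;

		/* Strip the EPT configuration bits only for the hypercall. */
		ret |= hyperv_flush_guest_mapping(tmp_eptp & PAGE_MASK);
	}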
+ }
} else {
+ if (!VALID_PAGE(to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer))
+ return 0;
Ditto.
+
ret = hyperv_flush_guest_mapping(
- to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer & PAGE_MASK);
+ to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer & PAGE_MASK);
This whitespace-only change doesn't belong in this patch.
}
spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
@@ -11568,6 +11576,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
vmx->pi_desc.nv = POSTED_INTR_VECTOR;
vmx->pi_desc.sn = 1;
+ vmx->ept_pointer = INVALID_PAGE;
+
return &vmx->vcpu;
free_vmcs: