Re: [PATCH v10 20/39] KVM: nVMX: hyper-v: Enable L2 TLB flush
From: Sean Christopherson
Date: Thu Sep 22 2022 - 12:06:04 EST
On Wed, Sep 21, 2022, Vitaly Kuznetsov wrote:
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 0634518a6719..1451a7a2c488 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1132,6 +1132,17 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> + /*
> + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
> + * L2's VP_ID upon request from the guest. Make sure we check for
> + * pending entries for the case when the request got misplaced (e.g.
Kind of a nit, but I'd prefer to avoid "misplaced", as that implies KVM puts entries
into the wrong FIFO. The issue isn't that KVM puts entries in the wrong FIFO,
it's that the FIFO is filled asynchronously by other vCPUs, and so it's possible
to switch to a FIFO that has valid entries without a pending request.
And thinking about this, KVM_REQ_HV_TLB_FLUSH shouldn't be handled in
kvm_service_local_tlb_flush_requests().  My initial reaction to this patch was that
queueing the request here is too late because the switch has already happened,
i.e. nVMX has already called kvm_service_local_tlb_flush_requests(), and so the
request will be serviced for the new context, not the old one.
But making the request for the _new_ context is correct _and_ necessary, e.g. given
  vCPU0                    vCPU1
                           FIFO[L1].insert
                           FIFO[L1].insert
  L1 => L2 transition
                           FIFO[L1].insert
                           FIFO[L1].insert
                           KVM_REQ_HV_TLB_FLUSH
if nVMX made the request for the old context, then this would happen:
  vCPU0                    vCPU1
                           FIFO[L1].insert
                           FIFO[L1].insert
                           KVM_REQ_HV_TLB_FLUSH
  service FIFO[L1]
  L1 => L2 transition
                           FIFO[L1].insert
                           FIFO[L1].insert
                           KVM_REQ_HV_TLB_FLUSH
  service FIFO[L2]
  ...
                           KVM_REQ_HV_TLB_FLUSH
  service FIFO[L2]
  L2 => L1 transition

  Run L1 with FIFO[L1] entries!!!
whereas what is being done in this patch is:
  vCPU0                    vCPU1
                           FIFO[L1].insert
                           FIFO[L1].insert
  L1 => L2 transition
  KVM_REQ_HV_TLB_FLUSH
  service FIFO[L2]
                           FIFO[L1].insert
                           FIFO[L1].insert
                           KVM_REQ_HV_TLB_FLUSH
  service FIFO[L2]
  ...
  L2 => L1 transition
  KVM_REQ_HV_TLB_FLUSH
  service FIFO[L1]
which is correct and ensures that KVM will always consume FIFO entries prior to
running the associated context.
In other words, unlike KVM_REQ_TLB_FLUSH_CURRENT and KVM_REQ_TLB_FLUSH_GUEST,
KVM_REQ_HV_TLB_FLUSH is not a "local" request. It's much more like KVM_REQ_TLB_FLUSH
in that it can come from other vCPUs, i.e. is effectively a "remote" request.
So rather than handle KVM_REQ_HV_TLB_FLUSH in the "local" path, it should be handled
only in the request path. Handling the request in kvm_service_local_tlb_flush_requests()
won't break anything, but conceptually it's wrong and as a result it's misleading
because it implies that nested transitions could also be handled by forcing
kvm_service_local_tlb_flush_requests() to service flushes for the current, i.e.
previous, context on nested transitions, but that wouldn't work (see example above).
I.e. we should end up with something like this:
	/*
	 * Note, the order matters here, as flushing "all" TLB entries
	 * also flushes the "current" TLB entries, and flushing "guest"
	 * TLB entries is a superset of Hyper-V's fine-grained flushing.
	 * I.e. servicing the flush "all" will clear any request to
	 * flush "current", and flushing "guest" will clear any request
	 * to service Hyper-V's fine-grained flush.
	 */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
		kvm_vcpu_flush_tlb_all(vcpu);

	kvm_service_local_tlb_flush_requests(vcpu);

	/*
	 * Fall back to a "full" guest flush if Hyper-V's precise
	 * flushing fails.  Note, Hyper-V's flushing is per-vCPU, but
	 * the flushes are considered "remote" and not "local" because
	 * the requests can be initiated from other vCPUs.
	 */
	if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu) &&
	    kvm_hv_vcpu_flush_tlb(vcpu))
		kvm_vcpu_flush_tlb_guest(vcpu);
> + * a transition from L2->L1 happened while processing L2 TLB flush
> + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> + * anything if there are no requests in the corresponding buffer.
> + */
> + if (to_hv_vcpu(vcpu))
This should be:
if (to_hv_vcpu(vcpu) && enable_ept)
otherwise KVM will fall back to flushing the guest, which is the entire TLB, when
EPT is disabled. I'm guessing this applies to SVM+NPT as well.
> + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);