Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
From: Sean Christopherson
Date: Wed Apr 28 2021 - 21:42:18 EST
On Thu, Apr 29, 2021, Sean Christopherson wrote:
> On Thu, Apr 29, 2021, Paolo Bonzini wrote:
> > it's not ugly and it's still relatively easy to explain.
>
> LOL, that's debatable.
>
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 2799c6660cce..48929dd5fb29 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1377,16 +1374,17 @@ static int kvm_set_memslot(struct kvm *kvm,
> > goto out_slots;
> > update_memslots(slots, new, change);
> > - slots = install_new_memslots(kvm, as_id, slots);
> > + install_new_memslots(kvm, as_id, slots);
> > kvm_arch_commit_memory_region(kvm, mem, old, new, change);
> > -
> > - kvfree(slots);
> > return 0;
> > out_slots:
> > - if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
> > + if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
> > + slot = id_to_memslot(slots, old->id);
> > + slot->flags &= ~KVM_MEMSLOT_INVALID;
>
> Modifying flags on an SRCU-protected field outside of said protection is sketchy.
> It's probably ok to do this prior to the generation update, emphasis on
> "probably". Of course, the VM is also likely about to be killed in this case...
>
> > slots = install_new_memslots(kvm, as_id, slots);
>
> This will explode if memory allocation for KVM_MR_MOVE fails. In that case,
> the rmaps for "slots" will have been cleared by kvm_alloc_memslot_metadata().
Gah, that's all wrong; "slots" is the second duplicate, and the clear happens on
the new slot, not on the old slot with the same id.
Though I still think temporarily dropping the SRCU lock would be simpler. If
performance is a concern, it could be mitigated by adding a capability to
preallocate the rmaps.
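
E.g. something along these lines (a rough, untested sketch; the wrapper name and
kvm_shadow_mmu_active() are placeholders for whatever this series ends up with):

static int mmu_activate_shadow_mmu_locked(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	int r;

	/* Placeholder predicate for "rmaps/shadow MMU already active". */
	if (kvm_shadow_mmu_active(kvm))
		return 0;

	/*
	 * Drop the vCPU's SRCU read lock so that activate_shadow_mmu() can
	 * take slots_lock without deadlocking against the
	 * synchronize_srcu_expedited() in install_new_memslots().
	 */
	srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);

	r = activate_shadow_mmu(kvm);

	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
	return r;
}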
> > + }
> > kvfree(slots);
> > return r;
> > }
>
> The SRCU index is already tracked in vcpu->srcu_idx, why not temporarily drop
> the SRCU lock if activate_shadow_mmu() needs to do work so that it can take
> slots_lock? That seems simpler and I think would avoid modifying the common
> memslot code.
>
> kvm_arch_async_page_ready() is the only path for reaching kvm_mmu_reload() that
> looks scary, but that should be impossible to reach with the correct MMU context.
> We could always add an explicit sanity check on the rmaps being available.
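
E.g. a rough sketch of such a check (untested; kvm_memslots_have_rmaps() is a
placeholder for whatever predicate this series ends up with):

static void kvm_mmu_assert_rmaps_present(struct kvm *kvm,
					 const struct kvm_memory_slot *slot)
{
	if (!kvm_memslots_have_rmaps(kvm))
		return;

	/*
	 * Once the shadow MMU is active, any memslot with pages must have
	 * its rmap arrays allocated before the fault/reload paths use them.
	 */
	WARN_ON_ONCE(slot->npages && !slot->arch.rmap[0]);
}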