Re: [PATCH 1/2 V4] KVM, SEV: Add support for SEV intra host migration

From: Marc Orr
Date: Fri Aug 20 2021 - 02:36:05 EST


On Thu, Aug 19, 2021 at 3:58 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Thu, Aug 19, 2021, Peter Gonda wrote:
> > > >
> > > > +static int svm_sev_lock_for_migration(struct kvm *kvm)
> > > > +{
> > > > +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > > > +	int ret;
> > > > +
> > > > +	/*
> > > > +	 * Bail if this VM is already involved in a migration to avoid deadlock
> > > > +	 * between two VMs trying to migrate to/from each other.
> > > > +	 */
> > > > +	spin_lock(&sev->migration_lock);
> > > > +	if (sev->migration_in_progress)
> > > > +		ret = -EBUSY;
> > > > +	else {
> > > > +		/*
> > > > +		 * Otherwise indicate VM is migrating and take the KVM lock.
> > > > +		 */
> > > > +		sev->migration_in_progress = true;
> > > > +		mutex_lock(&kvm->lock);
>
> Deadlock aside, mutex_lock() can sleep, which is not allowed while holding a
> spinlock, i.e. this patch does not work. That's why my suggestion did the
> crazy dance of "acquiring" a flag.
>
> What I don't know is why on earth I suggested a global spinlock, a simple atomic
> should work, e.g.
>
> 	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
> 		return -EBUSY;
>
> 	mutex_lock(&kvm->lock);
>
> and on the backend...
>
> 	mutex_unlock(&kvm->lock);
>
> 	atomic_set_release(&sev->migration_in_progress, 0);
>
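If I'm reading this right, folded back into the helpers the suggestion
would look something like the sketch below. This is my reconstruction,
not code from the patch, and it assumes migration_in_progress becomes
an atomic_t rather than a bool:

	static int svm_sev_lock_for_migration(struct kvm *kvm)
	{
		struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

		/*
		 * The flag acts as a one-shot "try lock": if another
		 * migration has already claimed this VM, fail instead of
		 * waiting, so two racing migrations can never block on
		 * each other's kvm->lock.
		 */
		if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
			return -EBUSY;

		mutex_lock(&kvm->lock);

		return 0;
	}

	static void svm_unlock_after_migration(struct kvm *kvm)
	{
		struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

		mutex_unlock(&kvm->lock);
		atomic_set_release(&sev->migration_in_progress, 0);
	}
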
> > > > +		ret = 0;
> > > > +	}
> > > > +	spin_unlock(&sev->migration_lock);
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static void svm_unlock_after_migration(struct kvm *kvm)
> > > > +{
> > > > +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > > > +
> > > > +	mutex_unlock(&kvm->lock);
> > > > +	WRITE_ONCE(sev->migration_in_progress, false);
> > > > +}
> > > > +
> > >
> > > This entire locking scheme seems over-complicated to me. Can we simply
> > > rely on `migration_lock` and get rid of `migration_in_progress`? I was
> > > chatting about these patches with Peter while he worked on this new
> > > version, and he mentioned that this locking scheme had been suggested
> > > by Sean in a previous review. Sean: what do you think? My rationale
> > > was that this is called via a VM-level ioctl, so serializing the
> > > entire code path on `migration_lock` seems fine. But maybe I'm missing
> > > something?
> >
> >
> > Marc, I think that having only the spin lock could result in a
> > deadlock. If userspace double migrated two VMs, call them A and B, A
> > could grab VM_A.spin_lock then VM_A.kvm_mutex. Meanwhile B could grab
> > VM_B.spin_lock and VM_B.kvm_mutex. Then A attempts to grab
> > VM_B.spin_lock and we have a deadlock. If the same happens with the
> > proposed scheme, when A attempts to lock B, VM_B.spin_lock will be
> > open but the bool will mark the VM as under migration, so A will
> > unlock and bail. Sean originally proposed a global spin lock, but I
> > thought a lock per kvm_sev_info struct would also be safe.
>
> Close. The issue is taking kvm->lock from both VM_A and VM_B. If userspace
> double migrates, we'll end up with lock orderings A->B and B->A, so we need
> a way to guarantee one of those wins. My proposed solution is to use a flag
> as a sort of one-off "try lock" to detect a mean userspace.

Got it now. Thanks to you both for the explanation. By the way, just
to make sure I completely follow, I assume that if a "double
migration" occurs, then userspace is misbehaving -- correct? But
presumably, we need to reason about how to respond to such
misbehavior so that buggy or malicious userspace code cannot stumble
over or exploit this scenario?
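
Also, to check that I follow how the flag plays out on the ioctl path:
both VMs get "locked" via the helper, and the -EBUSY is what breaks
the A->B vs. B->A cycle. Roughly like this? (Just a sketch on my end;
sev_migrate_from and its signature are my invention, not from the
patch.)

	static int sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
	{
		int ret;

		/* "Try lock" the destination VM first... */
		ret = svm_sev_lock_for_migration(dst_kvm);
		if (ret)
			return ret;

		/*
		 * ...then the source. If userspace double migrates, one of
		 * the two racing ioctls sees the other VM's flag already
		 * set and fails with -EBUSY instead of deadlocking on the
		 * second kvm->lock.
		 */
		ret = svm_sev_lock_for_migration(src_kvm);
		if (ret)
			goto out_unlock_dst;

		/* ... move the SEV context from src to dst ... */

		svm_unlock_after_migration(src_kvm);
	out_unlock_dst:
		svm_unlock_after_migration(dst_kvm);
		return ret;
	}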