Re: [PATCH] KVM: SEV: Mark nested locking of vcpu->lock

From: Sean Christopherson
Date: Mon Apr 04 2022 - 17:38:36 EST


On Mon, Apr 04, 2022, Peter Gonda wrote:
> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> source and target vcpu->locks. Mark the nested subclasses to avoid false
> positives from lockdep.
>
> Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
> Reported-by: John Sperbeck <jsperbeck@xxxxxxxxxx>
> Suggested-by: David Rientjes <rientjes@xxxxxxxxxx>
> Signed-off-by: Peter Gonda <pgonda@xxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: kvm@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> ---
>
> Tested by running sev_migrate_tests with lockdep enabled. Before this
> patch, lockdep warns in sev_lock_vcpus_for_migration(); after it, no
> warnings are seen.
>
> ---
> arch/x86/kvm/svm/sev.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75fa6dd268f0..8f77421c1c4b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1591,15 +1591,16 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
> atomic_set_release(&src_sev->migration_in_progress, 0);
> }
>
> -
> -static int sev_lock_vcpus_for_migration(struct kvm *kvm)
> +static int sev_lock_vcpus_for_migration(struct kvm *kvm, unsigned int *subclass)
> {
> struct kvm_vcpu *vcpu;
> unsigned long i, j;
>
> kvm_for_each_vcpu(i, vcpu, kvm) {
> - if (mutex_lock_killable(&vcpu->mutex))
> + if (mutex_lock_killable_nested(&vcpu->mutex, *subclass))
> goto out_unlock;
> +
> + ++(*subclass);

This is rather gross, and I'm guessing it adds extra work for the non-lockdep
case, assuming the compiler isn't so clever that it can figure out that the result
is never used. Not that this is a hot path...

Does each lock actually need a separate subclass? If so, why don't the other
paths that lock all vCPUs complain?

If differentiating the two VMs is sufficient, then we can pass in SINGLE_DEPTH_NESTING
for the second round of locks. If a per-vCPU subclass is required, we can use the
vCPU index and assign even subclasses to one VM and odd subclasses to the other,
e.g. this should work and
compiles to a nop when LOCKDEP is disabled (compile tested only). It's still gross,
but we could pretty it up, e.g. add defines for the 0/1 param.

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75fa6dd268f0..9be35902b809 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1591,14 +1591,13 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
atomic_set_release(&src_sev->migration_in_progress, 0);
}

-
-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+static int sev_lock_vcpus_for_migration(struct kvm *kvm, int mod)
{
struct kvm_vcpu *vcpu;
unsigned long i, j;

kvm_for_each_vcpu(i, vcpu, kvm) {
- if (mutex_lock_killable(&vcpu->mutex))
+ if (mutex_lock_killable_nested(&vcpu->mutex, i * 2 + mod))
goto out_unlock;
}

@@ -1745,10 +1744,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
charged = true;
}

- ret = sev_lock_vcpus_for_migration(kvm);
+ ret = sev_lock_vcpus_for_migration(kvm, 0);
if (ret)
goto out_dst_cgroup;
- ret = sev_lock_vcpus_for_migration(source_kvm);
+ ret = sev_lock_vcpus_for_migration(source_kvm, 1);
if (ret)
goto out_dst_vcpu;
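
If we do go the evens/odds route, prettying it up could be as simple as
wrapping the 0/1 param in defines, e.g. something like the below (names are
purely illustrative, untested sketch):

/* Hypothetical names for the 0/1 "mod" param, illustrative only. */
#define SEV_MIGRATION_DST_VCPU_SUBCLASS	0
#define SEV_MIGRATION_SRC_VCPU_SUBCLASS	1

	ret = sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_DST_VCPU_SUBCLASS);
	if (ret)
		goto out_dst_cgroup;
	ret = sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_SRC_VCPU_SUBCLASS);
	if (ret)
		goto out_dst_vcpu;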