Re: [PATCH] Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"

From: Paolo Bonzini
Date: Mon Sep 09 2019 - 07:06:13 EST


On 09/09/19 12:56, Waiman Long wrote:
> On 9/9/19 2:40 AM, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>>
>> This patch reverts commit 75437bb304b20 ("locking/pvqspinlock: Don't wait
>> if vCPU is preempted"), which we found to cause a severe performance
>> regression.
>>
>> On a Xeon Skylake box (2 sockets, 40 cores, 80 threads) running three VMs
>> of 80 vCPUs each, the score of ebizzy -M drops from 13000-14000 records/s
>> to 1700-1800 records/s with this commit.
>>
>> Host                              Guest              score
>>
>> vanilla + w/o kvm optimizations   vanilla            1700-1800 records/s
>> vanilla + w/o kvm optimizations   vanilla + revert   13000-14000 records/s
>> vanilla + w/  kvm optimizations   vanilla            4500-5000 records/s
>> vanilla + w/  kvm optimizations   vanilla + revert   14000-15500 records/s
>>
>> Exiting the spin loop early because of this aggressive wait-early
>> mechanism leads to premature yields and incurs extra scheduling latency
>> in over-subscribed scenarios.
>>
>> kvm optimizations:
>> [1] commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts)
>> [2] commit 266e85a5ec9 (KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption)
>>
>> Tested-by: loobinliu@xxxxxxxxxxx
>> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
>> Cc: Waiman Long <longman@xxxxxxxxxx>
>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
>> Cc: loobinliu@xxxxxxxxxxx
>> Cc: stable@xxxxxxxxxxxxxxx
>> Fixes: 75437bb304b20 ("locking/pvqspinlock: Don't wait if vCPU is preempted")
>> Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>> ---
>> kernel/locking/qspinlock_paravirt.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
>> index 89bab07..e84d21a 100644
>> --- a/kernel/locking/qspinlock_paravirt.h
>> +++ b/kernel/locking/qspinlock_paravirt.h
>> @@ -269,7 +269,7 @@ pv_wait_early(struct pv_node *prev, int loop)
>>  	if ((loop & PV_PREV_CHECK_MASK) != 0)
>>  		return false;
>>  
>> -	return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
>> +	return READ_ONCE(prev->state) != vcpu_running;
>>  }
>>  
>>  /*
>
> There are several possibilities for this performance regression:
>
> 1) Multiple vcpus calling vcpu_is_preempted() repeatedly may cause
> cacheline contention issues, depending on how that callback is
> implemented.

Unlikely: it is a single percpu read.
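
For reference, on an x86 KVM guest vcpu_is_preempted() boils down to a
single per-cpu load from the preallocated steal time area, along these
lines (roughly what arch/x86/kernel/kvm.c does):

__visible bool __kvm_vcpu_is_preempted(long cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	/* The host sets this flag when it schedules the vCPU out. */
	return !!(src->preempted & KVM_VCPU_PREEMPTED);
}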

> 2) KVM may set the preempt flag for a short period whenever a vmexit
> happens, even if a vmenter is executed shortly afterwards. In this case,
> we may want to use a more durable vcpu suspend flag that indicates the
> vcpu won't get a physical CPU back for a longer period of time.

KVM sets it for exits to userspace, but those shouldn't really happen on
a properly-configured system.

However, it's easy to test this theory:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e302e977dac..feb6c75a7a88 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3368,26 +3368,28 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	int idx;
 
-	if (vcpu->preempted)
+	if (vcpu->preempted) {
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops->get_cpl(vcpu);
 
-	/*
-	 * Disable page faults because we're in atomic context here.
-	 * kvm_write_guest_offset_cached() would call might_fault()
-	 * that relies on pagefault_disable() to tell if there's a
-	 * bug. NOTE: the write to guest memory may not go through if
-	 * during postcopy live migration or if there's heavy guest
-	 * paging.
-	 */
-	pagefault_disable();
-	/*
-	 * kvm_memslots() will be called by
-	 * kvm_write_guest_offset_cached() so take the srcu lock.
-	 */
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	kvm_steal_time_set_preempted(vcpu);
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-	pagefault_enable();
+		/*
+		 * Disable page faults because we're in atomic context here.
+		 * kvm_write_guest_offset_cached() would call might_fault()
+		 * that relies on pagefault_disable() to tell if there's a
+		 * bug. NOTE: the write to guest memory may not go through if
+		 * during postcopy live migration or if there's heavy guest
+		 * paging.
+		 */
+		pagefault_disable();
+		/*
+		 * kvm_memslots() will be called by
+		 * kvm_write_guest_offset_cached() so take the srcu lock.
+		 */
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
+		kvm_steal_time_set_preempted(vcpu);
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+		pagefault_enable();
+	}
+
 	kvm_x86_ops->vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*

Wanpeng, can you try?

Paolo

> Perhaps you can add a lock event counter to count the number of
> wait_early events caused by vcpu_is_preempted() being true, to see if it
> really causes many more wait_early events than without the
> vcpu_is_preempted() call.
>
> I have no objection to this; I just want to find out the root cause.
>
> Cheers,
> Longman
>
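
As an aside, a minimal sketch of the counter Longman suggests could look
like the below, reusing the existing lock event framework in
kernel/locking/ (the pv_wait_early_preempt event name is made up here for
illustration):

/* kernel/locking/lock_events_list.h: add a hypothetical new event */
LOCK_EVENT(pv_wait_early_preempt)	/* # of early waits due to preemption */

/* kernel/locking/qspinlock_paravirt.h: count which condition fired */
static inline bool pv_wait_early(struct pv_node *prev, int loop)
{
	if ((loop & PV_PREV_CHECK_MASK) != 0)
		return false;

	if (READ_ONCE(prev->state) != vcpu_running)
		return true;

	if (vcpu_is_preempted(prev->cpu)) {
		/* hypothetical counter, readable via the lock event debugfs files */
		lockevent_inc(pv_wait_early_preempt);
		return true;
	}

	return false;
}

The counts would then show whether the vcpu_is_preempted() condition is
responsible for the bulk of the early waits.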