[tip:locking/core] x86/kvm: Support the vCPU preemption check

From: tip-bot for Pan Xinhui
Date: Tue Nov 22 2016 - 07:35:00 EST


Commit-ID: 0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb
Gitweb: http://git.kernel.org/tip/0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb
Author: Pan Xinhui <xinhui.pan@xxxxxxxxxxxxxxxxxx>
AuthorDate: Wed, 2 Nov 2016 05:08:35 -0400
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Tue, 22 Nov 2016 12:48:08 +0100

x86/kvm: Support the vCPU preemption check

Support the vcpu_is_preempted() functionality under KVM. This
enhances lock performance on overcommitted hosts (more runnable vCPUs
than physical CPUs in the system), since busy-waiting on a preempted
vCPU hurts system performance far more than yielding early does.
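
( For illustration only, not part of this patch: a simplified sketch
of the kind of owner-spin loop that benefits, modelled on what the
mutex/rwsem optimistic-spin paths do once vcpu_is_preempted() is
available. Details such as the RCU protection of the real spinners
are omitted here: )

	/*
	 * Sketch: spin while the owner still holds the lock, but give up
	 * as soon as the owner's vCPU has been scheduled out by the host,
	 * because it cannot release the lock until it runs again.
	 */
	static bool spin_on_owner(struct mutex *lock, struct task_struct *owner)
	{
		while (READ_ONCE(lock->owner) == owner) {
			if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
				return false;	/* yield instead of burning cycles */
			cpu_relax();
		}
		return true;
	}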

Use struct kvm_steal_time::preempted to indicate whether a vCPU is
currently running or has been preempted.
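
( A companion patch in this series wires the field into the guest's
vcpu_is_preempted() implementation; roughly as sketched below,
assuming the per-CPU steal_time area that kvm_guest.c already
registers with the host via MSR_KVM_STEAL_TIME: )

	/*
	 * Guest-side sketch: a vCPU counts as preempted when the host
	 * has set the preempted byte in that CPU's steal time area.
	 */
	static bool kvm_vcpu_is_preempted(int cpu)
	{
		struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

		/* Plain byte read; the host updates it at sched-out/in. */
		return !!READ_ONCE(src->preempted);
	}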

Signed-off-by: Pan Xinhui <xinhui.pan@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Acked-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: David.Laight@xxxxxxxxxx
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: benh@xxxxxxxxxxxxxxxxxxx
Cc: boqun.feng@xxxxxxxxx
Cc: borntraeger@xxxxxxxxxx
Cc: bsingharora@xxxxxxxxx
Cc: dave@xxxxxxxxxxxx
Cc: jgross@xxxxxxxx
Cc: kernellwp@xxxxxxxxx
Cc: konrad.wilk@xxxxxxxxxx
Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
Cc: mpe@xxxxxxxxxxxxxx
Cc: paulmck@xxxxxxxxxxxxxxxxxx
Cc: paulus@xxxxxxxxx
Cc: rkrcmar@xxxxxxxxxx
Cc: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: will.deacon@xxxxxxx
Cc: xen-devel-request@xxxxxxxxxxxxxxxxxxxx
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
Link: http://lkml.kernel.org/r/1478077718-37424-9-git-send-email-xinhui.pan@xxxxxxxxxxxxxxxxxx
[ Typo fixes. ]
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 arch/x86/include/uapi/asm/kvm_para.h |  4 +++-
 arch/x86/kvm/x86.c                   | 16 ++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 94dc8ca..1421a65 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -45,7 +45,9 @@ struct kvm_steal_time {
 	__u64 steal;
 	__u32 version;
 	__u32 flags;
-	__u32 pad[12];
+	__u8  preempted;
+	__u8  u8_pad[3];
+	__u32 pad[11];
 };
 
 #define KVM_STEAL_ALIGNMENT_BITS 5
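
( Aside, not part of the patch: the new fields replace exactly the
first word of the old pad[12] -- 1 + 3 + 11*4 == 12*4 == 48 bytes --
so the size and all existing field offsets of this guest/host ABI
structure are unchanged. A build-time check, sketched here with C11
_Static_assert; kernel code of this era would use BUILD_BUG_ON(): )

	/* steal (8) + version (4) + flags (4) + 48 bytes == 64 */
	_Static_assert(sizeof(struct kvm_steal_time) == 64,
		       "kvm_steal_time is guest/host ABI");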
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 04c5d96..59c2d6f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2071,6 +2071,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
 		return;
 
+	vcpu->arch.st.steal.preempted = 0;
+
 	if (vcpu->arch.st.steal.version & 1)
 		vcpu->arch.st.steal.version += 1;  /* first time write, random junk */

@@ -2826,8 +2828,22 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
 }
 
+static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+{
+	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+		return;
+
+	vcpu->arch.st.steal.preempted = 1;
+
+	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
+			&vcpu->arch.st.steal.preempted,
+			offsetof(struct kvm_steal_time, preempted),
+			sizeof(vcpu->arch.st.steal.preempted));
+}
+
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	kvm_steal_time_set_preempted(vcpu);
 	kvm_x86_ops->vcpu_put(vcpu);
 	kvm_put_guest_fpu(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
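
( For reference, not part of the patch: the resulting lifecycle of
the flag, given the two x86.c hunks above and the KVM_REQ_STEAL_UPDATE
request that kvm_arch_vcpu_load() already raises: )

	/*
	 * kvm_arch_vcpu_put()
	 *   -> kvm_steal_time_set_preempted()     steal.preempted = 1
	 *      (written straight into guest memory via the cached gfn)
	 *
	 * ... vCPU is scheduled out; guest spinners can now observe
	 *     vcpu_is_preempted() == true ...
	 *
	 * kvm_arch_vcpu_load()
	 *   -> kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu)
	 * vcpu_enter_guest()
	 *   -> record_steal_time()                steal.preempted = 0
	 */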