[PATCH v3 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
From: Waiman Long
Date: Wed Feb 15 2017 - 13:32:25 EST
v2->v3:
- Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted()
in assembly as suggested by PeterZ.
- Add a new patch to change the vcpu_is_preempted() argument type to long
to ease the writing of the assembly code (see the illustrative snippet
after this changelog).
v1->v2:
- Rerun the fio test on a different system, both on bare metal and in a
KVM guest. Both sockets were utilized in this test.
- The commit log was updated with new performance numbers, but the
patch wasn't changed.
- Drop patch 2.
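
(Illustrative aside, not part of the patches themselves: under the x86-64
SysV calling convention the cpu number is passed in %rdi, but with an "int"
parameter only the low 32 bits (%edi) are guaranteed meaningful, so a
hand-written assembly body would first have to extend the register before
using it as a full-width array index. With a "long" parameter the whole
register is the argument. The two hypothetical functions below exist only
to make that difference visible in compiler output; the names are made up.)

#include <stdbool.h>
#include <stdint.h>

extern uint64_t per_cpu_base[];  /* stand-in for the kernel's __per_cpu_offset[] */

/* int argument: the callee cannot rely on the upper half of %rdi, so the
 * index has to be sign-extended (movslq %edi,%rdi) before the load. */
bool preempted_int(int cpu)
{
        return per_cpu_base[cpu] != 0;
}

/* long argument: the full 64-bit %rdi is the argument and can be used as
 * the scaled index directly, which is what an assembly stub relies on. */
bool preempted_long(long cpu)
{
        return per_cpu_base[cpu] != 0;
}

Compiling both with gcc -O2 and comparing the output shows the extra
sign-extension instruction in the int variant and none in the long variant.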
It was found that the overhead of the callee-save vcpu_is_preempted() call
can have a measurable impact on system performance in a VM guest, especially
an x86-64 guest. This patch set reduces that overhead by replacing the C
__kvm_vcpu_is_preempted() function with an optimized
__raw_callee_save___kvm_vcpu_is_preempted() written in assembly.
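
To give a feel for patch 2, below is a rough sketch of the kind of
hand-written stub involved (illustrative only; see the actual patch for the
real implementation). It assumes the stub tests the per-cpu
steal_time.preempted byte, with KVM_STEAL_TIME_preempted taken to be an
asm-offsets constant for that field and __stringify() coming from
<linux/stringify.h>. Because the stub clobbers only %rax, the roughly eight
64-bit register saves/restores that the generic callee-save thunk would do
around the C function go away.

/*
 * Sketch only.  The cpu number arrives in %rdi (a long, thanks to patch 1)
 * and indexes __per_cpu_offset[] to find that CPU's per-cpu base; the
 * steal_time.preempted byte is then tested and the result returned in %al.
 */
asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq __per_cpu_offset(,%rdi,8), %rax;"
"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
"setne %al;"
"ret;"
".popsection");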
Waiman Long (2):
x86/paravirt: Change vcpu_is_preempted() arg type to long
x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
arch/x86/include/asm/paravirt.h | 2 +-
arch/x86/include/asm/qspinlock.h | 2 +-
arch/x86/kernel/kvm.c | 30 +++++++++++++++++++++++++++++-
arch/x86/kernel/paravirt-spinlocks.c | 2 +-
4 files changed, 32 insertions(+), 4 deletions(-)
--
1.8.3.1