Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU

From: Davidlohr Bueso
Date: Mon Jul 23 2018 - 00:50:43 EST


On Sun, 22 Jul 2018, Davidlohr Bueso wrote:

On Mon, 23 Jul 2018, Wanpeng Li wrote:

On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@xxxxxxxxxx> wrote:

On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
On Thu, 19 Jul 2018, Waiman Long wrote:

On a VM with only 1 vCPU, the locking fast paths will always be
successful. In this case, there is no need to use the PV qspinlock
code which has higher overhead on the unlock side than the native
qspinlock code.
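
For context, the unlock-side overhead comes from the PV unlock path having
to check whether a queued waiter was put to sleep and needs a kick, on top
of the pv_ops call indirection on x86. A rough sketch of the two unlock
paths (simplified from include/asm-generic/qspinlock.h and
kernel/locking/qspinlock_paravirt.h; exact code varies by kernel version):

/* Native unlock: a single release store of the locked byte. */
static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	smp_store_release(&lock->locked, 0);
}

/* PV unlock: must detect a sleeping waiter and kick its vCPU. */
__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
{
	u8 locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);

	if (likely(locked == _Q_LOCKED_VAL))
		return;

	/* A waiter went to sleep; find and kick it. */
	__pv_queued_spin_unlock_slowpath(lock, locked);
}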

The xen_pvspin veriable is also turned off in this 1 vCPU case to

s/veriable/variable/

eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
which is run after xen_init_spinlocks().
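
The Xen-side change presumably amounts to something like the following in
xen_init_spinlocks() in arch/x86/xen/spinlock.c (a sketch based on the
commit message above, not the literal patch):

void __init xen_init_spinlocks(void)
{
	/* No need for the pvqspinlock code if there is only 1 vCPU. */
	if (num_possible_cpus() == 1)
		xen_pvspin = false;

	if (!xen_pvspin) {
		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
		static_branch_disable(&virt_spin_lock_key);
		return;
	}
	printk(KERN_DEBUG "xen: PV spinlocks enabled\n");
	/* ... pv_lock_ops / lock hash setup follows ... */
}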

Wouldn't kvm also want this?

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a37bda38d205..95aceb692010 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
{
native_smp_prepare_cpus(max_cpus);
- if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+ if (num_possible_cpus() == 1 ||
+ kvm_para_has_hint(KVM_HINTS_REALTIME))
static_branch_disable(&virt_spin_lock_key);
}
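
For reference, virt_spin_lock_key gates the unfair test-and-set fallback
that a guest uses instead of the fair queued slowpath when running on a
hypervisor without PV spinlock support; roughly, from
arch/x86/include/asm/qspinlock.h of that era (simplified):

static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_branch_likely(&virt_spin_lock_key))
		return false;

	/*
	 * Without PV spinlock support, fall back to a test-and-set
	 * spinlock, because fair queued locks behave badly under
	 * lock holder preemption.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}

Disabling the key makes virt_spin_lock() return false immediately, so the
guest falls through to the regular queued slowpath.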

That doesn't really matter as the slowpath will never get executed in
the 1 vCPU case.
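
The fast path here is the single cmpxchg in queued_spin_lock(); with one
vCPU, and preemption disabled while a spinlock is held, the cmpxchg can
never observe the lock taken, so the slowpath is unreachable. Roughly,
from include/asm-generic/qspinlock.h around that time (simplified):

static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
	u32 val;

	/* Uncontended case: 0 -> locked in one atomic op. */
	val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
	if (likely(val == 0))
		return;

	/* Only reached when another CPU holds or queues on the lock. */
	queued_spin_lock_slowpath(lock, val);
}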

How does this differ from xen, then? I mean, the same principle applies.


So this is not needed in the kvm tree?
https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02

Hmm, I would think that my patch would be more appropriate, as it actually
does what the comment says.

Actually yes, both would be needed, but also disabling the
virt_spin_lock_key would be more robust imo.

Thanks,
Davidlohr