[PATCH 3/3] locking/osq: Drop the overload of osq_lock()
From: Pan Xinhui
Date: Mon Jun 27 2016 - 09:43:25 EST
An over-committed guest with more vCPUs than pCPUs suffers heavy overload
in osq_lock().

This is because vCPU A holds the osq lock and yields out, while vCPU B
spins waiting for the per-cpu node->locked to be set. IOW, vCPU B waits
for vCPU A to run again and unlock the osq lock. Such spinning is
meaningless.

So let's use vcpu_is_preempted() to detect whether we need to stop spinning.
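
Note: as the comment added below states, on architectures without support
for the vcpu preempted check, vcpu_is_preempted() is a macro defined as
false, so the new test compiles away to the old need_resched()-only
behaviour. A minimal sketch of what such a fallback looks like (the exact
definition site is an assumption here, not part of this patch):

    /* Fallback when the arch provides no vCPU preemption information:
     * a check that always reports "not preempted" leaves the osq_lock()
     * spin loop behaviour unchanged.
     */
    #ifndef vcpu_is_preempted
    #define vcpu_is_preempted(cpu)	false
    #endif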
test case:
perf record -a perf bench sched messaging -g 400 -p && perf report
before patch:
18.09% sched-messaging [kernel.vmlinux] [k] osq_lock
12.28% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
5.27% sched-messaging [kernel.vmlinux] [k] mutex_unlock
3.89% sched-messaging [kernel.vmlinux] [k] wait_consider_task
3.64% sched-messaging [kernel.vmlinux] [k] _raw_write_lock_irq
3.41% sched-messaging [kernel.vmlinux] [k] mutex_spin_on_owner.is
2.49% sched-messaging [kernel.vmlinux] [k] system_call
after patch:
20.68% sched-messaging [kernel.vmlinux] [k] mutex_spin_on_owner
8.45% sched-messaging [kernel.vmlinux] [k] mutex_unlock
4.12% sched-messaging [kernel.vmlinux] [k] system_call
3.01% sched-messaging [kernel.vmlinux] [k] system_call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Signed-off-by: Pan Xinhui <xinhui.pan@xxxxxxxxxxxxxxxxxx>
---
kernel/locking/osq_lock.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..9e86f0b 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -21,6 +21,11 @@ static inline int encode_cpu(int cpu_nr)
return cpu_nr + 1;
}
+static inline int node_cpu(struct optimistic_spin_node *node)
+{
+ return node->cpu - 1;
+}
+
static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
{
int cpu_nr = encoded_cpu_val - 1;
@@ -118,8 +123,17 @@ bool osq_lock(struct optimistic_spin_queue *lock)
while (!READ_ONCE(node->locked)) {
/*
* If we need to reschedule bail... so we can block.
+ * An over-committed guest with more vCPUs than pCPUs
+ * might fall into this loop and cause heavy overload.
+ * This is because vCPU A (prev) holds the osq lock and yields
+ * out, while vCPU B (node) waits for ->locked to be set; IOW,
+ * it waits until vCPU A runs and unlocks the osq lock. Such
+ * spinning is meaningless, so use vcpu_is_preempted() to
+ * detect that case. If the arch does not support the vcpu
+ * preempted check, vcpu_is_preempted() is a macro defined as false.
*/
- if (need_resched())
+ if (need_resched() ||
+ vcpu_is_preempted(node_cpu(node->prev)))
goto unqueue;
cpu_relax_lowlatency();
--
2.4.11