Re: [PATCH tip/locking/core v9 4/6] locking/pvqspinlock: Collect slowpath lock statistics

From: Waiman Long
Date: Thu Nov 05 2015 - 11:29:38 EST

On 11/02/2015 11:40 AM, Peter Zijlstra wrote:
> On Fri, Oct 30, 2015 at 07:26:35PM -0400, Waiman Long wrote:
>> This patch enables the accumulation of kicking and waiting related
>> PV qspinlock statistics when the new QUEUED_LOCK_STAT configuration
>> option is selected. It also enables the collection of data which
>> enables us to calculate the kicking and wakeup latencies, which have
>> a heavy dependency on the CPUs being used.
>>
>> The statistical counters are per-cpu variables to minimize the
>> performance overhead of updating them. These counters are exported
>> via the sysfs filesystem under the /sys/kernel/qlockstat directory.
>> When the corresponding sysfs files are read, summation and computation
>> of the required data are performed.
>
> Why did you switch to sysfs? You can create custom debugfs files too.

I was not aware of that capability. So you mean using debugfs_create_file() with a custom file_operations structure, right? That doesn't seem easier than using sysfs, but I can switch over if you think debugfs is the better place for these files.
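For reference, a custom debugfs file along the lines Peter suggests would look roughly like the sketch below (not compiled here; the directory name, file name, and compute_avg_kick_latency() helper are hypothetical stand-ins, but debugfs_create_file(), single_open(), and the seq_file helpers are the standard kernel API):

```c
#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int qstat_kick_latency_show(struct seq_file *m, void *v)
{
	/* Sum the per-cpu counters and print the average kick latency.
	 * compute_avg_kick_latency() is a hypothetical helper.
	 */
	seq_printf(m, "%llu\n", compute_avg_kick_latency());
	return 0;
}

static int qstat_kick_latency_open(struct inode *inode, struct file *file)
{
	return single_open(file, qstat_kick_latency_show, NULL);
}

static const struct file_operations qstat_kick_latency_fops = {
	.open		= qstat_kick_latency_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init qstat_init(void)
{
	/* Everything under debugfs lives in one private directory. */
	struct dentry *d = debugfs_create_dir("qlockstat", NULL);

	debugfs_create_file("kick_latency", 0400, d, NULL,
			    &qstat_kick_latency_fops);
	return 0;
}
```

The summation then happens lazily at read time, exactly as with the sysfs show() method, so the per-cpu update fast path is unaffected by the choice of filesystem.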

>> @@ -259,7 +275,7 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>>  	if (READ_ONCE(pn->state) == vcpu_hashed)
>>  		lp = (struct qspinlock **)1;
>>
>> -	for (;;) {
>> +	for (;; waitcnt++) {
>>  		for (loop = SPIN_THRESHOLD; loop; loop--) {
>>  			if (!READ_ONCE(l->locked))
>
> Did you check that goes away when !STAT ?

Yes, the increment code goes away when !STAT. I have added a comment explaining that.
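The usual way this works is that the stat hook is a macro which expands to nothing when the config option is off, so the compiler emits no code for it. A minimal userspace sketch of the pattern (qstat_inc() and the QUEUED_LOCK_STAT switch here are illustrative stand-ins, not the patch's actual definitions):

```c
#include <assert.h>

/* Hypothetical stand-in for CONFIG_QUEUED_LOCK_STAT; flip to 0 to see
 * the increment compile away entirely.
 */
#define QUEUED_LOCK_STAT 1

#if QUEUED_LOCK_STAT
#define qstat_inc(cnt)	((cnt)++)		/* counts when stats are on */
#else
#define qstat_inc(cnt)	do { } while (0)	/* expands to nothing       */
#endif

/* Mimics the shape of the wait loop: bump waitcnt once per outer round. */
int count_wait_rounds(int rounds)
{
	int waitcnt = 0;
	int i;

	for (i = 0; i < rounds; i++)
		qstat_inc(waitcnt);	/* no code generated when !STAT */
	return waitcnt;
}
```

With the switch off, qstat_inc() expands to an empty statement and the optimizer drops the loop body entirely, which is why the `for (;; waitcnt++)` form costs nothing in a !STAT build.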

>> + * Return the average kick latency (ns) = pv_latency_kick/pv_kick_unlock
>> + */
>> +static ssize_t
>> +kick_latency_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
>> +{
>> +	int cpu;
>> +	u64 latencies = 0, kicks = 0;
>> +
>> +	for_each_online_cpu(cpu) {
>
> I think you need for_each_possible_cpu(), otherwise the results will
> change with hotplug operations.

Right, I will make the change.
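The corrected summation would look roughly like this sketch (not compiled here; the per-cpu variable names follow the counters mentioned in the hunk above, but the helper name is hypothetical). Iterating over possible rather than online CPUs means counts accumulated on a CPU that has since been offlined are still included, so the totals are stable across hotplug:

```c
static DEFINE_PER_CPU(u64, pv_latency_kick);
static DEFINE_PER_CPU(u64, pv_kick_unlock);

static u64 avg_kick_latency(void)
{
	u64 latencies = 0, kicks = 0;
	int cpu;

	for_each_possible_cpu(cpu) {	/* not for_each_online_cpu() */
		latencies += per_cpu(pv_latency_kick, cpu);
		kicks     += per_cpu(pv_kick_unlock, cpu);
	}
	/* div64_u64() keeps the 64-bit division safe on 32-bit kernels */
	return kicks ? div64_u64(latencies, kicks) : 0;
}
```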
