Re: [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest

From: Waiman Long
Date: Mon May 19 2014 - 16:30:35 EST


On 05/08/2014 03:12 PM, Peter Zijlstra wrote:
On Wed, May 07, 2014 at 11:01:38AM -0400, Waiman Long wrote:


No, we want the unfair thing for VIRT, not PARAVIRT.


Yes, you are right. I will change that to VIRT.
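
The enabling side would then be keyed off the hypervisor feature bit rather than anything paravirt-specific. A rough sketch of what I have in mind (key name and placement still to be finalized in the respin):

#include <linux/jump_label.h>
#include <linux/init.h>
#include <linux/export.h>
#include <asm/cpufeature.h>

struct static_key virt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
EXPORT_SYMBOL(virt_unfairlocks_enabled);

/*
 * Flip the static key at early boot when running as a guest, so the
 * unfair (lock stealing) path is only ever taken under a hypervisor.
 */
static __init int virt_unfairlocks_init_jump(void)
{
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		static_key_slow_inc(&virt_unfairlocks_enabled);

	return 0;
}
early_initcall(virt_unfairlocks_init_jump);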

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 9e7659e..10e87e1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;

+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+	if (static_key_false(&paravirt_unfairlocks_enabled))
+		/*
+		 * Need to use atomic operation to get the lock when
+		 * lock stealing can happen.
+		 */
+		return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
That's missing {}.

It is a single statement, which doesn't need braces according to the kernel coding style. I could move the comment up a bit to make it easier to read.

+#endif

 	barrier();
 	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
 	barrier();
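
(Side note for anyone reviewing just this hunk: the cmpxchg() on l->locked works because the series overlays a byte-granular view on top of the 32-bit lock word. Roughly, ignoring the pending/tail fields, the layout looks like this; the real definition in the series carries the full set of fields:)

struct __qspinlock {
	union {
		atomic_t	val;		/* whole 32-bit lock word */
#ifdef __LITTLE_ENDIAN
		u8		locked;		/* bits 0-7: _Q_LOCKED_VAL */
#else
		struct {
			u8	__reserved[3];
			u8	locked;		/* bits 0-7 sit in the last byte */
		};
#endif
	};
};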

But no, what you want is:

static __always_inline bool virt_lock(struct qspinlock *lock)
{
#ifdef CONFIG_VIRT_MUCK
	if (static_key_false(&virt_unfairlocks_enabled)) {
		while (!queue_spin_trylock(lock))
			cpu_relax();

		return true;
	}
#endif
	return false;
}


void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	if (virt_lock(lock))
		return;

	...
}

This is a possible way of doing it. I can restructure the patch series to do it that way and simplify it. Hopefully that will speed up the review process and get it done more quickly.
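
For reference, the queue_spin_trylock() that the unfair spin loop keeps retrying is just the cmpxchg fast path; roughly (modulo the exact form in the generic header):

static __always_inline int queue_spin_trylock(struct qspinlock *lock)
{
	/*
	 * Only attempt the atomic op when the lock word reads as free;
	 * in a guest this is what lets a spinning CPU steal the lock
	 * ahead of the queued waiters.
	 */
	if (!atomic_read(&lock->val) &&
	    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
		return 1;
	return 0;
}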

-Longman