On Fri, 2015-04-24 at 13:54 -0400, Waiman Long wrote:
> This patch also checks one more time in __rwsem_do_wake() to see if
> the rwsem was stolen just before doing the expensive wakeup operation,
> which will be unnecessary if the lock was stolen.

It strikes me that this should be another patch, as the optimization is
independent of the wait_lock (comments below).
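
For context, the window the changelog refers to exists because the owner
field and the counter are not updated together; roughly (current mainline,
quoted from memory -- treat it as a sketch, not the patch itself):

	/* kernel/locking/rwsem.c: up_write() clears the owner field before
	 * releasing the count, and a stealing writer sets it again only
	 * after winning the count, so the two can briefly disagree.
	 */
	void up_write(struct rw_semaphore *sem)
	{
		rwsem_release(&sem->dep_map, 1, _RET_IP_);

		rwsem_clear_owner(sem);
		__up_write(sem);	/* may end up in __rwsem_do_wake() */
	}

	void __sched down_write(struct rw_semaphore *sem)
	{
		might_sleep();
		rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);

		LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
		rwsem_set_owner(sem);	/* owner becomes non-NULL only here */
	}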
[...]
> +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER

Could you please reuse the CONFIG_RWSEM_SPIN_ON_OWNER ifdeffery we
already have? Just add these where we define rwsem_spin_on_owner().
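
Something along these lines, i.e. hang the new helper off the existing
block in kernel/locking/rwsem-xadd.c next to rwsem_spin_on_owner(); the
helper body below is only my guess (its definition is snipped above), the
point is the placement:

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/* ... existing rwsem_spin_on_owner(), rwsem_optimistic_spin(), etc. ... */

static inline bool rwsem_has_active_writer(struct rw_semaphore *sem)
{
	return READ_ONCE(sem->owner) != NULL;
}
#else
static inline bool rwsem_has_active_writer(struct rw_semaphore *sem)
{
	return false;
}
#endif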
[...]
>  /*
>   * handle the lock release when processes blocked on it that can now run
>   * - if we come here from up_xxxx(), then:
> @@ -125,6 +154,14 @@ __rwsem_do_wake(struct rw_semaphore *sem, enum rwsem_wake_type wake_type)
>  	struct list_head *next;
>  	long oldcount, woken, loop, adjustment;
>  
> +	/*
> +	 * up_write() cleared the owner field before calling this function.
> +	 * If that field is now set, a writer must have stolen the lock and
> +	 * the wakeup operation should be aborted.
> +	 */
> +	if (rwsem_has_active_writer(sem))
> +		goto out;

We currently allow small races between the rwsem owner and counter
checks, and here __rwsem_do_wake() would be cut short by checking the
former -- while lock stealing is done with the counter as well. Please
see below how we back out of such cases, as it is very much considered
when granting the next reader. So nack to this as is, sorry.
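
For reference, the grant path I mean currently looks roughly like this
(quoting __rwsem_do_wake() from memory, so double-check against the tree);
the counter update itself tells us the lock was stolen and we undo the
reader grant, no owner-field check needed:

	adjustment = 0;
	if (wake_type != RWSEM_WAKE_READ_OWNED) {
		adjustment = RWSEM_ACTIVE_READ_BIAS;
 try_reader_grant:
		oldcount = rwsem_atomic_update(adjustment, sem) - adjustment;
		if (unlikely(oldcount < RWSEM_WAITING_BIAS)) {
			/* A writer stole the lock. Undo our reader grant. */
			if (rwsem_atomic_update(-adjustment, sem) &
						RWSEM_ACTIVE_MASK)
				goto out;
			/* Last active locker left. Retry waking readers. */
			goto try_reader_grant;
		}
	}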