On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
> Even though only the writers can perform optimistic spinning, there
> is still a chance that readers may take the lock before a spinning
> writer can get it. In that case, the owner field will be NULL and the
> spinning writer can spin indefinitely until its time quantum expires
> when some lock owning readers are not running.

Right, now I understand where you were coming from in patch 3/7 ;)
> This patch tries to handle this special case by:
>  1) setting the owner field to a special value RWSEM_READ_OWNED
>     to indicate that the current or last owner is a reader.
>  2) seting a threshold on how many times (currently 100) spinning will
      ^^ setting
>     be done with active readers before giving up as there is no easy
>     way to determine if all of them are currently running.
>
> By doing so, it tries to strike a balance between giving up too early
> and losing potential performance gain and wasting too many precious
> CPU cycles when some lock owning readers are not running.

That's exactly why these kinds of magic things aren't a good thing,
much less in locking. And the alternatives are much more involved,
creating more overhead, which can make the whole thing pretty much
useless. Nor does the number of times we have tried to spin strike me
as the correct metric for deciding when to give up; something cycle or
time based would make more sense.
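To sketch what I mean (completely untested, and the 50us budget below
is a number I just made up, every bit as arbitrary as the 100 spins,
but at least independent of how fast one pass through the loop runs):

#include <linux/sched.h>	/* local_clock() */

/* hypothetical time budget for spinning on a reader-owned lock */
#define RWSEM_READ_SPIN_BUDGET_NS	50000ULL

static inline bool rwsem_read_spin_expired(u64 deadline)
{
	/* local_clock() is a cheap per-cpu nanosecond timestamp */
	return local_clock() > deadline;
}

with rwsem_optimistic_spin() computing the deadline once before the
loop (deadline = local_clock() + RWSEM_READ_SPIN_BUDGET_NS) and then
breaking out on rwsem_read_spin_expired(deadline) instead of bumping
a counter.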
> [...]
>  #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
> +/*
> + * The owner field is set to RWSEM_READ_OWNED if the last owner(s) are
> + * readers. It is not reset until a writer takes over and sets it to
> + * its task structure pointer, or NULL when it frees the lock. So a
> + * value of RWSEM_READ_OWNED doesn't mean the lock currently has
> + * active readers.
> + */
> +#define RWSEM_READ_OWNED	((struct task_struct *)-1)
>  #define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL
>  #else
>  #define __RWSEM_OPT_INIT(lockname)

Looks rather weird... I dislike this for the same reasons such magic
owner values weren't welcomed in spinlocks.
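For what it's worth, I assume the reader side tags the lock with
something like the below (the helper name is my own, not from the
patch), which is also why the field can be left dangling at
RWSEM_READ_OWNED long after the last reader is gone:

static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
{
	/*
	 * Skip the store (and the cacheline dirtying) if another
	 * reader has already marked the lock as reader-owned.
	 */
	if (ACCESS_ONCE(sem->owner) != RWSEM_READ_OWNED)
		ACCESS_ONCE(sem->owner) = RWSEM_READ_OWNED;
}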
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 9f71a67..576d4cd 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -304,6 +304,11 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>  #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
>  /*
> + * Threshold for optimistic spinning on readers
> + */
> +#define RWSEM_READ_SPIN_THRESHOLD	100

We don't know how it can impact workloads that have not been tested.
> [...]
>  static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
>  {
>  	struct task_struct *owner;
>  	bool taken = false;
> +	int read_spincnt = 0;
>
>  	preempt_disable();
>
> @@ -397,8 +409,12 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
>  	while (true) {
>  		owner = ACCESS_ONCE(sem->owner);
> -		if (owner && !rwsem_spin_on_owner(sem, owner))
> +		if (owner == RWSEM_READ_OWNED) {
> +			if (++read_spincnt > RWSEM_READ_SPIN_THRESHOLD)
> +				break;

This is still a pretty fast path and is going to affect writers, so we
really want to keep it un-clobbered.
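If the reader check has to live in this loop at all, then at the very
least something like the below (again, just a sketch on top of your
diff) would keep the common writer-owned case out of its way:

		owner = ACCESS_ONCE(sem->owner);
		if (unlikely(owner == RWSEM_READ_OWNED)) {
			/* only the rare reader-owned case pays for it */
			if (++read_spincnt > RWSEM_READ_SPIN_THRESHOLD)
				break;
		} else if (owner && !rwsem_spin_on_owner(sem, owner))
			break;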
Thanks,
Davidlohr