Re: [PATCH 4/5] futex: Avoid taking hb lock if nothing to wakeup

From: Waiman Long
Date: Fri Nov 22 2013 - 23:06:28 EST


On 11/22/2013 08:25 PM, Linus Torvalds wrote:
On Fri, Nov 22, 2013 at 4:56 PM, Davidlohr Bueso <davidlohr@xxxxxx> wrote:
In futex_wake() there is clearly no point in taking the hb->lock if
we know beforehand that there are no tasks to be woken. This comes
at the smaller cost of doing some atomic operations to keep track of
the list's size.
Hmm. Why? Afaik, you only care about "empty or not". And if you don't
need the serialization from locking, then afaik you can just do a
"plist_head_empty()" without holding the lock.

NOTE!

The "list_empty()" function is very much designed to work even without
holding a lock (as long as the head itself exists reliably, of course)
BUT you have to then guarantee yourself that your algorithm doesn't
have any races wrt other CPU's adding an entry to the list at the same
time. Not holding a lock obviously means that you are not serialized
against that.. We've had problems with people doing

	if (!list_empty(waiters))
		wake_up_list(..)

because they wouldn't wake people up who just got added.

But considering that your atomic counter checking has the same lack of
serialization, at least the plist_head_empty() check shouldn't be any
worse than that counter thing.. And doesn't need any steenking atomic
ops or a new counter field.

Linus

Before the patch, a waker will wake up one or more waiters who called spin_lock() before the waker did. If we just use plist_head_empty(), we will miss waiters who have already called spin_lock() but have not yet acquired the lock or inserted themselves into the wait list. By adding an atomic counter for this check, we again have a single point that waiters pass (the counter increment): any waiter who has passed that point can be woken by a waker who checks the counter afterwards. This acts as a sort of serialization without taking the lock.

-Longman
--