Re: [PATCH 4/5] futex: Avoid taking hb lock if nothing to wakeup

From: Davidlohr Bueso
Date: Mon Nov 25 2013 - 13:55:33 EST


On Mon, 2013-11-25 at 18:32 +0100, Thomas Gleixner wrote:
> On Mon, 25 Nov 2013, Peter Zijlstra wrote:
> > On Mon, Nov 25, 2013 at 05:23:51PM +0100, Thomas Gleixner wrote:
> > > On Sat, 23 Nov 2013, Linus Torvalds wrote:
> > >
> > > > On Sat, Nov 23, 2013 at 5:16 AM, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > Now the question is why we queue the waiter _AFTER_ reading the user
> > > > > space value. The comment in the code is pretty nonsensical:
> > > > >
> > > > > * On the other hand, we insert q and release the hash-bucket only
> > > > > * after testing *uaddr. This guarantees that futex_wait() will NOT
> > > > > * absorb a wakeup if *uaddr does not match the desired values
> > > > > * while the syscall executes.
> > > > >
> > > > > There is no reason why we cannot queue _BEFORE_ reading the user space
> > > > > value. We just have to dequeue in all the error handling cases, but
> > > > > for the fast path it does not matter at all.
> > > > >
> > > > >      CPU 0                           CPU 1
> > > > >
> > > > >      val = *futex;
> > > > >      futex_wait(futex, val);
> > > > >
> > > > >      spin_lock(&hb->lock);
> > > > >
> > > > >      plist_add(hb, self);
> > > > >      smp_wmb();
> > > > >
> > > > >      uval = *futex;
> > > > >                                      *futex = newval;
> > > > >                                      futex_wake();
> > > > >
> > > > >                                      smp_rmb();
> > > > >                                      if (plist_empty(hb))
> > > > >                                              return;
> > > > >      ...
> > > >
> > > > This would seem to be a nicer approach indeed, without needing the
> > > > extra atomics.
> > >
> > > I went through the issue with Peter and he noticed that we need
> > > smp_mb() in both places. That's what we have right now with the
> > > spin_lock(), and it is required because we need to guarantee that:
> > >
> > > The waiter observes the change to the uaddr value after it added
> > > itself to the plist
> > >
> > > The waker observes plist not empty if the change to uaddr was made
> > > after the waiter checked the value.
> > >
> > >
> > >     waiter                 |    waker
> > >
> > >     write(plist)           |    write(futex_uaddr)
> > >     mb()                   |    mb()
> > >     read(futex_uaddr)      |    read(plist)
> > >
> > > The mb() implied by spin_lock() on the waiter side does not help here
> > > because it happens before the write(plist) and not after it.
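
For reference, that pairing in (pseudo) C -- a minimal sketch assuming we
queue before reading the user space value. Names (hb, q, uaddr) follow
kernel/futex.c, but this is just the idea, not the actual patch:

        /* waiter (futex_wait path) */
        spin_lock(&hb->lock);
        plist_add(&q->list, &hb->chain);        /* write(plist) */
        smp_mb();               /* pairs with the waker's smp_mb() below */
        if (get_futex_value_locked(&uval, uaddr) || uval != val) {
                /* value changed (or fault): dequeue and bail */
                plist_del(&q->list, &hb->chain);
                spin_unlock(&hb->lock);
                return -EWOULDBLOCK;
        }

        /* waker (futex_wake path), after user space stored newval */
        smp_mb();               /* orders the *uaddr store vs. the plist read */
        if (plist_head_empty(&hb->chain))
                return 0;       /* no waiter can be queued, skip hb->lock */
        spin_lock(&hb->lock);
        ...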
> >
> > Ah, note that spin_lock() is only an smp_mb() on x86; in general it is
> > an ACQUIRE barrier, which is weaker than a full mb and will not suffice
> > in this case even if it were in the right place.
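
In other words, on the waiter side the ACQUIRE only keeps the critical
section after the lock acquisition; it does not order the plist store
against the later uaddr read. Illustrated (sketch only):

        spin_lock(&hb->lock);   /* ACQUIRE: nothing below moves above it */
        plist_add(&q->list, &hb->chain);
        /*
         * Without a full smp_mb() here, the CPU may satisfy the *uaddr
         * load before the plist store is visible to the waker, and the
         * two sides can miss each other.
         */
        uval = *uaddr;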
>
> So now the question is whether this lockless empty check optimization
> which seems to be quite nice on x86 for a particular workload will
> have any negative side effects on other architectures.
>
> If the smp_mb() is heavyweight, then it will hurt massively in the
> case where the hash bucket is not empty, because we pay the price for
> the smp_mb() for no gain.
>
> In that context it would also be helpful to measure the overhead on
> x86 for the !empty case.

Absolutely, I will add these comparisons. If it turns out that we do hurt
the !empty case, would the current patch, which uses atomic ops, still be
considered? We have made sure that none of the changes in this set affects
performance on other workloads or smaller systems.
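
For reference, the atomic ops variant boils down to a per-bucket waiter
count, so the waker can skip taking the lock when it reads zero. Roughly
(a sketch only; field names are illustrative):

        struct futex_hash_bucket {
                atomic_t waiters;       /* queued futex_q entries */
                spinlock_t lock;
                struct plist_head chain;
        };

        /* waiter, before queueing itself */
        atomic_inc(&hb->waiters);
        smp_mb__after_atomic_inc();     /* nop on x86, real barrier elsewhere */

        /* waker */
        if (!atomic_read(&hb->waiters))
                return 0;               /* nothing queued, skip hb->lock */
        spin_lock(&hb->lock);
        ...

        /* waiter, when it dequeues */
        atomic_dec(&hb->waiters);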

Thanks,
Davidlohr
