Re: [PATCH -v5][RFC]: mutex: implement adaptive spinning

From: Peter Zijlstra
Date: Wed Jan 07 2009 - 17:19:31 EST


On Wed, 2009-01-07 at 13:58 -0800, Linus Torvalds wrote:
>
> On Wed, 7 Jan 2009, Peter Zijlstra wrote:
> >
> > Do we really have to re-do all that code every loop?
>
> No, you're right, we can just look up the cpu once. Which makes Andrew's
> argument that "probe_kernel_address()" isn't in any hot path even more
> true.
>
> > Also, it would still need to do the funny:
> >
> > l_owner = ACCESS_ONCE(lock->owner)
> > if (l_owner && l_owner != thread)
> > break;
>
> Why? That would fall out of the
>
> if (lock->owner != thread)
> break;
>
> part. We don't actually care that it only happens once: this all has
> _known_ races, and the "cpu_relax()" is a barrier.
>
> And notice how the _caller_ handles the "owner == NULL" case by not even
> calling this, and looping over just the state in the lock itself. That was
> in the earlier emails. So this approach is actually pretty different from
> the case that depended on the whole spinlock thing.

Ah, so now you do loop on !owner; previously you insisted we'd go to
sleep on !owner. Yes, if we keep spinning on !owner that extra check is
indeed not needed.
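
So, to check I'm reading the intent right, spin_on_owner() would be
shaped something like the sketch below? Hand-waved, of course:
owner_running_on() stands in for whatever "is that thread still on
that cpu" test we end up with, and I'm assuming thread_info carries
the cpu here:

static void spin_on_owner(struct mutex *lock, struct thread_info *owner)
{
	/* Look the owner's cpu up once, outside the loop. */
	int cpu = owner->cpu;

	for (;;) {
		/*
		 * Known-racy by design: plain reads suffice, and
		 * cpu_relax() is a compiler barrier, so lock->owner
		 * gets re-read on every iteration.
		 */
		if (lock->owner != owner)
			break;

		/* Hand-waved helper: did the owner get scheduled out? */
		if (!owner_running_on(cpu, owner))
			break;

		cpu_relax();
	}
}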

> +#ifdef CONFIG_SMP
> +	/* Optimistic spinning.. */
> +	for (;;) {
> +		struct thread_info *owner;
> +		int oldval = atomic_read(&lock->count);
> +
> +		if (oldval <= MUTEX_SLEEPERS)
> +			break;
> +		if (oldval == 1) {
> +			oldval = atomic_cmpxchg(&lock->count, oldval, 0);
> +			if (oldval == 1) {
> +				lock->owner = task_thread_info(task);
> +				return 0;
> +			}
> +		} else {
> +			/* See who owns it, and spin on him if anybody */
> +			owner = lock->owner;
> +			if (owner)
> +				spin_on_owner(lock, owner);
> +		}
> +		cpu_relax();
> +	}
> +#endif

Hmm, still: wouldn't the spin_on_owner() loop and the loop above need
that need_resched() check you mentioned, so that they can fall into the
slow path and go to sleep?
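
That is, something like this (same sketch quality as above), with the
same need_resched() break inside spin_on_owner()'s own loop:

#ifdef CONFIG_SMP
	/* Optimistic spinning, but bail out once we ought to resched. */
	for (;;) {
		struct thread_info *owner;
		int oldval = atomic_read(&lock->count);

		if (oldval <= MUTEX_SLEEPERS)
			break;
		if (need_resched())
			break;	/* fall into the slow path and sleep */
		if (oldval == 1) {
			oldval = atomic_cmpxchg(&lock->count, oldval, 0);
			if (oldval == 1) {
				lock->owner = task_thread_info(task);
				return 0;
			}
		} else {
			/* See who owns it, and spin on him if anybody */
			owner = lock->owner;
			if (owner)
				spin_on_owner(lock, owner);
		}
		cpu_relax();
	}
#endif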
