Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
From: Paul E. McKenney
Date: Mon Nov 16 2015 - 11:44:40 EST
On Mon, Nov 16, 2015 at 04:24:53PM +0000, Will Deacon wrote:
> On Mon, Nov 16, 2015 at 05:04:45PM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 16, 2015 at 04:56:58PM +0100, Peter Zijlstra wrote:
> > > On Thu, Nov 12, 2015 at 10:21:39AM -0800, Linus Torvalds wrote:
> > > > Now, the point of spin_unlock_wait() (and "spin_is_locked()") should
> > > > generally be that you have some external ordering guarantee that
> > > > guarantees that the lock has been taken. For example, for the IPC
> > > > semaphores, we do either one of:
> > > >
> > > > (a) get large lock, then - once you hold that lock - wait for each small lock
> > > >
> > > > or
> > > >
> > > > (b) get small lock, then - once you hold that lock - check that the
> > > > large lock is unlocked
> > > >
> > > > and that's the case we should really worry about. The other uses of
> > > > spin_unlock_wait() should have similar "I have other reasons to know
> > > > I've seen that the lock was taken, or will never be taken after this
> > > > because XYZ".
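A rough sketch of those two orderings, with made-up struct and function
names rather than the actual ipc/sem.c code:

#include <linux/spinlock.h>

struct small {
	spinlock_t lock;
};

struct big {
	spinlock_t	large_lock;
	struct small	sem[64];
	int		nsems;
};

/* (a) take the large lock, then wait for each small lock to be released */
static void big_op(struct big *b)
{
	int i;

	spin_lock(&b->large_lock);
	for (i = 0; i < b->nsems; i++)
		spin_unlock_wait(&b->sem[i].lock);
	/* ... complex operation spanning several semaphores ... */
	spin_unlock(&b->large_lock);
}

/* (b) take one small lock, then check that the large lock is unlocked */
static void small_op(struct big *b, int i)
{
	spin_lock(&b->sem[i].lock);
	if (spin_is_locked(&b->large_lock)) {
		/* a big_op() is in flight; back off and take large_lock instead */
	}
	/* ... simple operation on a single semaphore ... */
	spin_unlock(&b->sem[i].lock);
}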
> > >
> > > I don't think this is true for the usage in do_exit(); we have no
> > > knowledge of whether pi_lock is taken or not. We just want to make
> > > sure that _if_ it were taken, we wait until it is released.
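From memory of the 4.3-era kernel/exit.c (trimmed, comment paraphrased),
that usage looks roughly like this:

	/*
	 * try_to_wake_up() may still hold ->pi_lock and set the task
	 * back to TASK_RUNNING; wait for it to drop the lock before
	 * marking ourselves dead.  We have no idea whether the lock
	 * is actually held right now.
	 */
	smp_mb();
	raw_spin_unlock_wait(&tsk->pi_lock);

	/* causes final put_task_struct() in finish_task_switch() */
	tsk->state = TASK_DEAD;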
> > And unless PPC would move to using RCsc locks with a SYNC in
> > spin_lock(), I don't think it makes sense to add
> > smp_mb__after_unlock_lock() to all tsk->pi_lock instances to fix this.
> > As that is far more expensive than flipping the exit path to do
> > spin_lock()+spin_unlock().
> ... or we upgrade spin_unlock_wait to a LOCK operation, which might be
> slightly cheaper than spin_lock()+spin_unlock().
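One way to read "upgrade to a LOCK operation", sketched on a made-up
test-and-set lock rather than any architecture's real implementation:
spin until the lock word is observed clear, then perform an atomic RMW
that leaves it clear, so the waiter gets the same ordering a locker
would:

#include <linux/atomic.h>

struct tas_lock {
	atomic_t val;		/* 0 == unlocked, 1 == locked */
};

static inline void tas_unlock_wait(struct tas_lock *lock)
{
	for (;;) {
		if (atomic_read(&lock->val) == 0 &&
		    atomic_cmpxchg(&lock->val, 0, 0) == 0)
			return;	/* RMW succeeded: full-barrier LOCK semantics */
		cpu_relax();
	}
}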
Or we supply a heavyweight version of spin_unlock_wait() that forces
the cache miss. But I bet that the difference in overhead between
spin_lock()+spin_unlock() and the heavyweight version would be down in
the noise.
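Purely as a sketch of what "forces the cache miss" could mean, reusing
the made-up tas_lock from above and hand-waving the ordering details:
a dummy atomic RMW pulls the lock's cache line in before the wait loop:

static inline void tas_unlock_wait_heavy(struct tas_lock *lock)
{
	atomic_add(0, &lock->val);	/* dummy RMW: forces the cache miss */
	while (atomic_read(&lock->val) != 0)
		cpu_relax();
	smp_rmb();			/* order the spin against later accesses */
}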