On Wed, Jun 15, 2016 at 09:56:59AM -0700, Davidlohr Bueso wrote:
> On Tue, 14 Jun 2016, Waiman Long wrote:
> > +++ b/kernel/locking/osq_lock.c
> > @@ -115,7 +115,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> >               * cmpxchg in an attempt to undo our queueing.
> >               */
> >
> > -	while (!READ_ONCE(node->locked)) {
> > +	while (!smp_load_acquire(&node->locked)) {
>
> Hmm this being a polling path, that barrier can get pretty expensive and
> last I checked it was unnecessary:

I think he'll go rely on it later on.
In any case, it's fairly simple to cure: just add
smp_acquire__after_ctrl_dep() at the end. If we bail because
need_resched() we don't need the acquire, I think.