Re: [PATCH v5 tip/core/locking 5/7] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

From: Paul E. McKenney
Date: Tue Dec 10 2013 - 12:15:37 EST


On Tue, Dec 10, 2013 at 05:44:37PM +0100, Oleg Nesterov wrote:
> On 12/09, Paul E. McKenney wrote:
> >
> > @@ -1626,7 +1626,10 @@ for each construct. These operations all imply certain barriers:
> > operation has completed.
> >
> > Memory operations issued before the LOCK may be completed after the LOCK
> > - operation has completed.
> > + operation has completed. An smp_mb__before_spinlock(), combined
> > + with a following LOCK, acts as an smp_wmb(). Note the "w",
> > + this is smp_wmb(), not smp_mb().
>
> Well, but smp_mb__before_spinlock() + LOCK is not a wmb... although it is
> not the full barrier either. It should guarantee that, say,
>
> CONDITION = true; // 1
>
> // try_to_wake_up
> smp_mb__before_spinlock();
> spin_lock(&p->pi_lock);
>
> if (!(p->state & state)) // 2
> return;
>
> can't race with set_current_state() + check(CONDITION); that is, 1 and 2
> above must not be reordered.
>
> But a LOAD issued before spin_lock() can leak into the critical section.
>
> Perhaps this should be clarified somehow, or perhaps it should actually
> imply mb (if combined with LOCK).

If we leave the implementation the same, does the following capture the
constraints?

Memory operations issued before the LOCK may be completed after
the LOCK operation has completed.  An smp_mb__before_spinlock(),
combined with a following LOCK, orders prior loads against
subsequent stores and prior stores against subsequent stores.
Note that this is weaker than smp_mb()!  The
smp_mb__before_spinlock() primitive is free on many architectures.
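
"Leave the implementation the same" here means keeping the generic fallback,
which, if I am reading include/linux/spinlock.h correctly, is just a write
barrier (architectures can override it with something stronger):

	/*
	 * Generic fallback (sketch).  Despite the name it need not be a
	 * full barrier; it only has to ensure that a STORE issued before
	 * the critical section cannot be reordered with LOADs and STOREs
	 * inside it, spin_lock() itself being only a one-way barrier.
	 */
	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif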

Thanx, Paul
