Re: Adding plain accesses and detecting data races in the LKMM

From: Alan Stern
Date: Thu Apr 18 2019 - 16:19:32 EST


On Thu, 18 Apr 2019, Paul E. McKenney wrote:

> > Are you saying that on x86, atomic_inc() acts as a full memory barrier
> > but not as a compiler barrier, and vice versa for
> > smp_mb__after_atomic()? Or that neither atomic_inc() nor
> > smp_mb__after_atomic() implements a full memory barrier?
> >
> > Either one seems like a very dangerous situation indeed.
>
> If I am following the macro-name breadcrumb trails correctly, x86's
> atomic_inc() does have a compiler barrier. But this is an accident
> of implementation -- from what I can see, it is not required to do so.
>
> So smp_mb__after_atomic() is only guaranteed to order the atomic_inc()
> before B, not A. To order A before B in the above example, an
> smp_mb__before_atomic() is also needed.

Are you certain?
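The question can be put concretely as a litmus test in the LKMM's own
format (a sketch: the variable names and the message-passing shape are
my assumptions, not taken from the earlier example in the thread).
Under the current fence rule, the WRITE_ONCE(*x, 1) is an M po-before
the RMW, so smp_mb__after_atomic() orders it before the write to y and
the "exists" clause is forbidden; under the weaker reading, only the
atomic_inc() itself would be so ordered:

```
C mp-after-atomic
(* Sketch only: does smp_mb__after_atomic() order the earlier
   plain write to x, or just the atomic_inc() itself? *)

{}

P0(int *x, int *y, atomic_t *v)
{
	WRITE_ONCE(*x, 1);
	atomic_inc(v);
	smp_mb__after_atomic();
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
```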

> But now that I look, LKMM looks to be stating a stronger guarantee:
>
> ([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
> ([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
> ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
> ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
>	fencerel(After-unlock-lock) ; [M])
>
> Maybe something like this?
>
> ([M] ; fencerel(Before-atomic) ; [RMW] ; fencerel(After-atomic) ; [M]) |
> ([M] ; fencerel(Before-atomic) ; [RMW]) |
> ([RMW] ; fencerel(After-atomic) ; [M]) |
> ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
> ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
>	fencerel(After-unlock-lock) ; [M])

The first line you wrote is redundant: any two accesses it relates are
already ordered transitively through the RMW event by the second and
third lines.

Aside from that, while this proposal may be correct and may express
what smp_mb__{before|after}_atomic() really are intended to do, it
contradicts Documentation/atomic_t.txt. That file says:

	These barriers provide a full smp_mb().

And of course, a full smp_mb() would order everything before it against
everything after it. If we update the model then we should also update
that file.

In addition, it's noteworthy that smp_mb__after_spinlock and
smp_mb__after_unlock_lock do not behave in this way.

Alan