Re: Current LKMM patch disposition
From: Joel Fernandes
Date: Sat Feb 18 2023 - 21:59:03 EST
On Sat, Feb 18, 2023 at 9:05 PM Andrea Parri <parri.andrea@xxxxxxxxx> wrote:
>
> > One additional feedback I wanted to mention, regarding this paragraph
> > under "WARNING":
> > ===========
> > The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
> > not perfect; and under some circumstances it is possible for the
> > compiler to undermine the memory model. Here is an example. Suppose
> > both branches of an "if" statement store the same value to the same
> > location:
> > r1 = READ_ONCE(x);
> > if (r1) {
> > WRITE_ONCE(y, 2);
> > ... /* do something */
> > } else {
> > WRITE_ONCE(y, 2);
> > ... /* do something else */
> > }
> > ===========
> >
> > I tried lots of different compilers with varying degrees of
> > optimization; in all cases I find that the conditional instruction
> > always appears in program order before the stores inside the body of
> > the conditional. So I am not sure whether this is really a valid
> > concern on current compilers; if not, could you provide an example of
> > a compiler and options that cause it?
>
> The compiler cannot change the order in which the load and the store
> appear in the program (these are "volatile accesses"); the concern is
> that (quoting from the .txt) it could "lift the stores out of the
> conditional", thus effectively destroying the control dependency between
> the load and the store (the load-store "reordering" could then be
> performed by the uarch, on certain archs). For example, compare:
>
> (for the C snippet)
>
> void func(int *x, int *y)
> {
>         int r1 = *(const volatile int *)x;
>
>         if (r1)
>                 *(volatile int *)y = 2;
>         else
>                 *(volatile int *)y = 2;
> }
>
> - arm64 gcc 11.3 -O1 gives:
>
> func:
>         ldr     w0, [x0]
>         cbz     w0, .L2
>         mov     w0, 2
>         str     w0, [x1]
> .L1:
>         ret
> .L2:
>         mov     w0, 2
>         str     w0, [x1]
>         b       .L1
>
> - OTOH, arm64 gcc 11.3 -O2 gives:
>
> func:
>         ldr     w0, [x0]
>         mov     w0, 2
>         str     w0, [x1]
>         ret
>
> - similarly, using arm64 clang 14.0.0 -O2,
>
> func:                                   // @func
>         mov     w8, #2
>         ldr     wzr, [x0]
>         str     w8, [x1]
>         ret
>
> I saw similar results using riscv, powerpc, x86 gcc & clang.
*Sigh*, yeah, and even if the conditional logic is not fully stripped
out as in your example, I still see arm64 perform the stores in
program order before ever checking the branch condition:
extern int x, y, z;

void prog(void)
{
        int r1 = *(const volatile int *)x;

        if (r1) {
                *(volatile int *)y = 1;
        } else {
                *(volatile int *)y = 1;
                *(volatile int *)z = 2;
        }
}
This becomes, with armv8 clang and -Os:
prog:
        adrp    x8, x
        ldrsw   x8, [x8, :lo12:x]
        adrp    x9, y
        mov     w10, #1
        ldr     w8, [x8]
        ldrsw   x9, [x9, :lo12:y]
        str     w10, [x9]
        cbz     w8, .LBB0_2             // LOL
        ret
.LBB0_2:
        adrp    x8, z
        ldrsw   x8, [x8, :lo12:z]
        mov     w9, #2
        str     w9, [x8]
        ret
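For completeness: the merge only happens because the two arms store the
same value; if the stores differ, the compiler has to keep the branch,
and the control dependency survives on the compiler side. A minimal
userspace sketch (READ_ONCE()/WRITE_ONCE() approximated with volatile
casts; the function names here are illustrative, not from the kernel
tree):

```c
/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

int x, y;

/* Both arms store the same value: the compiler may merge the stores
 * and hoist them above the branch, destroying the control dependency. */
void merged(void)
{
        int r1 = READ_ONCE(x);

        if (r1)
                WRITE_ONCE(y, 2);
        else
                WRITE_ONCE(y, 2);
}

/* Arms store different values: the store cannot be emitted before the
 * load is examined, so the branch (and the dependency) must stay. */
void kept(void)
{
        int r1 = READ_ONCE(x);

        if (r1)
                WRITE_ONCE(y, 2);
        else
                WRITE_ONCE(y, 3);
}
```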
Anyway, sorry for the noise... I believe I did not have the right set
of compiler options yesterday.
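For anyone hitting this in real code: when the ordering actually
matters, the usual advice (see Documentation/memory-barriers.txt) is
not to rely on the control dependency in the identical-stores case but
to use a release store such as smp_store_release() in each leg. A
hedged userspace sketch, with the GCC/Clang __atomic builtins standing
in for the kernel primitives (variable names are made up):

```c
int a, b;

/* Even if the compiler merges the two identical stores and drops the
 * branch, a release store still keeps the earlier load of 'a' ordered
 * before the store to 'b': release semantics forbid prior accesses
 * from being reordered after the store. */
void func(void)
{
        int q = __atomic_load_n(&a, __ATOMIC_RELAXED);

        if (q)
                __atomic_store_n(&b, 1, __ATOMIC_RELEASE);
        else
                __atomic_store_n(&b, 1, __ATOMIC_RELEASE);
}
```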
Thanks!
- Joel