Re: Current LKMM patch disposition

From: Andrea Parri
Date: Sat Feb 18 2023 - 21:05:16 EST


> One additional feedback I wanted to mention, regarding this paragraph
> under "WARNING":
> ===========
> The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
> not perfect; and under some circumstances it is possible for the
> compiler to undermine the memory model. Here is an example. Suppose
> both branches of an "if" statement store the same value to the same
> location:
> 	r1 = READ_ONCE(x);
> 	if (r1) {
> 		WRITE_ONCE(y, 2);
> 		... /* do something */
> 	} else {
> 		WRITE_ONCE(y, 2);
> 		... /* do something else */
> 	}
> ===========
>
> I tried lots of different compilers with varying degrees of
> optimization; in all cases I find that the conditional instruction
> always appears in program order before the stores inside the body of
> the conditional. So I am not sure whether this is really a valid
> concern with current compilers; if not, could you provide an example
> of a compiler and options that cause it?

The compiler cannot change the order in which the load and the store
appear in the program (these are "volatile accesses"); the concern is
that (quoting from the .txt) it "could lift the stores out of the
conditional", thus effectively destroying the control dependency between
the load and the store (the load-store "reordering" could then be
performed by the microarchitecture, on some architectures). For example,
compare:

(for the C snippet)

void func(int *x, int *y)
{
	int r1 = *(const volatile int *)x;

	if (r1)
		*(volatile int *)y = 2;
	else
		*(volatile int *)y = 2;
}

- arm64 gcc 11.3 -O1 gives:

func:
	ldr	w0, [x0]
	cbz	w0, .L2
	mov	w0, 2
	str	w0, [x1]
.L1:
	ret
.L2:
	mov	w0, 2
	str	w0, [x1]
	b	.L1

- OTOH, arm64 gcc 11.3 -O2 gives:

func:
	ldr	w0, [x0]
	mov	w0, 2
	str	w0, [x1]
	ret

- similarly, arm64 clang 14.0.0 -O2 gives:

func:				// @func
	mov	w8, #2
	ldr	wzr, [x0]
	str	w8, [x1]
	ret
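At the source level, the -O2 outputs above correspond to the compiler
effectively rewriting the function as follows (a sketch of the
transformation, not literal compiler output): the two identical stores
are merged and lifted above the branches, so no conditional remains
between the volatile load and the volatile store.

```c
#include <assert.h>

/* Sketch of the compiler's effective -O2 transformation: the store
 * to y is performed unconditionally, destroying the control
 * dependency on the load from x. */
void func(int *x, int *y)
{
	int r1 = *(const volatile int *)x;

	*(volatile int *)y = 2;		/* store lifted out of the branches */
	if (r1) {
		/* do something */
	} else {
		/* do something else */
	}
}
```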

I saw similar results using riscv, powerpc, x86 gcc & clang.
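For reference, the mitigation described in memory-barriers.txt (under
CONTROL DEPENDENCIES) is to place an explicit memory barrier in both
legs, so that the load-store ordering is enforced by the barrier even
if the compiler merges the branches. A userspace sketch follows;
smp_mb() is approximated here by a C11 seq_cst fence, which is an
assumption for illustration only, not the kernel definition.

```c
#include <stdatomic.h>

/* Userspace stand-in for the kernel's smp_mb(); an approximation
 * for illustration, not the kernel's implementation. */
#define smp_mb()	atomic_thread_fence(memory_order_seq_cst)

void func_fixed(int *x, int *y)
{
	int r1 = *(const volatile int *)x;

	if (r1) {
		smp_mb();	/* orders the load against the store even if
				 * the compiler lifts the store out */
		*(volatile int *)y = 2;
		/* do something */
	} else {
		smp_mb();
		*(volatile int *)y = 2;
		/* do something else */
	}
}
```

Note this is heavyweight; the ordering no longer relies on the control
dependency at all, but on the explicit barrier before each store.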

Andrea