Re: Adding plain accesses and detecting data races in the LKMM

From: Paul E. McKenney
Date: Tue Apr 09 2019 - 11:01:45 EST


On Tue, Apr 09, 2019 at 03:36:18AM +0200, Andrea Parri wrote:
> > > The formula was more along the line of "do not assume either of these
> > > cases to hold; use barrier() if you need an unconditional barrier..."
> > > AFAICT, all current implementations of smp_mb__{before,after}_atomic()
> > > provide a compiler barrier with either barrier() or a "memory" clobber.
> >
> > Well, we have two reasonable choices: Say that
> > smp_mb__{before,after}_atomic will always provide a compiler barrier,
> > or don't say this. I see no point in saying that the combination of
> > Before-atomic followed by RMW provides a barrier.
>
> ;-/ I'm fine with the first choice. I don't see how the second choice
> (this proposal/patch) would be consistent with some documentation and
> with the current implementations; for example,
>
> 1) Documentation/atomic_t.txt says:
>
> Thus:
>
> atomic_fetch_add();
>
> is equivalent to:
>
> smp_mb__before_atomic();
> atomic_fetch_add_relaxed();
> smp_mb__after_atomic();
>
> [...]
>
> 2) Some implementations of the _relaxed() variants do not provide any
> compiler barrier currently.

But don't all implementations of smp_mb__before_atomic() and
smp_mb__after_atomic() currently supply a compiler barrier?
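For concreteness, the distinction I have in mind is sketched below. This is
only a userspace illustration, using C11 atomics and an open-coded barrier()
as stand-ins for the kernel primitives; it is not any particular
architecture's implementation:

	#include <stdatomic.h>

	/* Empty asm with a "memory" clobber: a pure compiler barrier,
	 * in the same spirit as the kernel's barrier(). */
	#define barrier() __asm__ __volatile__("" : : : "memory")

	static atomic_int counter;
	static int flag;

	void with_compiler_barrier(void)
	{
		flag = 1;	/* plain store */
		barrier();	/* compiler may not move the plain store past this point */
		atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
	}

	void without_compiler_barrier(void)
	{
		flag = 1;	/* plain store */
		/* No barrier: if the relaxed RMW carries no "memory" clobber,
		 * the compiler is free to reorder the plain store around it. */
		atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
	}

The question is whether the smp_mb__{before,after}_atomic() implementations
always give us the first form, a compiler barrier, regardless of what the
adjacent relaxed RMW provides on its own.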

Thanx, Paul