Re: [PATCH v2] barriers: introduce smp_mb__release_acquire and update documentation

From: Peter Zijlstra
Date: Fri Oct 09 2015 - 07:12:10 EST


On Fri, Oct 09, 2015 at 10:40:39AM +0100, Will Deacon wrote:
> > Which leads me to think I would like to suggest alternative rules for
> > RELEASE/ACQUIRE (to replace those Will suggested; as I think those are
> > partly responsible for my confusion).
>
> Yeah, sorry. I originally used the phrase "fully ordered" but changed it
> to "full barrier", which has stronger transitivity (newly understood
> definition) requirements that I didn't intend.

> Are we explicit about the difference between "fully ordered" and "full
> barrier" anywhere else? This looks like it will confuse people.

I suspect we don't.

> > - RELEASE -> ACQUIRE can be upgraded to a full barrier (including
> > transitivity) using smp_mb__release_acquire(), either before RELEASE
> > or after ACQUIRE (but consistently [*]).
>
> Hmm, but we don't actually need this for RELEASE -> ACQUIRE, afaict. This
> is just needed for UNLOCK -> LOCK, and is exactly what RCU is currently
> using (for PPC only).

No, we do need that. RELEASE/ACQUIRE is RCpc for TSO as well as PPC.

UNLOCK/LOCK is only RCpc on PPC; the rest of the world has RCsc
UNLOCK/LOCK.
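
Concretely, a litmus-style sketch (x, y, s0 and s1 are hypothetical
variables, all initially zero):

        CPU 0                           CPU 1
        =====                           =====
        WRITE_ONCE(x, 1);               WRITE_ONCE(y, 1);
        smp_store_release(&s0, 1);      smp_store_release(&s1, 1);
        r0 = smp_load_acquire(&s0);     r2 = smp_load_acquire(&s1);
        r1 = READ_ONCE(y);              r3 = READ_ONCE(x);

On TSO each ACQUIRE load can be satisfied straight out of the local
store buffer, so r1 == 0 && r3 == 0 stays allowed even though each CPU
has a RELEASE -> ACQUIRE pair between its store and its load. An RCsc
(full barrier) reading of RELEASE -> ACQUIRE would forbid that outcome,
which is what the proposed smp_mb__release_acquire() is for.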

The reason RELEASE/ACQUIRE differs from UNLOCK/LOCK is the fundamental
difference between ACQUIRE and LOCK.

Where ACQUIRE really is just a LOAD, LOCK ends up fundamentally being an
RmW plus a control dependency.
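
Roughly, as a sketch (flag and lock are hypothetical variables, and the
cmpxchg() merely stands in for whatever atomic a real lock
implementation uses):

        /* ACQUIRE: nothing more than a load with acquire semantics. */
        r = smp_load_acquire(&flag);

        /*
         * LOCK: a read-modify-write plus the control dependency formed
         * by looping until the RmW observes the lock free.
         */
        while (cmpxchg(&lock, 0, 1) != 0)
                cpu_relax();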


Now, if you want to upgrade your RCpc RELEASE/ACQUIRE to RCsc, you need
to do that on the inside (either after ACQUIRE or before RELEASE); this
is crucial (as per Paul's argument) for the case where the RELEASE and
ACQUIRE happen on different CPUs.
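
That is, for the cross-CPU case, something like the following sketch
(a, b and s are hypothetical variables, all initially zero; the
primitive is the one proposed by this patch):

        CPU 0:
                WRITE_ONCE(a, 1);
                smp_store_release(&s, 1);

        CPU 1:
                r0 = smp_load_acquire(&s);
                smp_mb__release_acquire();   /* inside: after the ACQUIRE */
                r1 = READ_ONCE(b);

        CPU 2:
                WRITE_ONCE(b, 1);
                smp_mb();
                r2 = READ_ONCE(a);

Without the upgrade, r0 == 1 && r1 == 0 && r2 == 0 can happen on PPC;
with smp_mb__release_acquire() on the inside (after the ACQUIRE as
above, or before the RELEASE on CPU 0) it is forbidden, which is the
transitivity Paul's argument relies on.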

IFF the RELEASE and ACQUIRE happen on the _same_ CPU, it doesn't matter
and you can place the barrier in any of the three possible locations
(before the RELEASE, between the RELEASE and the ACQUIRE, or after the
ACQUIRE).
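
Spelled out for that same-CPU case (x, y and s are again hypothetical,
and only one of the three placements is meant to be used):

        WRITE_ONCE(x, 1);
        smp_mb__release_acquire();   /* (1) before the RELEASE, or */
        smp_store_release(&s, 1);
        smp_mb__release_acquire();   /* (2) between the two, or */
        r0 = smp_load_acquire(&s);
        smp_mb__release_acquire();   /* (3) after the ACQUIRE */
        r1 = READ_ONCE(y);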

