Re: [PATCH 2/3] locking: Clarify requirements for smp_mb__after_spinlock()

From: Alan Stern
Date: Thu Jun 28 2018 - 09:50:04 EST


On Thu, 28 Jun 2018, Andrea Parri wrote:

> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -114,29 +114,8 @@ do { \
> #endif /*arch_spin_is_contended*/
>
> /*
> - * This barrier must provide two things:
> - *
> - * - it must guarantee a STORE before the spin_lock() is ordered against a
> - * LOAD after it, see the comments at its two usage sites.
> - *
> - * - it must ensure the critical section is RCsc.
> - *
> - * The latter is important for cases where we observe values written by other
> - * CPUs in spin-loops, without barriers, while being subject to scheduling.
> - *
> - * CPU0 CPU1 CPU2
> - *
> - * for (;;) {
> - * if (READ_ONCE(X))
> - * break;
> - * }
> - * X=1
> - * <sched-out>
> - * <sched-in>
> - * r = X;
> - *
> - * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
> - * we get migrated and CPU2 sees X==0.
> + * smp_mb__after_spinlock() provides a full memory barrier between po-earlier
> + * lock acquisitions and po-later memory accesses.

How about saying "provides the equivalent of a full memory barrier"?

The point is that smp_mb__after_spinlock() doesn't have to provide an
actual barrier; it just has to ensure the behavior is the same as if a
full barrier were present.

Alan