[PATCH v7 4/6] MCS Lock: Barrier corrections
From: Tim Chen
Date: Thu Jan 16 2014 - 19:08:44 EST
This patch corrects the way memory barriers are used in the MCS lock
by switching to the smp_load_acquire() and smp_store_release() functions,
and removes barriers that are not needed.
Note that an smp_load_acquire()/smp_store_release() pair is not
sufficient to form a full memory barrier across CPUs on many
architectures (x86 being an exception) for an mcs_unlock followed by an
mcs_lock. Code that absolutely needs a full memory barrier across
multiple CPUs from an mcs_unlock/mcs_lock pair should therefore call
smp_mb__after_unlock_lock() after taking the lock.
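As an illustration (not part of this patch), a user needing that
guarantee would do something like the following sketch; the lock
variable and the critical section are hypothetical:

	struct mcs_spinlock *lock = NULL;	/* hypothetical MCS lock (tail pointer) */
	struct mcs_spinlock node;

	mcs_spin_lock(&lock, &node);
	/*
	 * Upgrade the previous unlock + this lock to a full memory
	 * barrier; the acquire in mcs_spin_lock() alone does not
	 * provide that on all architectures.
	 */
	smp_mb__after_unlock_lock();
	/* ... critical section ... */
	mcs_spin_unlock(&lock, &node);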
From: Waiman Long <Waiman.Long@xxxxxx>
Suggested-by: Michel Lespinasse <walken@xxxxxxxxxx>
Signed-off-by: Waiman Long <Waiman.Long@xxxxxx>
Signed-off-by: Jason Low <jason.low2@xxxxxx>
Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
kernel/locking/mcs_spinlock.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
index 44fb092..6cdc730 100644
--- a/kernel/locking/mcs_spinlock.c
+++ b/kernel/locking/mcs_spinlock.c
@@ -43,9 +43,12 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	ACCESS_ONCE(prev->next) = node;
-	smp_wmb();
-	/* Wait until the lock holder passes the lock down */
-	while (!ACCESS_ONCE(node->locked))
+	/*
+	 * Wait until the lock holder passes the lock down.
+	 * Using smp_load_acquire() provides a memory barrier that
+	 * ensures subsequent operations happen after the lock is acquired.
+	 */
+	while (!(smp_load_acquire(&node->locked)))
 		arch_mutex_cpu_relax();
@@ -68,7 +71,12 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	while (!(next = ACCESS_ONCE(node->next)))
 		arch_mutex_cpu_relax();
-	ACCESS_ONCE(next->locked) = 1;
-	smp_wmb();
+	/*
+	 * Pass lock to next waiter.
+	 * smp_store_release() provides a memory barrier to ensure
+	 * all operations in the critical section have been completed
+	 * before unlocking.
+	 */
+	smp_store_release(&next->locked, 1);
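
To illustrate the acquire/release pairing (a sketch, not part of the
patch; shared_data and r1 are made-up names, and CPU 1's node is the
object CPU 0's next points to):

	/* CPU 0 -- lock holder, end of its critical section */
	shared_data = 42;
	smp_store_release(&next->locked, 1);	/* mcs_spin_unlock() */

	/* CPU 1 -- next waiter, spinning in mcs_spin_lock() */
	while (!(smp_load_acquire(&node->locked)))
		arch_mutex_cpu_relax();
	r1 = shared_data;			/* guaranteed to observe 42 */

The release on CPU 0 pairs with the acquire on CPU 1 on the same
location, so everything written before the unlock is visible once the
lock is observed as passed down.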