Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file

From: Waiman Long
Date: Tue Oct 01 2013 - 16:01:26 EST


On 10/01/2013 12:48 PM, Tim Chen wrote:
On Mon, 2013-09-30 at 12:36 -0400, Waiman Long wrote:
On 09/30/2013 12:10 PM, Jason Low wrote:
On Mon, 2013-09-30 at 11:51 -0400, Waiman Long wrote:
On 09/28/2013 12:34 AM, Jason Low wrote:
Also, below is what the mcs_spin_lock() and mcs_spin_unlock()
functions would look like after applying the proposed changes.

static noinline
void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
{
        struct mcs_spin_node *prev;

        /* Init node */
        node->locked = 0;
        node->next   = NULL;

        prev = xchg(lock, node);
        if (likely(prev == NULL)) {
                /*
                 * Lock acquired. No need to set node->locked since it
                 * won't be used.
                 */
                return;
        }
        ACCESS_ONCE(prev->next) = node;
        /* Wait until the lock holder passes the lock down */
        while (!ACCESS_ONCE(node->locked))
                arch_mutex_cpu_relax();
        smp_mb();
}
I wonder if a memory barrier is really needed here.
If the compiler can reorder the while (!ACCESS_ONCE(node->locked)) check
so that the check occurs after an instruction in the critical section,
then the barrier may be necessary.

In that case, just a barrier() call should be enough.
The CPU could still execute load instructions from the critical section
out of order, before the node->locked check completes, couldn't it?
Probably smp_mb() is still needed.

Tim

But this is the lock function; a barrier() call should be enough to prevent the critical section from creeping up above it. We certainly need some kind of memory barrier at the end of the unlock function.
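
For reference, here is a rough sketch of what the matching unlock path could look like, with the hand-off barrier at the end of the function. This is only an illustration built on the mcs_spin_node layout from the quoted lock function, not the exact code in the patch; whether smp_wmb() is sufficient or a full smp_mb() is required, and exactly where it sits relative to the store to next->locked, is the open question here.

static noinline
void mcs_spin_unlock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
{
        struct mcs_spin_node *next = ACCESS_ONCE(node->next);

        if (likely(!next)) {
                /* No successor queued: try to release the lock outright */
                if (cmpxchg(lock, node, NULL) == node)
                        return;
                /* A successor is queueing; wait for it to set node->next */
                while (!(next = ACCESS_ONCE(node->next)))
                        arch_mutex_cpu_relax();
        }
        /*
         * Barrier before the hand-off so the critical section is visible
         * to the next lock holder. Illustrative placement only: the
         * strength (smp_wmb() vs. smp_mb()) and the exact position of
         * this barrier are what is being debated in this thread.
         */
        smp_mb();
        ACCESS_ONCE(next->locked) = 1;
}

The distinction in the discussion above is that barrier() only constrains the compiler, while smp_mb() also orders the loads and stores that the CPU itself may reorder; that is why, whatever is decided about the lock side, the unlock side needs a real memory barrier before handing the lock to the next waiter.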

-Longman