How does spin_unlock() on x86-64 align with the description in Documentation/memory-barriers.txt?

From: Zhu Yanhai
Date: Fri Mar 22 2013 - 07:59:12 EST


Hi all,
In the documentation it reads:

(2) UNLOCK operation implication:

Memory operations issued before the UNLOCK will be completed before the
UNLOCK operation has completed.

Memory operations issued after the UNLOCK may be completed before the
UNLOCK operation has completed.

However, on x86-64 __ticket_spin_unlock() merely does,

static __always_inline void __ticket_spin_unlock(raw_spinlock_t *lock)
{
	asm volatile(
		ALTERNATIVE(UNLOCK_LOCK_PREFIX "incb (%0);" ASM_NOP3,
			    UNLOCK_LOCK_ALT_PREFIX "movw $0, (%0)",
			    X86_FEATURE_UNFAIR_SPINLOCK)
		:
		: "Q" (&lock->slock)
		: "memory", "cc");
}

while both UNLOCK_LOCK_PREFIX and UNLOCK_LOCK_ALT_PREFIX are empty
strings. So how does such a function guarantee that the memory operations
issued before it have completed?
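(For comparison, here is a portable C11 sketch of the same pattern; the names are mine, not the kernel's. On x86-64 the hardware's TSO ordering already forbids reordering earlier loads and stores past a later store, so a release-ordered store can compile down to a plain MOV with no LOCK prefix or fence; the release semantics then only have to restrain the compiler, much as the "memory" clobber does in the asm above.)

```c
#include <stdatomic.h>

/* Toy spinlock -- illustrative only, not the kernel's raw_spinlock_t. */
struct toy_spinlock {
	atomic_uchar slock;
};

static void toy_spin_lock(struct toy_spinlock *lock)
{
	/* Spin until we swap 0 -> 1; acquire pairs with the release below. */
	while (atomic_exchange_explicit(&lock->slock, 1, memory_order_acquire))
		;
}

static void toy_spin_unlock(struct toy_spinlock *lock)
{
	/*
	 * Release store: on x86-64 this can be a plain byte store, since
	 * TSO keeps earlier memory operations from being reordered past a
	 * later store; memory_order_release mainly forbids the compiler
	 * from sinking critical-section accesses below this line.
	 */
	atomic_store_explicit(&lock->slock, 0, memory_order_release);
}
```

Whether the kernel's asm version is equivalent to this release store is exactly what the question above is asking.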

--
Thanks,
Zhu Yanhai