Re: How does spin_unlock() on x86-64 align with the description in Documentation/memory-barriers.txt?

From: Jan Beulich
Date: Fri Mar 22 2013 - 08:15:13 EST


>>> On 22.03.13 at 12:58, Zhu Yanhai <zhu.yanhai@xxxxxxxxx> wrote:
> Hi all,
> In the documentation it reads:
>
> (2) UNLOCK operation implication:
>
> Memory operations issued before the UNLOCK will be completed before the
> UNLOCK operation has completed.
>
> Memory operations issued after the UNLOCK may be completed before the
> UNLOCK operation has completed.
>
> However, on x86-64 __ticket_spin_unlock() merely does,
>
> static __always_inline void __ticket_spin_unlock(raw_spinlock_t *lock)
> {
> 	asm volatile(
> 		ALTERNATIVE(UNLOCK_LOCK_PREFIX "incb (%0);" ASM_NOP3,
> 			    UNLOCK_LOCK_ALT_PREFIX "movw $0, (%0)",
> 			    X86_FEATURE_UNFAIR_SPINLOCK)
> 		:
> 		: "Q" (&lock->slock)
> 		: "memory", "cc");
> }
>
> Yet both UNLOCK_LOCK_PREFIX and UNLOCK_LOCK_ALT_PREFIX are empty
> strings. So how does such a function ensure that the memory operations
> issued before it have completed?

Please read the section "Memory Ordering in P6 and More Recent
Processor Families" in the Intel SDM, Vol. 3.
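
On x86 the answer follows from that section: stores are not reordered
with older loads or with older stores. By the time the unlocking store
becomes globally visible, every memory operation issued inside the
critical section has therefore completed, which is exactly the UNLOCK
guarantee quoted above. No LOCK prefix or explicit fence is required for
this; the "memory" clobber in the asm serves only as a compiler barrier,
so the compiler cannot sink critical-section accesses below the store.

As a minimal sketch (not the kernel's actual code; my_spinlock and
my_spin_unlock are made-up names for illustration), the unlock path on
x86 amounts to a compiler barrier followed by a plain store:

struct my_spinlock {
	volatile unsigned char head;	/* ticket currently being served */
	volatile unsigned char tail;	/* next ticket to be handed out */
};

static inline void my_spin_unlock(struct my_spinlock *lock)
{
	/*
	 * Compiler barrier only (barrier() in the kernel): keep the
	 * compiler from moving critical-section accesses below the
	 * store.  No CPU fence is needed, because the x86 memory model
	 * never reorders a store with older loads or older stores.
	 */
	asm volatile("" : : : "memory");

	/*
	 * Plain store, no LOCK prefix.  Only the lock holder writes
	 * head, so the non-atomic increment is safe, and once the store
	 * is visible everything done under the lock is visible too.
	 */
	lock->head = lock->head + 1;
}

The second alternative in the quoted code, "movw $0, (%0)", releases the
lock the same way: an ordinary store with no fence, relying on the same
ordering rule.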

Jan
