Re: spin_lock implicit/explicit memory barrier

From: Davidlohr Bueso
Date: Thu Aug 11 2016 - 14:32:17 EST


On Thu, 11 Aug 2016, Peter Zijlstra wrote:

> On Wed, Aug 10, 2016 at 04:29:22PM -0700, Davidlohr Bueso wrote:
>
>> (1) As Manfred suggested, have a patch 1 that fixes the race against mainline
>> with the redundant smp_rmb, then apply a second patch that gets rid of it
>> for mainline, but only backport the original patch 1 down to 3.12.
>
> I have not followed the thread closely, but this seems like the best
> option. Esp. since 726328d92a42 ("locking/spinlock, arch: Update and fix
> spin_unlock_wait() implementations") is incomplete, it relies on at
> least 6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()") to sort
> PPC.

Yeah, and we'd also need the arm bits; which reminds me, aren't alpha
ldl_l/stl_c sequences also exposed to this delaying of the publishing
when a non-owner peeks at the lock? Right now sysv sems would be busted
when doing either is_locked or unlock_wait; shouldn't these be upgraded
to full smp_mb()s?

Thanks,
Davidlohr