Re: spin_lock implicit/explicit memory barrier

From: Benjamin Herrenschmidt
Date: Tue Aug 09 2016 - 20:06:08 EST


On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
> Hi Benjamin, Hi Michael,
>
> regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
> arch_spin_is_locked()"):
>
> For the ipc/sem code, I would like to replace the spin_is_locked() with
> a smp_load_acquire(), see:
>
> http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367
>
> http://www.ozlabs.org/~akpm/mmots/broken-out/ipc-semc-fix-complex_count-vs-simple-op-race.patch
>
> To my understanding, I must now add a smp_mb(), otherwise it would be
> broken on PowerPC:
>
> The approach of adding the memory barrier inside spin_is_locked()
> doesn't work because the code doesn't use spin_is_locked().
>
> Correct?

Right, otherwise you aren't properly ordered. The current powerpc locks provide
good ordering between what's inside vs. what's outside the lock, but not vs.
the lock *value* itself. So if, as you do in the sem code, you use the lock
value as something that is relevant in terms of ordering, you probably need
an explicit full barrier.

Adding Paul McKenney.

Cheers,
Ben.