Re: [PATCH] locking: Remove an insn from spin and write locks

From: Peter Zijlstra
Date: Mon Aug 20 2018 - 11:56:59 EST


On Mon, Aug 20, 2018 at 08:50:02AM -0700, Matthew Wilcox wrote:
> On Mon, Aug 20, 2018 at 11:14:04AM -0400, Waiman Long wrote:
> > On 08/20/2018 11:06 AM, Matthew Wilcox wrote:
> > > Both spin locks and write locks currently do:
> > >
> > > f0 0f b1 17 lock cmpxchg %edx,(%rdi)
> > > 85 c0 test %eax,%eax
> > > 75 05 jne [slowpath]
> > >
> > > This 'test' insn is superfluous; the cmpxchg insn sets the Z flag
> > > appropriately. Peter pointed out that using atomic_try_cmpxchg()
> > > will let the compiler know this is true. Comparing before/after
> > > disassemblies shows that the only effect is to remove this insn.
> ...
> > >  static __always_inline int queued_spin_trylock(struct qspinlock *lock)
> > >  {
> > > +	u32 val = 0;
> > > +
> > >  	if (!atomic_read(&lock->val) &&
> > > -	    (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
> > > +	    (atomic_try_cmpxchg(&lock->val, &val, _Q_LOCKED_VAL)))
> >
> > Should you keep the _acquire suffix?
>
> I don't know ;-) Probably. Peter didn't include it as part of his
> suggested fix, but on reviewing the documentation, it seems likely that
> it should be retained. I put them back in and (as expected) it changes
> nothing on x86-64.

Yeah, _acquire should be retained; sorry about losing that. I'm neck
deep in TLB invalidate stuff and wrote this without much thought.