Re: [PATCH 1/2] locking/lockref: Use try_cmpxchg64 in CMPXCHG_LOOP macro

From: Linus Torvalds
Date: Wed May 25 2022 - 12:48:12 EST


On Wed, May 25, 2022 at 7:40 AM Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
>
> Use try_cmpxchg64 instead of cmpxchg64 in the CMPXCHG_LOOP macro.
> The x86 CMPXCHG instruction reports success in the ZF flag, so this
> change saves a compare after the cmpxchg (and the related move
> instruction in front of the cmpxchg). The main loop of lockref_get
> improves from:
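
(The asm listing is snipped in the quote above; it's in the lore link
below. In C terms the change is essentially this shape, as a minimal
sketch rather than the exact lockref code:

        /* Before: cmpxchg64() returns the old value, so the caller
         * has to compare it against the expected value itself. */
        u64 prev = old.lock_count;
        old.lock_count = cmpxchg64(&lockref->lock_count,
                                   old.lock_count, new.lock_count);
        if (likely(old.lock_count == prev))
                return;         /* success */

        /* After: try_cmpxchg64() returns the comparison result
         * directly (ZF on x86) and writes the current value back
         * into 'old' on failure, ready for the retry. */
        if (likely(try_cmpxchg64(&lockref->lock_count,
                                 &old.lock_count, new.lock_count)))
                return;         /* success */

so the retry loop loses one compare, plus the register move that set
up the expected value.)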

Ack on this one regardless of the 32-bit x86 question.

HOWEVER.

I'd like other architectures to pipe up too, because I think right now
x86 is the only one that implements that "arch_try_cmpxchg()" family
of operations natively, and I think the generic fallback for when it
is missing might be kind of nasty.
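
(From memory, the generic fallback is roughly this shape, so check
the real header before trusting me:

        #define try_cmpxchg(_ptr, _oldp, _new)                          \
        ({                                                              \
                typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r;  \
                ___r = cmpxchg((_ptr), ___o, (_new));                   \
                if (unlikely(___r != ___o))                             \
                        *___op = ___r;                                  \
                likely(___r == ___o);                                   \
        })

i.e. it does the plain cmpxchg and then re-does exactly the compare
that try_cmpxchg was supposed to avoid, plus a store-back of the
failure value for the retry loop.)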

Maybe it ends up generating ok code, but it's also possible that it
just didn't matter when it was only used in one place in the
scheduler.

The lockref_get() case can be quite hot under some loads, and it
would be sad if this change made other architectures worse.
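
For context, lockref_get() is basically just that CMPXCHG_LOOP with a
spinlock slow path, something like this (quoting the shape of
lib/lockref.c from memory):

        void lockref_get(struct lockref *lockref)
        {
                CMPXCHG_LOOP(
                        new.count++;
                ,
                        return;
                );

                spin_lock(&lockref->lock);
                lockref->count++;
                spin_unlock(&lockref->lock);
        }

and dcache lookups hammer it through dget(), so the fast path really
does matter.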

Anyway, maybe that try_cmpxchg() fallback is fine as-is, and works
out well on architectures that use load-locked / store-conditional.

But just to verify, I'm adding arm/powerpc/s390/mips people to the cc. See

https://lore.kernel.org/all/20220525144013.6481-2-ubizjak@xxxxxxxxx/

for the original email and the x86-64 code example.

Linus