Re: [PATCH 27/33] locking/atomic: powerpc: move to ARCH_ATOMIC

From: Mark Rutland
Date: Tue May 11 2021 - 05:16:45 EST


On Mon, May 10, 2021 at 10:37:47AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
> Cc: Paul Mackerras <paulus@xxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/atomic.h | 140 +++++++++++++++++++------------------
> arch/powerpc/include/asm/cmpxchg.h | 30 ++++----
> 3 files changed, 89 insertions(+), 82 deletions(-)
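
For context, the "common code" mentioned above is the generated
instrumented wrappers (include/asm-generic/atomic-instrumented.h at
the time of writing); each one is roughly of the following shape, a
sketch from memory rather than the verbatim generated code:

	static __always_inline int
	atomic_fetch_add(int i, atomic_t *v)
	{
		instrument_atomic_read_write(v, sizeof(*v));
		return arch_atomic_fetch_add(i, v);
	}

i.e. the regular atomic_*() entry points instrument the access (for
KASAN/KCSAN and friends) and then defer to the arch_atomic_*()
implementations that this patch makes powerpc provide.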

The kbuild test robot spotted a couple of bits I'd got wrong; I've noted
those below (and both are now fixed in my kernel.org branch).

> static __always_inline bool
> -atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> +arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)

Since this isn't part of the core atomic API, and is used directly by
powerpc's spinlock implementation, it should have stayed as-is (or we
should apply the `arch_` prefix consistently and update the spinlock
code to match).

I've dropped the `arch_` prefix for now.
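
For reference, the in-tree user I'm thinking of is powerpc's queued
spinlock code, which (from memory, so treat this as a sketch) does:

	static __always_inline void queued_spin_lock(struct qspinlock *lock)
	{
		u32 val = 0;

		if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
			return;

		queued_spin_lock_slowpath(lock, val);
	}

so renaming only the atomic.h side, as this patch did, leaves that
caller broken.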

[...]

> /**
> * atomic64_fetch_add_unless - add unless the number is a given value
> @@ -518,7 +524,7 @@ static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
> * Atomically adds @a to @v, so long as it was not @u.
> * Returns the old value of @v.
> */
> -static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> +static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> s64 t;
>
> @@ -539,7 +545,7 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>
> return t;
> }
> -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> +#define arch_atomic64_fetch_add_unless atomic64_fetch_add_unless

Looks like I forgot the `arch_` prefix on the right-hand side here;
this should have been:

#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
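
For anyone wondering why the right-hand side matters: the define is
how an architecture tells the generic code that it provides this op.
The fallback pattern (again sketched from memory, from
include/linux/atomic-arch-fallback.h) is:

	#ifndef arch_atomic64_fetch_add_unless
	static __always_inline s64
	arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
	{
		s64 c = arch_atomic64_read(v);

		do {
			if (unlikely(c == u))
				break;
		} while (!arch_atomic64_try_cmpxchg(v, &c, c + a));

		return c;
	}
	#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
	#endif

The typo'd define still suppresses the fallback, but it also rewrites
later uses of arch_atomic64_fetch_add_unless() into calls to the
non-arch_ name, so (if I'm reading the instrumented wrappers right)
the wrapper ends up calling itself, which is what the kbuild robot
tripped over.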

Thanks,
Mark.