Re: [tip:locking/urgent] locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB

From: Will Deacon
Date: Mon Feb 26 2018 - 13:05:59 EST


Hi Andrea,

I know this is in mainline now, but I think the way you've got the barriers
here:

On Fri, Feb 23, 2018 at 12:27:54AM -0800, tip-bot for Andrea Parri wrote:
> diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
> index 46ebf14aed4e..8a2b331e43fe 100644
> --- a/arch/alpha/include/asm/cmpxchg.h
> +++ b/arch/alpha/include/asm/cmpxchg.h
> @@ -6,7 +6,6 @@
> * Atomic exchange routines.
> */
>
> -#define __ASM__MB
> #define ____xchg(type, args...) __xchg ## type ## _local(args)
> #define ____cmpxchg(type, args...) __cmpxchg ## type ## _local(args)
> #include <asm/xchg.h>
> @@ -33,10 +32,6 @@
> cmpxchg_local((ptr), (o), (n)); \
> })
>
> -#ifdef CONFIG_SMP
> -#undef __ASM__MB
> -#define __ASM__MB "\tmb\n"
> -#endif
> #undef ____xchg
> #undef ____cmpxchg
> #define ____xchg(type, args...) __xchg ##type(args)
> @@ -64,7 +59,6 @@
> cmpxchg((ptr), (o), (n)); \
> })
>
> -#undef __ASM__MB
> #undef ____cmpxchg
>
> #endif /* _ALPHA_CMPXCHG_H */
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index e2660866ce97..e1facf6fc244 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -28,12 +28,12 @@ ____xchg(_u8, volatile char *m, unsigned long val)
> " or %1,%2,%2\n"
> " stq_c %2,0(%3)\n"
> " beq %2,2f\n"
> - __ASM__MB
> ".subsection 2\n"
> "2: br 1b\n"
> ".previous"
> : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
> : "r" ((long)m), "1" (val) : "memory");
> + smp_mb();
>
> return ret;

ends up adding unnecessary barriers to the _local variants, which the
previous code took care to avoid: asm/xchg.h is included twice, first for
the _local variants and then for the full SMP ones, so an smp_mb() placed
in the shared ____xchg()/____cmpxchg() bodies is now paid by both. That's
why I suggested adding the smp_mb() into the cmpxchg() macro rather than
into the ____cmpxchg() variants.
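
Roughly what I have in mind -- an untested sketch of the barrier placement
only, reusing the existing __xchg()/__cmpxchg() helpers; the real patch
would need the same type/size handling as the current macros:

#define xchg(ptr, x)							\
({									\
	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__xchg((ptr), (unsigned long)(x), sizeof(*(ptr)));	\
	smp_mb();	/* full barrier only for the non-_local API */	\
	__ret;								\
})

#define cmpxchg(ptr, o, n)						\
({									\
	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__cmpxchg((ptr), (unsigned long)(o),			\
			  (unsigned long)(n), sizeof(*(ptr)));		\
	smp_mb();	/* full barrier only for the non-_local API */	\
	__ret;								\
})

That way xchg_local()/cmpxchg_local() keep using the barrier-free bodies
from asm/xchg.h, as before.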

I think it's worth spinning another patch to fix this properly.

Will