Re: [PATCH 7/7] microblaze: Do atomic operations by using exclusive ops

From: Peter Zijlstra
Date: Wed Feb 12 2020 - 10:55:06 EST


On Wed, Feb 12, 2020 at 04:42:29PM +0100, Michal Simek wrote:
> From: Stefan Asserhall <stefan.asserhall@xxxxxxxxxx>
>
> Implement SMP aware atomic operations.
>
> Signed-off-by: Stefan Asserhall <stefan.asserhall@xxxxxxxxxx>
> Signed-off-by: Michal Simek <michal.simek@xxxxxxxxxx>
> ---
>
> arch/microblaze/include/asm/atomic.h | 265 +++++++++++++++++++++++++--
> 1 file changed, 253 insertions(+), 12 deletions(-)
>
> diff --git a/arch/microblaze/include/asm/atomic.h b/arch/microblaze/include/asm/atomic.h
> index 41e9aff23a62..522d704fad63 100644
> --- a/arch/microblaze/include/asm/atomic.h
> +++ b/arch/microblaze/include/asm/atomic.h
> @@ -1,28 +1,269 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2013-2020 Xilinx, Inc.
> + */
> +
> #ifndef _ASM_MICROBLAZE_ATOMIC_H
> #define _ASM_MICROBLAZE_ATOMIC_H
>
> +#include <linux/types.h>
> #include <asm/cmpxchg.h>
> -#include <asm-generic/atomic.h>
> -#include <asm-generic/atomic64.h>
> +
> +#define ATOMIC_INIT(i) { (i) }
> +
> +#define atomic_read(v) READ_ONCE((v)->counter)
> +
> +static inline void atomic_set(atomic_t *v, int i)
> +{
> + int result, tmp;
> +
> + __asm__ __volatile__ (
> + /* exclusive load: read word at address in %2 into %0 */
> + "1: lwx %0, %2, r0;\n"
> + /* attempt store */
> + " swx %3, %2, r0;\n"
> + /* checking msr carry flag */
> + " addic %1, r0, 0;\n"
> + /* store failed (MSR[C] set)? try again */
> + " bnei %1, 1b;\n"
> + /* Outputs: result value and scratch register */
> + : "=&r" (result), "=&r" (tmp)
> + /* Inputs: counter address and new value */
> + : "r" (&v->counter), "r" (i)
> + : "cc", "memory"
> + );
> +}
> +#define atomic_set atomic_set

Uuuuhh.. *what* ?!?

Are you telling me your LL/SC implementation is so buggered that
atomic_set() being a WRITE_ONCE() does not in fact work?
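
For reference, the asm-generic fallback this patch drops implements
atomic_set() as nothing more than a plain store; a minimal sketch of
that definition (roughly what include/asm-generic/atomic.h provides) is:

	static inline void atomic_set(atomic_t *v, int i)
	{
		/* an aligned word store is already atomic; no lwx/swx loop needed */
		WRITE_ONCE(v->counter, i);
	}

On a working LL/SC machine that is sufficient: a racing swx on another
CPU loses its reservation when this store lands, so that CPU's
read-modify-write loop retries and observes the new value. Only the
actual RMW operations (add, sub, cmpxchg, ...) need the exclusive
load/store retry loop.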