RE: [RFC PATCH v2] riscv: Add Zawrs support for spinlocks

From: David Laight
Date: Fri Jun 24 2022 - 04:52:50 EST


From: Christoph Muellner
> Sent: 23 June 2022 16:30
>
> From: Christoph Müllner <christoph.muellner@xxxxxxxx>
>
> The current RISC-V code uses the generic ticket lock implementation,
> which calls the macros smp_cond_load_relaxed() and smp_cond_load_acquire().
> Currently, RISC-V uses the generic implementation of these macros.
> This patch introduces a RISC-V specific implementation of these
> macros that peels off the first loop iteration and modifies the waiting
> loop such that the WRS.STO instruction of the Zawrs ISA extension
> can be used to stall the CPU.
>
> The resulting implementation of smp_cond_load_*() will only work for
> 32-bit and 64-bit types on RV64 and for 32-bit types on RV32.
> This is caused by the restrictions of the LR instruction (RISC-V only
> has LR.W and LR.D). Compile-time assertions guard this new restriction.
>
> This patch uses the existing RISC-V ISA extension framework
> to detect the presence of Zawrs at run-time.
> If available, a NOP instruction will be replaced by WRS.NTO or WRS.STO.
>
> The whole mechanism is gated by a Kconfig setting, which defaults to Y.
>
> The Zawrs specification can be found here:
> https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
>
> Note that the Zawrs extension is not frozen or ratified yet.
> Therefore, this patch is an RFC and not intended to be merged.
>
> Changes since v1:
> * Adding "depends on !XIP_KERNEL" to RISCV_ISA_ZAWRS
> * Fixing type checking code in __smp_load_reserved*
> * Adjustments according to the specification change
>
> Signed-off-by: Christoph Müllner <christoph.muellner@xxxxxxxx>
> ---
> arch/riscv/Kconfig | 11 ++++
> arch/riscv/include/asm/barrier.h | 92 ++++++++++++++++++++++++++++
> arch/riscv/include/asm/errata_list.h | 19 +++++-
> arch/riscv/include/asm/hwcap.h | 3 +-
> arch/riscv/kernel/cpu.c | 1 +
> arch/riscv/kernel/cpufeature.c | 13 ++++
> 6 files changed, 136 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 32ffef9f6e5b..9d40569237c9 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -358,6 +358,17 @@ config RISCV_ISA_C
>
> If you don't know what to do here, say Y.
>
> +config RISCV_ISA_ZAWRS
> + bool "Zawrs extension support"
> + depends on !XIP_KERNEL
> + select RISCV_ALTERNATIVE
> + default y
> + help
> + Adds support to dynamically detect the presence of the Zawrs extension
> + (wait for reservation set) and enable its usage.
> +
> + If you don't know what to do here, say Y.
> +
> config RISCV_ISA_SVPBMT
> bool "SVPBMT extension support"
> depends on 64BIT && MMU
> diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
> index d0e24aaa2aa0..1f9628aaa7cb 100644
> --- a/arch/riscv/include/asm/barrier.h
> +++ b/arch/riscv/include/asm/barrier.h
> @@ -12,6 +12,8 @@
>
> #ifndef __ASSEMBLY__
>
> +#include <asm/errata_list.h>
> +
> #define nop() __asm__ __volatile__ ("nop")
>
> #define RISCV_FENCE(p, s) \
> @@ -42,6 +44,64 @@ do { \
> ___p1; \
> })
>
> +#if __riscv_xlen == 64
> +
> +#define __riscv_lrsc_word(t) \
> + (sizeof(t) == sizeof(int) || \
> + sizeof(t) == sizeof(long))

That line doesn't need splitting.

> +#elif __riscv_xlen == 32
> +
> +#define __riscv_lrsc_word(t) \
> + (sizeof(t) == sizeof(int))

Can't you use the same test as 64-bit?
Both int and long have size 4, so the test is fine.
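I.e. a single definition could cover both XLENs; untested, but something like:

	#define __riscv_lrsc_word(t) \
		(sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

On RV32 the long check collapses to the int check (both are 4 bytes),
and on RV64 it additionally admits the 8-byte types that LR.D handles.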

> +
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif /* __riscv_xlen */
> +
> +#define compiletime_assert_atomic_lrsc_type(t) \
> + compiletime_assert(__riscv_lrsc_word(t), \
> + "Need type compatible with LR/SC instructions.")

I think I'd try to get the type name into the error message.
Either #t or __stringify(t) should do it.
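E.g. with the kernel's __stringify() (from <linux/stringify.h>), untested:

	#define compiletime_assert_atomic_lrsc_type(t) \
		compiletime_assert(__riscv_lrsc_word(t), \
			"Type " __stringify(t) " not compatible with LR/SC instructions.")

Adjacent string literals get concatenated, so the expanded argument shows
up in the build error.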

> +
> +#define ___smp_load_reservedN(pfx, ptr) \
> +({ \
> + typeof(*ptr) ___p1; \
> + __asm__ __volatile__ ("lr." pfx " %[p], %[c]\n" \
> + : [p]"=&r" (___p1), [c]"+A"(*ptr)); \
> + ___p1; \
> +})

Isn't that missing the memory reference?
It either needs an extra memory parameter for 'ptr' or
a full/partial memory clobber.
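
The clobber version would look roughly like this (untested):

	#define ___smp_load_reservedN(pfx, ptr) \
	({ \
		typeof(*ptr) ___p1; \
		__asm__ __volatile__ ("lr." pfx " %[p], %[c]\n" \
				      : [p]"=&r" (___p1), [c]"+A"(*ptr) \
				      : : "memory"); \
		___p1; \
	})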

> +
> +#define ___smp_load_reserved32(ptr) \
> + ___smp_load_reservedN("w", ptr)
> +
> +#define ___smp_load_reserved64(ptr) \
> + ___smp_load_reservedN("d", ptr)
> +
> +#define __smp_load_reserved_relaxed(ptr) \
> +({ \
> + typeof(*ptr) ___p1; \
> + compiletime_assert_atomic_lrsc_type(*ptr); \
> + if (sizeof(*ptr) == 4) { \
> + ___p1 = ___smp_load_reserved32(ptr); \
> + } else { \
> + ___p1 = ___smp_load_reserved64(ptr); \
> + } \

I think replacing all that with:
	typeof(*ptr) ___p1; \
	if (sizeof(*ptr) == sizeof(int)) \
		___p1 = ___smp_load_reservedN("w", ptr); \
	else if (sizeof(*ptr) == sizeof(long)) \
		___p1 = ___smp_load_reservedN("d", ptr); \
	else \
		compiletime_assert(0, \
			"Need type compatible with LR/SC instructions.");
Might make it easier to read.

> + ___p1; \
> +})
> +
> +#define __smp_load_reserved_acquire(ptr) \
> +({ \
> + typeof(*ptr) ___p1; \
> + compiletime_assert_atomic_lrsc_type(*ptr); \
> + if (sizeof(*ptr) == 4) { \
> + ___p1 = ___smp_load_reserved32(ptr); \
> + } else { \
> + ___p1 = ___smp_load_reserved64(ptr); \
> + } \

Isn't that identical to __smp_load_reserved_relaxed()?
No point replicating it.
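
If the acquire variant really only adds a trailing fence, it could just
wrap the relaxed one and reuse RISCV_FENCE() from this header - untested
sketch:

	#define __smp_load_reserved_acquire(ptr) \
	({ \
		typeof(*ptr) ___p1 = __smp_load_reserved_relaxed(ptr); \
		RISCV_FENCE(r, rw); \
		___p1; \
	})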
...

David
