Re: [PATCH] RISC-V: Break load reservations during switch_to

From: Marco Peereboom
Date: Thu Jun 06 2019 - 04:51:14 EST


Ah that's sneaky!!

> On Jun 6, 2019, at 12:17 AM, Palmer Dabbelt <palmer@xxxxxxxxxx> wrote:
>
> The comment describes why in detail. This was found because QEMU never
> gives up load reservations; the issue is unlikely to manifest on real
> hardware.
>
> Thanks to Carlos Eduardo for finding the bug!
>
> Signed-off-by: Palmer Dabbelt <palmer@xxxxxxxxxx>
> ---
> arch/riscv/kernel/entry.S | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index 1c1ecc238cfa..e9fc3480e6b4 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -330,6 +330,24 @@ ENTRY(__switch_to)
> add a3, a0, a4
> add a4, a1, a4
> REG_S ra, TASK_THREAD_RA_RA(a3)
> + /*
> + * The Linux ABI allows programs to depend on load reservations being
> + * broken on context switches, but the ISA doesn't require that the
> + * hardware ever breaks a load reservation. The only way to break a
> + * load reservation is with a store conditional, so we emit one here.
> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
> + * know this will always fail, but just to be on the safe side this
> + * writes the same value that was unconditionally written by the
> + * previous instruction.
> + */
> +#if (TASK_THREAD_RA_RA != 0)
> +# error "The offset between ra and ra is non-zero"
> +#endif
> +#if (__riscv_xlen == 64)
> + sc.d x0, ra, 0(a3)
> +#else
> + sc.w x0, ra, 0(a3)
> +#endif
> REG_S sp, TASK_THREAD_SP_RA(a3)
> REG_S s0, TASK_THREAD_S0_RA(a3)
> REG_S s1, TASK_THREAD_S1_RA(a3)
> --
> 2.21.0
>
