Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

From: Christophe Leroy
Date: Wed Feb 22 2023 - 04:53:47 EST




On 22/02/2023 at 10:46, Kautuk Consul wrote:
>>
>> Reviewed-by: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
> Thanks!
>>
>>> ---
>>> arch/powerpc/include/asm/barrier.h | 7 +++++++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>>> index b95b666f0374..e088dacc0ee8 100644
>>> --- a/arch/powerpc/include/asm/barrier.h
>>> +++ b/arch/powerpc/include/asm/barrier.h
>>> @@ -36,8 +36,15 @@
>>> * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
>>> */
>>> #define __mb() __asm__ __volatile__ ("sync" : : : "memory")
>>> +
>>> +/* The sub-arch has lwsync. */
>>> +#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
>>> +#define __rmb() __asm__ __volatile__ ("lwsync" : : : "memory")
>>> +#define __wmb() __asm__ __volatile__ ("lwsync" : : : "memory")
>>
>> I'd have preferred with 'asm volatile' though.
> Sorry about that! That wasn't the intent of this patch.
> Probably another patch series should change this manner of #defining
> assembly.

Why add new lines the wrong way and then need another patch to make them
right?

When you build a new house in an old village, do you first build it with
old materials and then replace everything with new material?
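
For illustration only, a minimal sketch of how the two new defines could
read with the 'asm volatile' style instead (same lwsync instruction and
"memory" clobber, just without the underscored keywords):

/* The sub-arch has lwsync. */
#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
#define __rmb() asm volatile ("lwsync" : : : "memory")
#define __wmb() asm volatile ("lwsync" : : : "memory")
#endif

That is no more work than the version in the patch, so the new lines could
use the preferred style from the start.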