Re: [PATCH] locking/atomic: cmpxchg: Make __generic_cmpxchg_local compare against zero-extended 'old' value

From: Arnd Bergmann
Date: Sun Feb 26 2023 - 05:13:41 EST


On Wed, Feb 1, 2023, at 19:39, Matt Evans wrote:
> __generic_cmpxchg_local takes unsigned long old/new arguments which
> might end up being up-cast from smaller signed types (which will
> sign-extend). The loaded compare value must be compared against a
> truncated smaller type, so down-cast appropriately for each size.
>
> The issue is apparent on 64-bit machines with code, such as
> atomic_dec_unless_positive(), that sign-extends from int.
>
> 64-bit machines generally don't use the generic cmpxchg but
> development/early ports might make use of it, so make it correct.
>
> Signed-off-by: Matt Evans <mev@xxxxxxxxxxxx>

Hi Matt,

I'm getting emails from the kernel test robot about nios2 sparse
warnings triggered by your patch. I can also reproduce them
on armv5:


fs/erofs/zdata.c: note: in included file (through /home/arnd/arm-soc/arch/arm/include/asm/cmpxchg.h, /home/arnd/arm-soc/arch/arm/include/asm/atomic.h, /home/arnd/arm-soc/include/linux/atomic.h, ...):
include/asm-generic/cmpxchg-local.h:29:33: warning: cast truncates bits from constant value (5f0ecafe becomes fe)
include/asm-generic/cmpxchg-local.h:33:34: warning: cast truncates bits from constant value (5f0ecafe becomes cafe)
include/asm-generic/cmpxchg-local.h:29:33: warning: cast truncates bits from constant value (5f0ecafe becomes fe)
include/asm-generic/cmpxchg-local.h:30:42: warning: cast truncates bits from constant value (5f0edead becomes ad)
include/asm-generic/cmpxchg-local.h:33:34: warning: cast truncates bits from constant value (5f0ecafe becomes cafe)
include/asm-generic/cmpxchg-local.h:34:44: warning: cast truncates bits from constant value (5f0edead becomes dead)

The cast of 'new' was already triggering this warning before, but
now sparse also warns about the cast of 'old', so the bot thinks
this is a new problem.
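
As a reduced standalone example (not from the kernel tree, just to
illustrate the sparse behavior as I understand it): pushing a constant
through a truncating cast trips the check, while the equivalent mask
does not:

#define MAGIC 0x5f0ecafe	/* stand-in for the erofs constant */

static unsigned char cast_version(void)
{
	/* sparse: cast truncates bits from constant value (5f0ecafe becomes fe) */
	return (unsigned char)MAGIC;
}

static unsigned char mask_version(void)
{
	/* same result (0xfe), but no truncation warning */
	return MAGIC & 0xff;
}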

I managed to silence the warning by using a bitwise '&' instead of
the cast, but I wonder if it would be better to also mask the value
in the caller, where arch_atomic_cmpxchg() with its signed arguments
calls into arch_cmpxchg() with its unsigned arguments:

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 04b8be9f1a77..e271d6708c87 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -130,7 +130,7 @@ ATOMIC_OP(xor, ^)
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))

-#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (v)))
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (old), (new)))
+#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (u32)(v)))
+#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new)))

#endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/asm-generic/cmpxchg-local.h b/include/asm-generic/cmpxchg-local.h
index c3e7315b7c1d..f9d52d1f0472 100644
--- a/include/asm-generic/cmpxchg-local.h
+++ b/include/asm-generic/cmpxchg-local.h
@@ -26,20 +26,20 @@ static inline unsigned long __generic_cmpxchg_local(volatile void *ptr,
raw_local_irq_save(flags);
switch (size) {
case 1: prev = *(u8 *)ptr;
- if (prev == (u8)old)
- *(u8 *)ptr = (u8)new;
+ if (prev == (old & 0xffu))
+ *(u8 *)ptr = (new & 0xffu);
break;
case 2: prev = *(u16 *)ptr;
- if (prev == (u16)old)
- *(u16 *)ptr = (u16)new;
+ if (prev == (old & 0xffffu))
+ *(u16 *)ptr = (new & 0xffffu);
break;
case 4: prev = *(u32 *)ptr;
- if (prev == (u32)old)
- *(u32 *)ptr = (u32)new;
+ if (prev == (old & 0xffffffffu))
+ *(u32 *)ptr = (new & 0xffffffffu);
break;
case 8: prev = *(u64 *)ptr;
if (prev == old)
- *(u64 *)ptr = (u64)new;
+ *(u64 *)ptr = new;
break;
default:
wrong_size_cmpxchg(ptr);
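
For reference, here is a sketch of the failure mode that Matt's patch
addresses, with illustrative values only, assuming a 64-bit machine
where 'unsigned long' is 64 bits wide (cmp_demo is just a scratch
function, not proposed code):

static int cmp_demo(u32 *ptr)
{
	int old = -1;              /* e.g. from atomic_dec_unless_positive() */
	unsigned long uold = old;  /* sign-extended: 0xffffffffffffffff */
	unsigned long prev = *ptr; /* zero-extended: at most 0xffffffff */

	/* never true, even when the stored value is 0xffffffff: */
	if (prev == uold)
		return 1;

	/* true when the stored value is 0xffffffff, and the mask also
	 * avoids sparse's truncation warning for constant arguments:
	 */
	if (prev == (uold & 0xffffffffu))
		return 2;

	return 0;
}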


Arnd