On Sat, Jul 19, 2025 at 2:15 PM Jesper Dangaard Brouer <hawk@xxxxxxxxxx> wrote:
> On 18/07/2025 17.05, Matt Fleming wrote:
> [...]
>> diff --git a/tools/testing/selftests/bpf/progs/lpm_trie_bench.c b/tools/testing/selftests/bpf/progs/lpm_trie_bench.c
>> new file mode 100644
>> index 000000000000..c335718cc240
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/lpm_trie_bench.c
>> @@ -0,0 +1,175 @@
>> [...]
>> +static __always_inline void atomic_inc(long *cnt)
>> +{
>> +	__atomic_add_fetch(cnt, 1, __ATOMIC_SEQ_CST);
>> +}
>> +
>> +static __always_inline long atomic_swap(long *cnt, long val)
>> +{
>> +	return __atomic_exchange_n(cnt, val, __ATOMIC_SEQ_CST);
>> +}
>
> For userspace includes we have similar defines in bench.h.
> Except they use __ATOMIC_RELAXED and here __ATOMIC_SEQ_CST.
> Which is the correct one to use?
>
> For the BPF kernel side, do the selftests have another header file that
> defines these `atomic_inc` and `atomic_swap` helpers?
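
For reference, the userspace helpers in bench.h are roughly like the sketch
below (relaxed ordering, as noted above; this is illustrative rather than a
verbatim copy of the header):

/* Userspace-side counter helpers, relaxed memory ordering (sketch). */
static inline void atomic_inc(long *value)
{
	(void)__atomic_add_fetch(value, 1, __ATOMIC_RELAXED);
}

static inline long atomic_swap(long *value, long n)
{
	return __atomic_exchange_n(value, n, __ATOMIC_RELAXED);
}

Relaxed ordering is fine there because the counters are only ever summed
for reporting, so there is no ordering dependency to protect.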

Actually, we can sidestep this problem completely by consistently
using __sync_fetch_and_add() for duration_ns and hits, and by removing
the atomic operations for DELETE, which doesn't need atomicity anyway
since only a single producer can run.
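
Roughly, the BPF program side would then look like the sketch below
(counter names such as duration_ns, hits and deletes are placeholders for
whatever the v2 patch actually uses):

#include <bpf/bpf_helpers.h>

/* Shared counters updated by multiple producers: __sync_fetch_and_add()
 * is lowered by clang to a BPF atomic add instruction.
 */
long duration_ns = 0;
long hits = 0;

/* Only a single producer drives DELETE, so a plain increment is enough. */
long deletes = 0;

static __always_inline void record_op(long delta_ns)
{
	__sync_fetch_and_add(&duration_ns, delta_ns);
	__sync_fetch_and_add(&hits, 1);
}

static __always_inline void record_delete(void)
{
	deletes++;
}

char _license[] SEC("license") = "GPL";

The lookup/update paths would call record_op() and the DELETE path
record_delete(), so the __atomic_* helpers can go away entirely.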
I'll send a v2.