Re: [PATCHv9 02/16] x86/alternatives: Disable LASS when patching kernel alternatives
From: Sohil Mehta
Date: Thu Jul 24 2025 - 22:36:22 EST
On 7/9/2025 9:58 AM, Dave Hansen wrote:
>> + * Avoid using memcpy() here. Instead, open code it.
>> + */
>> + asm volatile("rep movsb"
>> + : "+D" (dst), "+S" (src), "+c" (len) : : "memory");
>> +
>> + lass_clac();
>> }
>
> This didn't turn out great. At the _very_ least, we could have a:
>
> inline_memcpy_i_really_mean_it()
>
> with the rep mov. Or even a #define if we were super paranoid the
> compiler is out to get us.
>
> But _actually_ open-coding inline assembly is far too ugly to live.

It looks like we should go back to the __inline_memcpy()/__inline_memset()
implementation that PeterZ had initially proposed. It seems to fit all
the requirements, right? Patch attached.

https://lore.kernel.org/lkml/20241028160917.1380714-3-alexander.shishkin@xxxxxxxxxxxxxxx/
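For reference, a minimal sketch of how the call site from the quoted
hunk could then look. This is an illustration, not part of the attached
patch; text_poke_memcpy() and lass_stac() are names assumed from the
surrounding series rather than confirmed here:

	/*
	 * Hypothetical call site: suppress LASS enforcement around a
	 * single inlined rep movsb instead of open-coding the asm.
	 */
	static void text_poke_memcpy(void *dst, const void *src, size_t len)
	{
		lass_stac();			/* assumed counterpart of lass_clac() */
		__inline_memcpy(dst, src, len);
		lass_clac();			/* matches the quoted hunk above */
	}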
From eb3b45b377df90d3b367e2b3fddfff1a72624a4e Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Mon, 28 Oct 2024 18:07:50 +0200
Subject: [PATCH] x86/asm: Introduce inline memcpy and memset

Provide inline memcpy and memset functions that can be used instead of
the GCC builtins whenever necessary.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Sohil Mehta <sohil.mehta@xxxxxxxxx>
---
arch/x86/include/asm/string.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
index c3c2c1914d65..9cb5aae7fba9 100644
--- a/arch/x86/include/asm/string.h
+++ b/arch/x86/include/asm/string.h
@@ -1,6 +1,32 @@
/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_STRING_H
+#define _ASM_X86_STRING_H
+
#ifdef CONFIG_X86_32
# include <asm/string_32.h>
#else
# include <asm/string_64.h>
#endif
+
+static __always_inline void *__inline_memcpy(void *to, const void *from, size_t len)
+{
+ void *ret = to;
+
+ asm volatile("rep movsb"
+ : "+D" (to), "+S" (from), "+c" (len)
+ : : "memory");
+ return ret;
+}
+
+static __always_inline void *__inline_memset(void *s, int v, size_t n)
+{
+ void *ret = s;
+
+ asm volatile("rep stosb"
+ : "+D" (s), "+c" (n)
+ : "a" ((uint8_t)v)
+ : "memory");
+ return ret;
+}
+
+#endif /* _ASM_X86_STRING_H */
--
2.43.0
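
Since the attached patch only adds header helpers, here is a quick
user-space smoke test (illustrative only, not part of the patch) that
exercises the same rep movsb/rep stosb sequences. It assumes GCC or
Clang on x86-64:

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	/* User-space copies of the patch's helpers for a quick sanity check. */
	static inline __attribute__((always_inline))
	void *__inline_memcpy(void *to, const void *from, size_t len)
	{
		void *ret = to;

		asm volatile("rep movsb"
			     : "+D" (to), "+S" (from), "+c" (len)
			     : : "memory");
		return ret;
	}

	static inline __attribute__((always_inline))
	void *__inline_memset(void *s, int v, size_t n)
	{
		void *ret = s;

		asm volatile("rep stosb"
			     : "+D" (s), "+c" (n)
			     : "a" ((uint8_t)v)
			     : "memory");
		return ret;
	}

	int main(void)
	{
		char src[16] = "rep movsb test!";
		char dst[16];

		__inline_memcpy(dst, src, sizeof(src));
		printf("copy: %s\n", dst);		/* prints the copied string */

		__inline_memset(dst, '#', sizeof(dst) - 1);
		dst[sizeof(dst) - 1] = '\0';
		printf("set:  %s\n", dst);		/* prints 15 '#' characters */
		return 0;
	}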