Re: [PATCH] x86: combine memmove FSRM and ERMS alternatives

From: Borislav Petkov
Date: Sat Jan 14 2023 - 11:17:42 EST


On Sat, Jan 14, 2023 at 11:42:13AM +0100, Borislav Petkov wrote:
> Or, alternatively (pun intended), you can do what copy_user_generic() does and
> move all that logic into C and inline asm. Which I'd prefer, actually, instead of
> doing ugly asm hacks. Depends on how ugly it gets...
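
(For illustration, a very rough sketch of what that C variant could look
like - hypothetical names, not kernel code, and only the forward,
non-overlapping direction; in the kernel proper the feature tests would be
static_cpu_has()/alternatives so they patch down to straight-line code:)

#include <stddef.h>
#include <stdbool.h>

/* stand-ins for the X86_FEATURE_FSRM/ERMS tests, illustration only */
static bool have_fsrm, have_erms;

/* hypothetical fallback covering the small-size and movsq paths */
extern void *memmove_generic_fwd(void *dst, const void *src, size_t n);

static inline void *memmove_fwd(void *dst, const void *src, size_t n)
{
        void *ret = dst;

        /* FSRM: rep movsb for any size; ERMS: only from 32 bytes up */
        if (have_fsrm || (have_erms && n >= 0x20)) {
                asm volatile("rep movsb"
                             : "+D" (dst), "+S" (src), "+c" (n)
                             : : "memory");
                return ret;
        }
        return memmove_generic_fwd(dst, src, n);
}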

Alternative #2: you can do the below as a minimal fix for stable, along with
an explanation of what we're doing there and why, and then do the other
things I suggested - if you'd like, that is - later and with no pressure.

Thx.

---
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 02661861e5dd..d6ffb4164cdb 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -38,10 +38,9 @@ SYM_FUNC_START(__memmove)
cmp %rdi, %r8
jg 2f

- /* FSRM implies ERMS => no length checks, do the copy directly */
.Lmemmove_begin_forward:
ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
- ALTERNATIVE "", "jmp .Lmemmove_erms", X86_FEATURE_ERMS
+ ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "jmp .Lmemmove_erms", X86_FEATURE_ERMS

/*
* movsq instruction have many startup latency
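
For reference, my reading of how the combined ALTERNATIVEs above resolve
after patching, written as a plain C sketch (made-up helper names, not the
kernel code):

#include <stddef.h>
#include <stdbool.h>

/*
 * Made-up flags/helpers standing in for the patched-in feature checks
 * and the actual copy routines, purely to show the control flow.
 */
extern bool cpu_has_fsrm, cpu_has_erms;
extern void rep_movsb_copy(void *dst, const void *src, size_t n);
extern void small_copy(void *dst, const void *src, size_t n);
extern void movsq_copy(void *dst, const void *src, size_t n);

void memmove_fwd_dispatch(void *dst, const void *src, size_t n)
{
        if (cpu_has_fsrm) {
                /* FSRM: no length check, straight to rep movsb */
                rep_movsb_copy(dst, src, n);
        } else if (n < 0x20) {
                /* non-FSRM: keep the 32-byte small-copy cutoff ("jb 1f") */
                small_copy(dst, src, n);
        } else if (cpu_has_erms) {
                /* ERMS only: "jmp .Lmemmove_erms", i.e. rep movsb */
                rep_movsb_copy(dst, src, n);
        } else {
                /* neither: fall through to the movsq based copy loop */
                movsq_copy(dst, src, n);
        }
}

(CPUs with neither feature end up with the cmp/jb pair twice in the patched
code - redundant, but harmless, and that is the price of keeping the two
ALTERNATIVEs independent of each other.)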


--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette