Re: [GIT PULL] x86/asm changes for v5.6

From: Linus Torvalds
Date: Tue Jan 28 2020 - 14:52:32 EST


On Tue, Jan 28, 2020 at 8:59 AM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> - Add support for "Fast Short Rep Mov", which is available starting with
> Ice Lake Intel CPUs - and make the x86 assembly version of memmove()
> use REP MOV for all sizes when FSRM is available.

Pulled. However, this seems rather non-optimal:

ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS

in that it leaves unnecessary NOPs there as alternatives.
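
To illustrate (a rough sketch of the patched result on an FSRM part,
where FSRM implies ERMS; the exact NOP padding is up to the
alternatives machinery, so the byte details here are assumptions):

        nop; nop; ...           # padding where "cmp $0x20, %rdx; jb 1f" was
        movq %rdx, %rcx         # ERMS replacement patched over the empty slot
        rep movsb
        retq

so you execute a run of pointless NOPs before ever reaching the
rep movsb.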

We have "ALTERNATIVE_2", so we can do the above in one thing, _and_
move the rep-movsq testing code into there too:

ALTERNATIVE_2 \
"cmp $680, %rdx ; jb 3f ; cmpb %dil, %sil; je 4f", \
"movq %rdx, %rcx ; rep movsb; retq", X86_FEATURE_FSRM, \
"cmp $0x20, %rdx; jb 1f; movq %rdx, %rcx; rep movsb;
retq", X86_FEATURE_ERMS

which avoids the unnecessary NOPs.
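
Spelled out per CPU class, the intent is roughly this (again just a
sketch of the patched result, modulo padding details):

        # FSRM:
        movq %rdx, %rcx
        rep movsb
        retq

        # ERMS, no FSRM:
        cmp $0x20, %rdx
        jb 1f                   # sizes < 32 take the small-copy path
        movq %rdx, %rcx
        rep movsb
        retq

        # neither:
        cmp $680, %rdx
        jb 3f                   # small/medium sizes take the 32-byte register loop
        cmpb %dil, %sil
        je 4f                   # low bytes of dest/source match: movsq path

Each case gets its own dispatch, with no dead NOP slots on the hot
paths.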

I dunno. It doesn't much matter, but we _do_ have things to do for
all three cases, and it actually makes sense to move all three
"use rep movs" cases into the ALTERNATIVE. No?

UNTESTED patch attached, but visually it seems to generate better code
and fewer unnecessary NOPs (I get just two bytes of NOP with this for
the non-FSRM/ERMS case).
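
(If you want to double-check the padding statically: the default
sequence plus its NOP fill lands in .text and the replacements in
.altinstr_replacement, so something like

        objdump -d arch/x86/lib/memmove_64.o

on a built tree shows both sides; the runtime-patched result you'd
only see on a live kernel.)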

If somebody tests this out and commits it and writes a commit message,
they can claim authorship. Or add my sign-off.

Linus
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 7ff00ea64e4f..e42bf35b9b62 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -39,23 +39,19 @@ SYM_FUNC_START(__memmove)
cmp %rdi, %r8
jg 2f

- /* FSRM implies ERMS => no length checks, do the copy directly */
+ /*
+ * Three rep-string alternatives:
+ * - go to "movsq" for large regions where source and dest are
+ * mutually aligned (same in low 8 bits). "label 4"
+ * - plain rep-movsb for FSRM
+ * - rep-movs for > 32 bytes for ERMS.
+ */
.Lmemmove_begin_forward:
- ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
- ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS
+ ALTERNATIVE_2 \
+ "cmp $680, %rdx ; jb 3f ; cmpb %dil, %sil; je 4f", \
+ "movq %rdx, %rcx ; rep movsb; retq", X86_FEATURE_FSRM, \
+ "cmp $0x20, %rdx; jb 1f; movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS

- /*
- * movsq instruction have many startup latency
- * so we handle small size by general register.
- */
- cmp $680, %rdx
- jb 3f
- /*
- * movsq instruction is only good for aligned case.
- */
-
- cmpb %dil, %sil
- je 4f
3:
sub $0x20, %rdx
/*