[PATCH] x86-64: handle byte-wise tail copying in memcpy() without a loop

From: Jan Beulich
Date: Thu Jan 26 2012 - 10:54:21 EST


While the effect is hard to measure, reducing the number of possibly (or even
likely) mis-predicted branches can generally be expected to perform slightly
better.

Contrary to what might appear at first glance, this also doesn't grow the
function size (the alignment gap to the next function just gets smaller).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

---
arch/x86/lib/memcpy_64.S | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)

--- 3.3-rc1/arch/x86/lib/memcpy_64.S
+++ 3.3-rc1-x86_64-memcpy-tail/arch/x86/lib/memcpy_64.S
@@ -169,18 +169,19 @@ ENTRY(memcpy)
 	retq
 	.p2align 4
 .Lless_3bytes:
-	cmpl $0, %edx
-	je .Lend
+	subl $1, %edx
+	jb .Lend
 	/*
 	 * Move data from 1 bytes to 3 bytes.
 	 */
-.Lloop_1:
-	movb (%rsi), %r8b
-	movb %r8b, (%rdi)
-	incq %rdi
-	incq %rsi
-	decl %edx
-	jnz .Lloop_1
+	movzbl (%rsi), %ecx
+	jz .Lstore_1byte
+	movzbq 1(%rsi), %r8
+	movzbq (%rsi, %rdx), %r9
+	movb %r8b, 1(%rdi)
+	movb %r9b, (%rdi, %rdx)
+.Lstore_1byte:
+	movb %cl, (%rdi)
 
 .Lend:
 	retq
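
For readers following the assembly, a minimal C sketch of what the replacement
tail path does (the helper name copy_tail is purely illustrative and not part
of the patch): with 1 to 3 bytes left, the first, second and last byte are each
copied individually; for a 2-byte copy the latter two stores simply hit the
same location, so no loop and no backward branch remain.

	/* Illustrative only - C equivalent of the patched 0..3 byte tail. */
	static void copy_tail(unsigned char *dst, const unsigned char *src,
			      unsigned long len)
	{
		unsigned char first, second, last;

		if (len == 0)			/* subl $1,%edx ; jb .Lend */
			return;
		first = src[0];			/* movzbl (%rsi),%ecx */
		if (len > 1) {			/* jz .Lstore_1byte when len == 1 */
			second = src[1];	/* movzbq 1(%rsi),%r8 */
			last = src[len - 1];	/* movzbq (%rsi,%rdx),%r9 */
			dst[1] = second;	/* movb %r8b,1(%rdi) */
			dst[len - 1] = last;	/* movb %r9b,(%rdi,%rdx) */
		}
		dst[0] = first;			/* movb %cl,(%rdi) */
	}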


