Re: [PATCH] lib/raid6: Add AVX2 optimized recovery functions

From: Andi Kleen
Date: Thu Nov 29 2012 - 15:09:59 EST


Jim Kukunas <james.t.kukunas@xxxxxxxxxxxxxxx> writes:
> +
> + /* ymm7 = x0f[32] */
> + asm volatile("vpbroadcastb %0, %%ymm7" : : "m" (x0f));
> +
> + while (bytes) {
> +#ifdef CONFIG_X86_64
> + asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]));
> + asm volatile("vmovdqa %0, %%ymm9" : : "m" (q[32]));
> + asm volatile("vmovdqa %0, %%ymm0" : : "m" (p[0]));
> + asm volatile("vmovdqa %0, %%ymm8" : : "m" (p[32]));

It is somewhat dangerous to assume that registers are not changed
between separate assembler statements, or that the statements
themselves are not reordered. Better to always put such values into
explicit variables, or to merge the dependent statements into a single
asm statement.
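
For example, the four loads above could be combined into one statement
(an untested sketch using the operands from your hunk; the same would
apply to the rest of the loop body):

	asm volatile("vmovdqa %0, %%ymm1\n\t"
		     "vmovdqa %1, %%ymm9\n\t"
		     "vmovdqa %2, %%ymm0\n\t"
		     "vmovdqa %3, %%ymm8"
		     : /* no outputs */
		     : "m" (q[0]), "m" (q[32]), "m" (p[0]), "m" (p[32]));

With a single statement the compiler cannot schedule anything in
between, and the inputs are at least visible to it as operands.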

asm volatile alone is also not enough to prevent reordering against
surrounding memory accesses. If anything you would need a memory
clobber.
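
That is, something like this (again only a sketch):

	asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]) : "memory");

The "memory" clobber tells gcc that the statement may read or write
memory beyond the listed operands, so it will not cache memory values
in registers across it or move other memory accesses past it.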

-Andi


--
ak@xxxxxxxxxxxxxxx -- Speaking for myself only