Re: [PATCH 1/3] x86: Move msr accesses out of line

From: Andi Kleen
Date: Mon Feb 23 2015 - 12:43:58 EST


On Mon, Feb 23, 2015 at 06:04:36PM +0100, Peter Zijlstra wrote:
> On Fri, Feb 20, 2015 at 05:38:55PM -0800, Andi Kleen wrote:
>
> > This patch moves the MSR functions out of line. A MSR access is typically
> > 40-100 cycles or even slower, a call is a few cycles at best, so the
> > additional function call is not really significant.
>
> If I look at the below PDF a CALL+PUSH EBP+MOV RSP,RBP+ ... +POP+RET
> ends up being 5+1.5+0.5+ .. + 1.5+8 = 16.5 + .. cycles.

You cannot just add up the latency cycles. The CPU executes all of
this in parallel.

Latency cycles would only be interesting if these instructions were
on the critical path for computing the result, which they are not.

The overhead should be only a few cycles.

BTW if you really worry about perf overhead you could gain far more
(in some cases milliseconds) by applying
http://comments.gmane.org/gmane.linux.kernel/1805207

-Andi

--
ak@xxxxxxxxxxxxxxx -- Speaking for myself only