Re: Context switch times

From: Linus Torvalds (torvalds@transmeta.com)
Date: Thu Oct 04 2001 - 17:42:37 EST


In article <20011004175526.C18528@redhat.com>,
Benjamin LaHaise <bcrl@redhat.com> wrote:
>On Thu, Oct 04, 2001 at 02:52:39PM -0700, David S. Miller wrote:
>> So the FPU hit is only before/after the runs, not during each and
>> every iteration.
>
>Right. Plus, the original mail mentioned that it was hitting all 8
>CPUs, which is a pretty good example of braindead scheduler behaviour.

Careful.

That's not actually true (the braindead part, that is).

We went through this with Ingo and Larry McVoy, and the sad fact is that
to get the best numbers for lmbench, you simply have to do the wrong
thing.

Could we try to hit just two? Probably, but it doesn't really matter:
to make the lmbench scheduler benchmark go at full speed, you want to
limit it to _one_ CPU, which is not sensible in real-life situations.
The amount of concurrency in the context switching benchmark is pretty
small, and does not make up for bouncing the locks etc between CPUs.

However, that lack of concurrency in lmbench is entirely due to the
artificial nature of the benchmark, and the bigger-footprint scheduling
tests (which aren't reported very much in the summary) are more realistic.

So 2.4.x took the (painful) road of saying that we care less about that
particular benchmark than about some other more realistic loads.

                Linus



This archive was generated by hypermail 2b29 : Sun Oct 07 2001 - 21:00:34 EST