Note that you made the unfortunate "out-of-context" cut that I really want
to clarify, so that people are really aware of it.
It's really easy to generate a run-queue that is arbitrarily deep. So
before people tell me to run 50 processes that just have a "for (;;)" loop
in them, yes I know. And it's even "normal" behaviour under certain load,
especially in physics etc., where you have a lot of very CPU-intensive
loads.
So when I say "seldom more than a few entries deep", it's not really
something I believe to be true all the time. The other part of the
equation is the "scheduling time is significant" - if the scheduling time
is not very significant then it doesn't matter how we handle the
run-queue.
So take my above assertion more on the lines of "it's almost unheard of to
have a deep run-queue _and_ a significant scheduling overhead". Because if
you have lots of runnable processes, scheduling itself is not what the CPU
tends to be doing: most CPU-time by far is spent actually running the
processes themselves.
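A rough back-of-the-envelope sketch of that argument (the per-entry scan cost and the timeslice length below are illustrative assumptions, not measurements of any real kernel): even with a deep run-queue, one linear pick-next scan per timeslice is lost in the noise next to the time spent actually running the processes.

```python
# Illustrative sketch: fraction of CPU time spent scanning a single
# run-queue, assuming one full O(n) scan per timeslice.  The per-entry
# cost (100 ns) and timeslice (10 ms) are assumed ballpark figures.

def scheduling_overhead(runnable, per_entry_ns=100, timeslice_ms=10):
    """Return the fraction of CPU time spent on the run-queue scan."""
    scan_ns = runnable * per_entry_ns       # linear pick-next scan
    timeslice_ns = timeslice_ms * 1_000_000
    return scan_ns / timeslice_ns

# A few runnable processes: the scan is essentially free.
print(scheduling_overhead(3))    # 0.003% of CPU time
# Even 50 CPU-bound "for (;;)" loops barely register.
print(scheduling_overhead(50))   # 0.05% of CPU time
```

Under these assumptions, the scan only becomes a measurable fraction of CPU time at run-queue depths far beyond anything a loaded machine normally sees, which is the sense in which a deep run-queue and significant scheduling overhead rarely coincide.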
So my claim is really that we should optimize for the _few_ processes case
rather than for the many processes case. I know that's against what some
people are used to, but I just think that it's true. And that's why I
don't believe in multiple run-queues.
Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/