Re: Interesting scheduling times - NOT

Larry McVoy (lm@bitmover.com)
Tue, 22 Sep 1998 09:43:46 -0600


: No, your claim is that my testcode is flawed. I have used both pipe
: and yielding techniques and I get similar variances. You claim that
: because you don't see the variances and I do, that my testcode is
: flawed. It doesn't work that way. Just because you don't measure it
: and I do doesn't mean my test is flawed. Your testing environment may
: be different than mine.

Unless you are running your test on a multi-user system with lots of
background activity (which would be insane), there is no difference.
I run my tests on a machine running X, with a performance monitor that
updates every second, etc., and I don't see anything like what you are
seeing.

: It only takes one run to lower the minimum. In your test, taking the
: median makes you insensitive to the effect I described.

Just to put to rest the idea that maybe the median is covering things
up, here's the full data set for the two-process case: the median,
followed by each of the eleven individual runs, in usecs. Note the
small standard deviation:

2 7.58 (7.74 7.65 7.63 7.60 7.60 7.58 7.58 7.58 7.55 7.54 7.53)

Here's the same thing with each run taking 500 milliseconds (so a total of
about 6 seconds of run time):

2 7.73 (8.38 8.14 8.13 8.00 7.94 7.73 7.62 7.60 7.46 7.33 7.04)
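If anyone suspects the summary is hiding something, here's a quick
sketch (just the textbook formulas, nothing from lmbench itself) that
recomputes the median and standard deviation from the eleven runs in
the first data set:

#include <math.h>
#include <stdio.h>

int
main(void)
{
	/* the eleven runs from the first data set above, in usecs */
	double runs[] = { 7.74, 7.65, 7.63, 7.60, 7.60, 7.58,
			  7.58, 7.58, 7.55, 7.54, 7.53 };
	int n = sizeof(runs) / sizeof(runs[0]);
	double sum = 0.0, var = 0.0, mean;
	int i;

	for (i = 0; i < n; i++)
		sum += runs[i];
	mean = sum / n;
	for (i = 0; i < n; i++)
		var += (runs[i] - mean) * (runs[i] - mean);
	var /= n;
	/* runs[] is already sorted, so the middle element is the median */
	printf("median %.2f  mean %.3f  stddev %.3f (usecs)\n",
	    runs[n / 2], mean, sqrt(var));
	return (0);
}

Compile with -lm; it reports a standard deviation of about .06 usecs
around the 7.58 usec median - well under 1%.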

: Note: in my tests, I see substantial variance mainly with the process
: switching test, not the thread switching test. This is particularly
: the case now that Linus posted the FPU saving fix.
: On a PPro 180 I'm seeing minimum process switch times of 4.8 us to
: 8.5 us. That's a 77% increase. I think that variance is real, and not
: an artefact of my test code.

See the above numbers and explain the lack of variance.

: No, again, my benchmark is not flawed. Look, you are trying to do
: something different with your benchmark. Your focus is to compare
: between different OSes and to see what the "normal" context switch
: time is.

It's perfectly fine that you want to do something else. I have no
problem with your goals but serious problems with your methodology.
The problem comes down to an apples-to-apples comparison: when you run your
pipe version with no background processes, you should be able to duplicate
my results very closely. But you can't - you get this huge variance.
Until that part of your benchmark is fixed, I, for one, am unwilling
to even consider any other part of your results - I have no reason to
believe them and a substantial reason not to believe them.
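For reference, the pipe technique in question is nothing exotic. Here
is a minimal sketch (assuming POSIX pipes and gettimeofday; this is
NOT the actual lmbench lat_ctx code, and it leaves out the warm-up
and cache-footprint control a careful benchmark needs). Two processes
bounce one byte back and forth; each round trip costs two context
switches, so the per-switch cost is roughly elapsed / (2 * ROUNDS):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define ROUNDS	10000

int
main(void)
{
	int p2c[2], c2p[2];		/* parent->child, child->parent */
	char token = 'x';
	struct timeval start, stop;
	double usecs;
	int i;

	if (pipe(p2c) == -1 || pipe(c2p) == -1) {
		perror("pipe");
		exit(1);
	}
	switch (fork()) {
	    case -1:
		perror("fork");
		exit(1);
	    case 0:			/* child: echo the token back */
		for (i = 0; i < ROUNDS; i++) {
			read(p2c[0], &token, 1);
			write(c2p[1], &token, 1);
		}
		exit(0);
	    default:			/* parent: drive and time the loop */
		gettimeofday(&start, 0);
		for (i = 0; i < ROUNDS; i++) {
			write(p2c[1], &token, 1);
			read(c2p[0], &token, 1);
		}
		gettimeofday(&stop, 0);
		wait(0);
	}
	usecs = (stop.tv_sec - start.tv_sec) * 1000000.0 +
	    (stop.tv_usec - start.tv_usec);
	printf("%.2f usecs per context switch\n", usecs / (2.0 * ROUNDS));
	return (0);
}

Run something like that on an idle machine and the run-to-run numbers
should be tight; if they aren't, the variance is coming from the test
setup, not the scheduler.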

: that case seeing cache-induced variance is good, because it can expose

How many times before it sinks in: 77% variance is not cache-induced. If
that were true, then nothing would be deterministic. You wouldn't be able
to say "time make" and expect to get anything like the same number two
times in a row, yet people do that all the time.
