Andrea Arcangeli wrote:
> >I don't know how you reach this conclusion. Can you explain?
> >
> >I can see that if you never receive requests during holdoff then of
> >course you lose. [..]
>
> You answered your question yourself.
>
> Consider the worst case of the heuristic. Its worst case is definitely the
> common simple case. Harming it is a no-go IMHO.
That's what I don't see: that the worst case is the _common_ simple
case. Sure, it's the simpler case, but I see random paging as a common
case, and random paging is the best case for this heuristic.
You _might_ be right, but do you know for sure? I.e., has it ever been
measured?
Two more things:
- Using numbers plucked from the air, the heuristic's worst case will
  reduce throughput by at most 2%, and its best case can improve
  throughput by several times. So the "typical average behaviour" may
  improve even if the heuristic doesn't reorder requests most of the
  time. (E.g. with those numbers, if the heuristic wins a 3x gain just
  10% of the time and costs 2% the rest of the time, expected
  throughput is about 0.9 * 0.98 + 0.1 * 3 = 1.18x the baseline.)
- You can make it self-tuning w.r.t. whether it is worth doing,
  according to recent access patterns. To do this, note that if a
  request arrives within 100us of a long seek request being issued,
  and the request which arrived would have been a short seek, then the
  heuristic would have been a gain. It may be that monitoring the
  occurrence of that predicts the future likelihood of the heuristic
  being a gain. (Again, needs measurement. A rough sketch of the
  bookkeeping is below.)
In this way even the worst case you are thinking of is avoided.
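To make the self-tuning idea concrete, here is a minimal sketch of the
bookkeeping only, not the real elevator code: every name and constant
in it (seek_monitor, HOLDOFF_US, the 4-of-16 threshold, the
microsecond timestamps) is invented for illustration. Each long-seek
issue shifts in a "miss"; if a short-seek request arrives inside the
window, that slot is promoted to a "hit"; the holdoff is considered
worthwhile only when enough recent slots are hits.

    #include <stdbool.h>

    /* Illustrative sketch only -- names and constants are made up,
     * not taken from the real elevator code. Timestamps are assumed
     * to come from some microsecond-resolution clock. */

    #define HOLDOFF_US   100  /* window after a long seek is issued */
    #define HISTORY_BITS 16   /* remember the last 16 long seeks    */

    struct seek_monitor {
            unsigned long long last_long_seek; /* time of last issue */
            unsigned int history;              /* recent outcome bits */
    };

    /* Call when a long-seek request is actually issued to the drive. */
    static void note_long_seek(struct seek_monitor *m,
                               unsigned long long now_us)
    {
            m->last_long_seek = now_us;
            /* Record a miss for now; promoted to a hit if a short
             * seek shows up inside the window. */
            m->history = (m->history << 1) & ((1u << HISTORY_BITS) - 1);
    }

    /* Call when any new request arrives. */
    static void note_arrival(struct seek_monitor *m,
                             unsigned long long now_us, bool short_seek)
    {
            if (short_seek && now_us - m->last_long_seek < HOLDOFF_US)
                    m->history |= 1; /* holding off would have won */
    }

    /* Worth holding off?  Require, say, 4 recent wins out of 16. */
    static bool holdoff_worthwhile(const struct seek_monitor *m)
    {
            unsigned int h = m->history, hits = 0;

            while (h) {
                    hits += h & 1;
                    h >>= 1;
            }
            return hits >= 4;
    }

Whether 4 hits out of 16 is the right threshold is exactly the sort of
thing that needs measuring, per the above.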
-- Jamie