On Wed, Aug 06, 2003 at 02:48:11PM -0400, Timothy Miller wrote:
> Here's a kooky idea...
>
> I'm not sure about this detail, but I would guess that the interactivity
> schedulers are trying to determine relatively stable priority values for
> processes. A process does not instantly come to its correct priority
> level, because it gets there based on accumulation of behavioral patterns.
>
> Well, it occurs to me that we could benefit from situations where
> priority changes are underdamped. The results would sometimes be an
> oscillation in priority levels. In the short term, a given process may
> be given different amounts of CPU time when it is run, although in the
> long term, it should average out.
>
Possibly a good idea...
> At the same time, certain tasks can only be judged correctly over the
> long term, like X, for example. Its long-term behavior is interactive,
> but now and then, it will become a CPU hog, and we want to LET it.
>
Possibly, but getting this detection right isn't easy. There
are many other cases where a process such as the compiler does
so much disk access that it looks interactive for a while, but
we definitely don't want it to become a hog at interactive
priority - ever.
There are also those who think that if X ever has that much to do
(usually when graphics acceleration is not available) then it had
better wait, because the X user isn't the only one waiting anyway.
(I.e. the machine is also a web/db/login server
or some such - so other people are waiting as well.)
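
To make the detection problem concrete, here is a toy userspace sketch
(not the real scheduler code - the names, structure and constants are
all invented) of a sleep-ratio heuristic: a compiler blocked on disk
earns roughly the same "interactive" bonus as an editor blocked on the
keyboard, which is exactly the misclassification described above.

/*
 * Toy illustration (not actual kernel code) of why disk-bound jobs
 * are hard to tell apart from interactive ones: a heuristic that
 * rewards time spent sleeping rewards a compiler waiting on disk
 * almost as much as an editor waiting on the keyboard.
 */
#include <stdio.h>

#define MAX_BONUS 10	/* hypothetical cap on the interactivity bonus */

struct task_sample {
	const char *name;
	unsigned long sleep_ms;	/* time spent blocked (disk, tty, ...) */
	unsigned long run_ms;	/* time spent on the cpu */
};

/* Bonus grows with the fraction of time spent sleeping. */
static int interactivity_bonus(const struct task_sample *t)
{
	unsigned long total = t->sleep_ms + t->run_ms;

	if (!total)
		return 0;
	return (int)(MAX_BONUS * t->sleep_ms / total);
}

int main(void)
{
	struct task_sample editor   = { "editor", 950,  50 };	/* waits on user */
	struct task_sample compiler = { "cc1",    700, 300 };	/* waits on disk */

	/* Both earn a large bonus; sleep time alone can't tell them apart. */
	printf("%s bonus: %d\n", editor.name, interactivity_bonus(&editor));
	printf("%s bonus: %d\n", compiler.name, interactivity_bonus(&compiler));
	return 0;
}
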
> The idea I'm proposing, however poorly formed, is that if we allow some
> "excessive" oscillation early on in the life of a process, we may be
> able to get a process NEAR its correct priority more quickly, OR get
> its CPU time over the course of, say, three runs in the
> underdamped case to be about the same as it would be if we knew in
> advance what the priority should be. But in the underdamped case, the
> priority would continue to oscillate up and down around the correct
> level, because we are intentionally overshooting the mark each time we
> adjust priority.
More oscillation at the start than later on will behave as you describe;
whether we want that is another issue. Processes _can_
change between being hogs and being interactive, and we want the
scheduler to pick up that fact quickly.
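
For what it's worth, a rough userspace sketch of the kind of early
overshoot being proposed (invented code and constants, not a patch):
the gain starts above 100%, so the estimate rings around the target
for the first few updates and then settles into plain damping.

/*
 * Sketch of an "underdamped" priority adjustment: each update moves
 * the estimate past the measured target by an overshoot factor that
 * is large for a young task and decays toward 100% later on.
 */
#include <stdio.h>

/* overshoot gain in percent: starts at 150% and decays to 100% */
static int gain_pct(unsigned int nr_updates)
{
	return nr_updates < 8 ? 150 - 6 * nr_updates : 100;
}

/* Move 'prio' toward 'target', deliberately overshooting early on. */
static int adjust_prio(int prio, int target, unsigned int nr_updates)
{
	int error = target - prio;

	return prio + error * gain_pct(nr_updates) / 100;
}

int main(void)
{
	int prio = 0, target = 20;	/* arbitrary units */
	unsigned int i;

	for (i = 0; i < 10; i++) {
		prio = adjust_prio(prio, target, i);
		printf("update %u: prio %d\n", i, prio);
	}
	return 0;
}
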
An example of such a change: a word processor is mostly interactive.
Sometimes it does a heavy job like typesetting an entire book, though;
this is cpu intensive and should be scheduled as such, even if the word
processor has been "interactive" for a week. (I never close it, linux
is so stable...) It should do its heavy work at low priority
so it doesn't interfere with the interactive work (or gameplaying)
I do while waiting for the .pdf to appear.
I definitely don't want a 5-minute
typesetting run to monopolize the cpu just because the task
has been sooo nice over the last week.
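
One way to express that requirement, again as a made-up userspace
sketch rather than the real interactivity estimator: hard-cap the
accumulated "interactive" credit, so a task that has been nice for a
week is demoted just as fast as one that has been nice for a minute
once it starts burning cpu.

/*
 * Sketch of why the stored interactivity credit needs a hard cap:
 * with the cap, past good behaviour can only buy a bounded amount
 * of cpu burn before the task stops counting as interactive.
 */
#include <stdio.h>

#define CREDIT_MAX 100	/* hypothetical ceiling on stored credit */

struct task_credit {
	long credit;	/* grows while sleeping, drains while running */
};

static void task_sleeps(struct task_credit *t, long ticks)
{
	t->credit += ticks;
	if (t->credit > CREDIT_MAX)
		t->credit = CREDIT_MAX;	/* a week of niceness buys no more than this */
}

static void task_runs(struct task_credit *t, long ticks)
{
	t->credit -= ticks;
	if (t->credit < 0)
		t->credit = 0;
}

/* Treat the task as interactive only while it still has credit. */
static int is_interactive(const struct task_credit *t)
{
	return t->credit > 0;
}

int main(void)
{
	struct task_credit wp = { 0 };

	task_sleeps(&wp, 1000000);	/* "interactive for a week" */
	task_runs(&wp, 200);		/* the typesetting run starts */
	printf("still interactive after 200 ticks of cpu burn? %s\n",
	       is_interactive(&wp) ? "yes" : "no");
	return 0;
}
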
Helge Hafting