Re: Schedule idle

yodaiken@chelm.cs.nmt.edu
Wed, 11 Nov 1998 06:46:39 -0700 (MST)


>
> On Wed, 11 Nov 1998, Richard Gooch wrote:
>
> > even if we rip out the RT scheduling classes and
> > not include SCHED_IDLE, the sleep-with-lock bug still presents a
> > problem. A heavily niced process (nice 19) which messes with the FS
> > can significantly degrade the performance of the FS for other
> > processes. Imagine the following scenario:
> >
> > - nice 19 process bangs on FS
> > - nice 4 process eats CPU
> > - normal users try to use FS.
> >
> > Here we don't get deadlock, but we do get lockouts while the nice 19
> > process goes to sleep holding a resource lock. Eventually the user
> > processes will get access to the FS again, but only after a long wait.
>
> Andrea, Victor, what should we do about this, instead
> of ridiculing things like SCHED_{FIFO,RR,IDLE} in the
> normal kernel?

I don't think this is a problem. Things work out the way they are
supposed to, and the delays are simply what happens on a heavily
loaded system. The fix, in this case, is to get more memory for a
bigger buffer cache or to get a faster drive.

> Maybe something like a flag (p->locks_held) which
> is upped on every lock acquired and lowered on each
> lock unlocked? Then we can give a certain scheduling
> preference to a process holding a lock so that the
> lock will be freed with huge certainty when
> (the process with the lock != last). That way deadlocks
> will be no longer than the time between two or three
> schedules.

There are two obstacles to making this work. First, as Ingo noted,
there are all sorts of locks scattered throughout the kernel proper and
in drivers. It would be a big job to find them all, fix them all, and
maintain this feature through code changes. The second problem is that it
would introduce all sorts of potential deadlocks and scheduling oddities
that will not be pleasant. If you want to see why I think this in some
detail, look at the paper on priority inheritance that is on my web site.
I'm happy to hear explanations of why I am wrong, but so far, I have not
seen any.
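Just to make the proposal above concrete: a minimal user-space sketch of
the suggested p->locks_held counter and a goodness() boost might look like
this. The struct, field names, and bonus value are all invented here for
illustration -- this is not real kernel code, and it deliberately ignores
the deadlock and priority-inheritance issues discussed above.

```c
#include <assert.h>

/* Hypothetical task structure: "counter" stands in for the remaining
   time slice, "locks_held" is the proposed per-process lock count. */
struct task {
    int counter;     /* remaining time slice */
    int locks_held;  /* kernel locks currently held */
};

/* Called from every lock/unlock site (the hard part: finding them all). */
void lock_acquired(struct task *p) { p->locks_held++; }
void lock_released(struct task *p) { p->locks_held--; }

/* Illustrative goodness(): a lock-holding task gets a large bonus so it
   is scheduled soon and can release the lock within a schedule or two. */
int goodness(const struct task *p)
{
    int weight = p->counter;
    if (p->locks_held > 0)
        weight += 1000;  /* arbitrary bonus, for illustration only */
    return weight;
}
```

With this, a nice-19 process holding an FS lock would out-rank a CPU hog
until it drops the lock -- which is exactly the priority-inversion
workaround whose side effects are discussed above.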

Finally, I thought about all this for a very long time before I came up with
the RTLinux idea. My conclusion was that it was not reasonable to believe
that I could change the underlying design paradigm of Linux, and that it was
better to get a clean separation of RT and non-RT by using the trick that
I use. Of course, just because I thought about it a long time is no assurance
that I was right, but there seem to me to be fundamental tradeoffs that
are completely different in RT and non-RT environments. On the other hand,
there is no reason why Linux processes cannot be made to schedule more
efficiently or with a better algorithm.
Also, there is no reason why RTLinux can't be made more convenient to use.
You might want to look at Modcomp to see what is involved in making UNIXs
themselves real-time -- it's a huge effort, and one that slows down
development of the main uses of the kernel.

>
> This could mean abandoning the low-latency multiple
> runqueue idea by Richard though, unless we make for
> an even more complex scheme. This is a shame, Richard's
> scheme really was an advantage for low-overhead switching.
>
> Btw, I've just come up with a new scheduling class
> (not in code, consider yourself happy) especially
> for multimedia apps. The idea is to give them a large
> bonus in goodness() but to charge them double on CPU
> used. This way those apps get better response time
> than normal apps but only half the CPU (when having
> to compete full-time). We could make that class
> available to normal users because the app only gains
> in responsiveness when it uses less CPU than half it's
> fair share. If it uses more it will have worse
> performance.

I assigned a project to some of the students in my OS class this fall to
build loadable-module schedulers so they can test different algorithms.
I'm not sure whether any of them will get it done, but it seems like a good
way to get some data on what to do.
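Rik's proposed multimedia class above could be prototyped along these
lines. Everything here is hypothetical -- SCHED_MM, the bonus size, and
the tick accounting are made up for illustration -- but it captures the
stated tradeoff: a goodness() bonus for responsiveness, paid for by
charging double for CPU time used.

```c
#include <assert.h>

#define SCHED_OTHER 0
#define SCHED_MM    1  /* hypothetical multimedia scheduling class */

struct mmtask {
    int policy;   /* SCHED_OTHER or SCHED_MM */
    int counter;  /* remaining time slice */
};

/* Multimedia tasks get a fixed bonus so they win ties against normal
   tasks and get better response time. */
int mm_goodness(const struct mmtask *p)
{
    int weight = p->counter;
    if (p->policy == SCHED_MM)
        weight += 20;  /* arbitrary illustrative bonus */
    return weight;
}

/* Per-tick accounting: multimedia tasks are charged double, so under
   full contention they end up with roughly half their fair share. */
void charge_tick(struct mmtask *p)
{
    p->counter -= (p->policy == SCHED_MM) ? 2 : 1;
    if (p->counter < 0)
        p->counter = 0;
}
```

The self-limiting property Rik describes falls out directly: the class
only pays off for tasks using less than half their fair share of CPU,
so making it available to ordinary users would be safe against abuse.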

>
> cheers,
>
> Rik -- slowly getting used to dvorak kbd layout...
> +-------------------------------------------------------------------+
> | Linux memory management tour guide. H.H.vanRiel@phys.uu.nl |
> | Scouting Vries cubscout leader. http://www.phys.uu.nl/~riel/ |
> +-------------------------------------------------------------------+
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/