Re: Two questions regarding LINUX SMP "goals"

Robert G. Brown (rgb@phy.duke.edu)
Fri, 19 Mar 1999 13:17:39 -0500 (EST)


On Fri, 19 Mar 1999, Douglas W. St.Clair wrote:

> There are no panaceas and any implementation of an SMP Kernel is going to
> favor some types of applications over others. Is there a "specification" or
> "goal" written down somewhere that tells what types of applications
> LINUX/SMP is intended to favor?

I think that the answer is "whatever Linus wants":-) and is a moving
target. If you look over the list archives, you'll see that upon
occasion he has articulated SOME of those goals, but they are part of a
work in progress and obviously change with the changing terrain of
systems capabilities and individual vision. I think that is one of
Linux's strengths, actually -- it isn't working doggedly toward a
specific goal, it is evolving to optimize "system function" (a rapidly
changing and somewhat subjective criterion) against a wide and rapidly
varying terrain of underlying hardware architectures.

> is performance or stability the more preferred goal? Where stability means
> applications will not require recoding as the SMP Kernel matures.

I don't want to speak for Linus (and am interested to see if he speaks
for himself -- it is useful to see from time to time what his current
design thoughts are:-) but in the past he has indicated a number of
times that interactive performance has long been a key design
optimization goal (he's been willing to sacrifice being the best "number
crunching" OS to remain the best interactive response OS the few times
and places the two have needed to be traded off -- processor affinity, for
example). However, this isn't a fair answer. It's not like poor
performance on numerical code is something to be tolerated either -- it
is a question of how to best balance competing tradeoffs and not the
sort of thing anybody sane would give a religious answer to. It's a
judgement thing, not a dogma.

There are now and will probably remain two simultaneous generations of
the kernel -- the stable revision and the development revision. I hope
that it is obvious that neither of these can be "more" preferred. If
you prefer stability, DOS can probably still be purchased -- somewhere.
If you prefer performance, there are bleeding edge experimental designs
to play with that break all the time (or you can take any existing
design and do the experiments and breaking yourself:-). Linux provides
both, with the advanced kernel pulling along and informing the older one,
and with the clear idea that one day the new will supersede the old and
another new generation will begin.

What reason is there to believe that any kernel will EVER mature to the
point where it (or applications written to run under it) never need
recoding? No kernel ever has -- one a decade old is old and hoary and
needs to be put to sleep. We are all living and working on the leading
edge of what has to be the most volatile technical field of all time. I
can't think of a single aspect of any OS kernel that won't need to be
completely rewritten every five to seven years, although there may well
be little chunks here and there that are algorithmically sound and
survive the advent of the (always arriving) Next Generation.

What can one do as one goes from 8->16->32->64->??? bit CPUs and data
paths? As system clocks scale from 5 MHz up to 500 MHz and beyond? As
the ratio of memory speed to cache speed to CPU speed (and the relative
sizes of each!) keeps changing? As we go from single-CPU systems to SMP
systems with a single spinlock, to SMP systems with refined spinlocks, to
distributed collections of UP or SMP systems with distributed, refined
spinlocks and CC-NUMA or whatever you like? As more pipelines and logic
units appear on single CPUs? The whole IDEA of the CPU may radically change
in the next five years, and an operating system will need to be
flexible, not dogmatic, to survive.
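
Purely as an illustration of that "single spinlock" versus "refined
spinlocks" point -- a user-space sketch in C with POSIX threads, not
actual kernel code, and every name in it is invented for the example --
the difference in locking granularity looks roughly like this:

#include <pthread.h>
#include <stdio.h>

/* Coarse: one global lock serializes everything, the way a single
 * kernel spinlock once serialized all entry into the kernel on SMP. */
static pthread_spinlock_t big_lock;

/* Refined: each object carries its own lock, so CPUs working on
 * unrelated objects no longer contend with one another. */
struct counter {
        pthread_spinlock_t lock;
        long value;
};

static void inc_coarse(struct counter *c)
{
        pthread_spin_lock(&big_lock);   /* everyone waits here */
        c->value++;
        pthread_spin_unlock(&big_lock);
}

static void inc_fine(struct counter *c)
{
        pthread_spin_lock(&c->lock);    /* only users of this counter wait */
        c->value++;
        pthread_spin_unlock(&c->lock);
}

int main(void)
{
        struct counter a = { .value = 0 };

        pthread_spin_init(&big_lock, PTHREAD_PROCESS_PRIVATE);
        pthread_spin_init(&a.lock, PTHREAD_PROCESS_PRIVATE);

        inc_coarse(&a);
        inc_fine(&a);
        printf("%ld\n", a.value);       /* prints 2 */
        return 0;
}

(Compile with -pthread; the arithmetic is beside the point, the locking
granularity is the point.)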

There are two or three layers of isolation NOW between user applications
and the kernel -- POSIX system calls, for example -- so that one doesn't
need to know the details of the kernel to write software that will run
on it. The libraries that implement those calls are different on each
system (hardware or OS), are continuously changing, and will continue to
change forever. The decision to keep the primary interfaces stable WRT
application programs is less a kernel design issue than a standards
issue -- GNU broke a lot of things going from libc 5 to libc 6 (glibc),
but Linus had "nothing" to do with this. As POSIX and other standards
compliance becomes more universal, this should be less of a problem.
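
To make that isolation concrete -- a minimal user-space sketch, nothing
kernel-specific, assuming nothing beyond the standard headers -- the
program below knows only the POSIX write() interface. Which kernel
version services the call, and by what syscall mechanism, is invisible
to it; that is exactly the layer that lets applications survive kernel
churn without recoding:

#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *msg = "hello from user space\n";

        /* write() is specified by POSIX; the C library translates it
         * into whatever the running kernel actually wants to see. */
        if (write(STDOUT_FILENO, msg, strlen(msg)) < 0)
                return 1;
        return 0;
}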

One will always be left with change and the introduction of new
functionality, though, even in the "standard" libraries.

rgb

Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email:rgb@phy.duke.edu
