This scheme would seem to be even more hazardous for these machines. They
would probably suffer even more "random" failures. It's even possible that
a 16Mb process that forks is going to need its own 16Mb, and will just fail
at some random point in the future. I'm having a real hard time understanding
how such an unreliable thing can be "useful."
> > BSD does something similar to this (though not all that well) in that
> > all memory allocations have their swap space allocated at request time.
> > Any request for which swap space cannot be assigned is failed. This
> > is efficient speed-wise, but very inefficient in terms of resources,
> > as it does not allow a system with less swap space than RAM to use
> > all of its RAM.
>
> I'm not sure I understand either the logic or wisdom of doing
> that, but in any event, since stacks always grow dynamically, you could
> never make a Linux system guarantee that memory is available.
Every other UNIX I know of, and for that matter every non-UNIX system, does
this in some manner. The BSD scheme is/was inefficient as memory sizes have
grown, but it was designed when few machines in the world had as much as
100 Kbytes of memory. The semantics of stack overflows are fairly easy to
predict, and most UNIX systems have mechanisms in place to handle them in
a reasonable manner.
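The BSD policy quoted above can be illustrated with a small sketch (purely
hypothetical names and numbers, not actual kernel code): because every
allocation must reserve swap up front, total allocations are capped by swap
size, and a machine with more RAM than swap can never hand out all of its RAM.

```python
class StrictReserveVM:
    """Toy model of 'reserve swap at allocation time' (BSD-style).

    Every allocation must be fully backed by swap before it succeeds,
    so a request can fail at allocation time but never fault later.
    """

    def __init__(self, ram_kb, swap_kb):
        self.ram_kb = ram_kb          # physical memory (unused by the policy!)
        self.swap_kb = swap_kb        # backing store available for reservation
        self.reserved_kb = 0          # swap already promised to allocations

    def allocate(self, size_kb):
        # Fail immediately if swap cannot back the whole request.
        if self.reserved_kb + size_kb > self.swap_kb:
            return False              # ENOMEM at request time
        self.reserved_kb += size_kb
        return True

# 32 MB of RAM but only 16 MB of swap: the reservation limit is the swap size.
vm = StrictReserveVM(ram_kb=32 * 1024, swap_kb=16 * 1024)
assert vm.allocate(16 * 1024)         # fills the swap reservation
assert not vm.allocate(1)             # fails even though 16 MB of RAM sits idle
```

This is the resource inefficiency the quoted poster is pointing at: the policy
never looks at RAM at all, only at how much swap remains unreserved.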
> > It would seem to me to be fairly simple and inexpensive to simply keep
> > track of the current total commitment for each process, and a sum for
> > the system, and fail any allocation that pushes the system into an
> > overcommitted state. This is not foolproof of course, e.g. if swap space
> > is removed from the system, then you could end up overcommitted, but
> > it seems to me that we would want a system that is running out of virtual
> > memory to fail gracefully, by failing allocation requests, rather than
> > having it fail in some other fashion, say by getting seg faults in
> > processes that are accessing memory that has been allocated to them.
>
> Oh, I disagree -- on behalf of all the people who don't have 128Mb
> of RAM and 256Mb of swap. You don't realize how high the total
> (theoretical) commitment of a typical system is.
People who own Yugos shouldn't expect to win the Daytona 500 either. The
best you can hope for is that you don't get killed when the engine blows.