Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

David Schwartz (davids@wiznet.net)
Wed, 19 Feb 1997 15:31:11 -0500 (EST)


On Wed, 19 Feb 1997, Brett Hollon wrote:

> I can see a very serious problem with this. Just how can you tell
> the difference between a hardware failure, a buffer overrun, an
> ignored return value from malloc (or any of a number of other ways
> you can generate a seg fault through programming errors), and bumping
> into one of these over-committed memory areas?

You look at the kernel logs. Running out of virtual memory is an
identifiable hazard, just like the others you listed; the logs tell you
which one you hit.
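
To make the failure mode concrete, here's a quick sketch (illustrative
only; the 512Mb figure is just an arbitrary number larger than my RAM
plus swap):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = (size_t)512 * 1024 * 1024;  /* well beyond 32Mb RAM + 130Mb swap */
    char *p = malloc(len);

    if (p == NULL) {
        /* strict accounting would fail here, at the request */
        perror("malloc");
        return 1;
    }

    /* Under overcommit the allocation "succeeds", but the pages don't
     * exist yet.  Touching them forces the kernel to find real memory;
     * if it can't, the process dies with no error return in sight. */
    memset(p, 0, len);

    free(p);
    return 0;
}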

> It would seem prudent to at least track the amount of virtual memory
> that has been committed and not allow that figure to exceed the amount
> available (say the sum of the phys ram and swap space). In fact, I
> thought this is what was being done.

That's evil. The system at my desk is a very capable Linux
system: P200, 32Mb RAM, and 130Mb swap. Many are not so capable: 16Mb RAM,
32Mb swap. Without overcommitment, these systems wouldn't be nearly as
useful as they are with it. If a process consuming 16Mb of virtual memory
forks, you'd have to have 16 more megabytes available or fail the fork,
even though copy-on-write means the child will almost never touch most
of those pages. :(
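
Here's a sketch of that fork case (again illustrative; the 16Mb matches
the example above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)16 * 1024 * 1024;
    char *buf = malloc(len);
    pid_t pid;

    if (buf == NULL)
        return 1;
    memset(buf, 1, len);        /* make the 16Mb real */

    pid = fork();               /* overcommit: child shares parent's pages */
    if (pid < 0) {
        perror("fork");         /* strict accounting could fail here */
        return 1;
    }
    if (pid == 0) {
        /* child: reading costs nothing extra; only a write copies a page */
        printf("child sees %d\n", buf[0]);
        _exit(0);
    }
    wait(NULL);
    free(buf);
    return 0;
}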

> BSD does something similar to this (though not all that well) in that
> all memory allocations have their swap space allocated at request time.
> Any request for which swap space cannot be assigned is failed. This
> is efficient speed-wise, but very inefficient in terms of resources,
> as it does not allow a system with less swap space than RAM to use
> all of its RAM.

I'm not sure I understand either the logic or the wisdom of doing
that, but in any event, since stacks always grow dynamically, you could
never make a Linux system guarantee in advance that memory will be
available: stack pages are committed implicitly, by being touched, not
through any allocation call the kernel could account for up front.
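
A tiny sketch of that point (the 4096-byte frame stands in for an
assumed page size):

#include <stdio.h>

/* Each call pushes roughly a page of stack.  The kernel commits those
 * pages only when they're first touched; no malloc-style request ever
 * crosses the syscall boundary to be accounted. */
static long depth(long n)
{
    char frame[4096];

    frame[0] = 1;               /* touch the frame so it isn't optimized away */
    if (n == 0)
        return frame[0];
    return depth(n - 1) + frame[0];
}

int main(void)
{
    printf("%ld\n", depth(1000));   /* ~4Mb of stack, committed on demand */
    return 0;
}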

> It would seem to me to be fairly simple and inexpensive to simply keep
> track of the current total commitment for each process, and a sum for
> the system, and fail any allocation that pushes the system into an
> overcommitted state. This is not foolproof, of course; e.g., if swap
> space is removed from the system, then you could end up overcommitted, but
> it seems to me that we would want a system that is running out of virtual
> memory to fail gracefully, by failing allocation requests, rather than
> having it fail in some other fashion, say by getting seg faults in
> processes that are accessing memory that has been allocated to them.

Oh, I disagree -- on behalf of all the people who don't have 128Mb
of RAM and 256Mb of swap. You don't realize how high the total
(theoretical) commitment of a typical system is: shared libraries and
copy-on-write pages are counted once per process, so the sum of every
process's address space far exceeds what is ever actually in use.
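
If you want to see for yourself, here's a rough sketch that adds up the
first field of /proc/<pid>/statm (total size, in pages) for every
process. It counts shared pages once per process, which is exactly what
strict accounting would have to reserve:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *ent;
    unsigned long total_pages = 0;
    unsigned long page_kb = (unsigned long)sysconf(_SC_PAGESIZE) / 1024;

    if (proc == NULL) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(proc)) != NULL) {
        char path[64];
        FILE *f;
        unsigned long size;

        /* only the numeric entries in /proc are processes */
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;
        snprintf(path, sizeof path, "/proc/%s/statm", ent->d_name);
        f = fopen(path, "r");
        if (f == NULL)
            continue;           /* the process may have exited */
        if (fscanf(f, "%lu", &size) == 1)
            total_pages += size;        /* field 1: total size, in pages */
        fclose(f);
    }
    closedir(proc);
    printf("theoretical commitment: %lu Mb\n", total_pages * page_kb / 1024);
    return 0;
}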

DS