Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

Evan Jeffrey (ejeffrey@eliot82.wustl.edu)
Thu, 20 Feb 1997 22:52:14 -0600


>Thanks to all who have lobbed missiles at me, especially those who believe
>that they know all that can be known. I simply cannot respond to them all.
>If this method of allocating memory is indeed as widespread as some have
>claimed, it hasn't been going on as long as some of you "experts" claim.
>It is clear that some people have different design "goals" than others. This
>does not mean that yours is the right answer for everyone else.

I think that it is reasonable to expect programs that need to be reliable to
trap SIGSEGV and exit (somewhat) gracefully. If they do, how different is
that from malloc returning NULL? If a program can't get the memory it needs,
chances are that it will exit either way. If the memory really matters,
dirty the pages up front so they are committed immediately.

As for fork: while it is a major source of overcommitted memory, it is not
the only one. With Linux's copy-on-write I can malloc a big chunk of memory
and touch it only as needed, and I can mmap a 400 MB file and use real
memory only for the pages I actually write to.
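
A minimal sketch of the mmap case, assuming an ordinary file named on the
command line: with MAP_PRIVATE, pages that are never written stay backed by
the file, and only the pages that are written get a private copy-on-write
page of real memory.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <big-file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* MAP_PRIVATE is copy-on-write: pages we never touch are backed
           by the file, and only pages we write to get a private copy. */
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Write to one page per megabyte; only these pages need real
           memory, no matter how big the mapping is. */
        for (off_t off = 0; off < st.st_size; off += 1024 * 1024)
            p[off] = 1;

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }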

>It may be the explanation why in the last few years I have seen so many
>programs die for no cause in the middle of the day. (On non-Linux systems
>so far.) In an operational environment, such havoc is not appreciated.

If you run out of memory, you are going to have havoc one way or another.
Unless those systems were very heavily loaded, overcommit is an unlikely
explanation for those deaths; and if they were loaded that heavily,
something was going to go wrong anyway.

Would a /proc tunable that switches the allocator to a strict check that
enough space is actually free satisfy you? Someone suggested that.
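
For illustration only: the /proc path and the values below are entirely
hypothetical (no such file exists in the kernel under discussion); this is
just what flipping such a knob from a program might look like.

    #include <stdio.h>

    /* Hypothetical tunable; the kernel discussed here has no such file. */
    #define OVERCOMMIT_KNOB "/proc/sys/vm/overcommit_policy"

    /* 0 = optimistic allocation (current behaviour),
       1 = strict "is there enough free space" check, as suggested. */
    static int set_overcommit_policy(int mode)
    {
        FILE *f = fopen(OVERCOMMIT_KNOB, "w");
        if (f == NULL)
            return -1;
        fprintf(f, "%d\n", mode);
        return fclose(f);
    }

    int main(void)
    {
        if (set_overcommit_policy(1) != 0)
            perror("set_overcommit_policy");
        return 0;
    }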

===
Evan Jeffrey
ejeffrey@eliot82.wustl.edu

Just once, I wish we would encounter an alien menace that wasn't
immune to bullets.
-- The Brigadier, "Dr. Who"