Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

Andi Kleen (andi@mlm.extern.lrz-muenchen.de)
20 Feb 1997 07:29:23 +0100


John Wyszynski <wyszynsk@clark.net> writes:

> SUICIDE! No wonder Linux gets such a rap for being unreliable. (If this is
> truly how things work. Someone please tell me this isn't so.)
> As a programmer I expect that when I have successfully requested memory to
> be allocated, it has really happened. It now appears that, on top of
> everything else, by writing to memory at the wrong time I could run out of
> virtual memory.

Since 2.0.x, Linux catches memory overcommit. See this code in mm/mmap.c:

/*
 * Check that a process has enough memory to allocate a
 * new virtual mapping.
 */
static inline int vm_enough_memory(long pages)
{
        /*
         * stupid algorithm to decide if we have enough memory: while
         * simple, it hopefully works in most obvious cases.. Easy to
         * fool it, but this should catch most mistakes.
         */
        long freepages;
        freepages = buffermem >> PAGE_SHIFT;
        freepages += page_cache_size;
        freepages >>= 1;
        freepages += nr_free_pages;
        freepages += nr_swap_pages;
        freepages -= MAP_NR(high_memory) >> 4;
        return freepages > pages;
}
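
In words: the estimate counts half of the buffer cache and page cache
(those pages can be reclaimed under pressure), plus all free RAM and free
swap, minus 1/16 of total memory as a safety margin; a new mapping is
allowed only if the estimate exceeds its size in pages. Here is a rough
user-space rendering of the same arithmetic with made-up page counts (in
the kernel these come from buffermem, nr_free_pages and friends):

#include <stdio.h>

/* Made-up page counts for illustration only. */
static long buffer_pages = 2048;        /* buffer cache, in pages   */
static long cache_pages  = 4096;        /* page cache               */
static long free_pages   = 1024;        /* free physical pages      */
static long swap_pages   = 8192;        /* free swap pages          */
static long total_pages  = 16384;       /* ~ MAP_NR(high_memory)    */

/* Same arithmetic as vm_enough_memory() above. */
static int enough_memory(long pages)
{
        long freepages;
        freepages  = buffer_pages + cache_pages;
        freepages >>= 1;                /* caches count only half   */
        freepages += free_pages + swap_pages;
        freepages -= total_pages >> 4;  /* keep 1/16 as a reserve   */
        return freepages > pages;
}

int main(void)
{
        printf("map of  1000 pages: %s\n",
               enough_memory(1000)  ? "allowed" : "refused");
        printf("map of 20000 pages: %s\n",
               enough_memory(20000) ? "allowed" : "refused");
        return 0;
}

With these numbers the estimate works out to 11264 pages, so the first
mapping is allowed and the second is refused.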

Actually, some people don't like this; for example, it is very common
practice in scientific FORTRAN programs to declare very big arrays and use
only a small part of them (because FORTRAN has no equivalent of malloc()).
You can always limit the size of your processes with ulimit -v.
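
For illustration, here is that big-array pattern in C together with an
address-space cap. On Linux, ulimit -v adjusts RLIMIT_AS, so with the
limit in place the oversized allocation is refused up front rather than
blowing up later. (A minimal sketch; the 64 MB and 512 MB figures are
arbitrary.)

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
        /* What "ulimit -v 65536" does: cap the address space
         * at 64 MB via RLIMIT_AS. */
        struct rlimit rl = { 64 << 20, 64 << 20 };
        setrlimit(RLIMIT_AS, &rl);

        /* The FORTRAN-style pattern: ask for a huge array and
         * touch only a small part of it. */
        double *a = malloc(512L << 20);         /* 512 MB */
        if (a == NULL) {
                perror("malloc");       /* fails here, cleanly */
                return 1;
        }
        a[0] = 1.0;                     /* use a tiny fraction */
        free(a);
        return 0;
}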

-Andi