Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

Richard B. Johnson (root@analogic.com)
Wed, 19 Feb 1997 20:58:07 -0500 (EST)


On Wed, 19 Feb 1997, Illuminati Primus wrote:

> From what I understand, this is so that a gigantic process that fork()s
> and then exec()s won't fail even if we don't have enough space for another
> copy of that gigantic process (when we only really need enough for the
> smaller process)... I was wondering, why not make a forkexec() function
> that never wastes the time actually forking the parent process, but just
> allocates enough for the child? Is there a better way to do it? How much
> would this break?
>
Linux doesn't copy the entire process space on a fork() until something
gets written to the new copy. In addition, it only copies a page at
a time, so if you fork and then exec, overwriting the child, the memory
used is only the size of the program that was exec'd.

It is possible to write bad code that will force the entire parent's
virtual address space to be cloned, but you have to work at it. When
you exec, it shrinks again to the exec'd program's requirements.

Cheers,
Dick Johnson
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Richard B. Johnson
Project Engineer
Analogic Corporation
Voice : (508) 977-3000 ext. 3754
Fax : (508) 532-6097
Modem : (508) 977-6870
Ftp : ftp@boneserver.analogic.com
Email : rjohnson@analogic.com, johnson@analogic.com
Penguin : Linux version 2.1.26 on an i586 machine (66.15 BogoMips).
Warning : It's hard to remain at the trailing edge of technology.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-