Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

Kevin Littlejohn (darius@darius.wantree.com.au)
Fri, 21 Feb 1997 05:28:09 +0800


>
> While the whole chain has been somewhat interesting, I still believe that
> for a production environment, this is SUICIDE.
>
> (1) For the situation where large processes fork and immediately exec,
> many other *IX systems have a "vfork" system call to handle the situation.
>

NAME
fork, vfork - create a child process

SYNOPSIS
#include <unistd.h>

pid_t fork(void);
pid_t vfork(void);

DESCRIPTION
fork creates a child process that differs from the parent
process only in its PID and PPID, and in the fact that
resource utilizations are set to 0. File locks and
pending signals are not inherited.

Under Linux, fork is implemented using copy-on-write
pages, so the only penalty incurred by fork is the time
and memory required to duplicate the parent's page tables,
and to create a unique task structure for the child.

Under BUGS, the man page mentions that vfork is an alias for fork, but
since fork on Linux already does what vfork does on other systems, I
believe you have your vfork :)
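
To illustrate the point, the usual fork-then-exec idiom costs next to
nothing under copy-on-write. A minimal sketch (/bin/ls just stands in
for whatever you'd really exec):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();         /* cheap on Linux: COW, no full copy */

    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* child: replace the shared, copy-on-write image right away */
        execl("/bin/ls", "ls", (char *)NULL);
        perror("execl");        /* only reached if the exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0);      /* parent: wait for the child */
    return 0;
}

Since the child execs immediately, almost none of the parent's pages
ever get copied - which is all vfork was invented to avoid.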

> (2) The claims that, statistically, this is okay 99.9% of the time are
> hogwash. The airline industry may sell tickets that way, but they don't
> want their computers doing it. I've been in line when it has happened to
> them and it's not a pretty sight.

Fact remains - the process is going to segfault somewhere if it needs more
memory than it can get. Given that, from the info in this thread, most
*nix systems (especially modern versions) use this sort of
'overcommitting', I can't see that it's such a bad practice - especially
when it allows you to run programs (netscape springs to mind) that you
otherwise would not be able to run at all. As has been pointed out, if a
program would suffer absolutely catastrophic results from dying in the
middle rather than at the start, it should request that the memory
actually be there (ie. walk the memory to make sure it's allocated - see
the sketch below).
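
Something along these lines would do - a minimal sketch, assuming the
POSIX sysconf(_SC_PAGESIZE) interface; committed_malloc is just a name I
made up for the wrapper:

#include <stdlib.h>
#include <unistd.h>

/* Allocate `size` bytes and touch one byte per page, so any shortage
 * shows up here, at startup, rather than as a segfault mid-run. */
static void *committed_malloc(size_t size)
{
    long page = sysconf(_SC_PAGESIZE);
    char *p = malloc(size);
    size_t i;

    if (p == NULL || page <= 0)
        return p;
    for (i = 0; i < size; i += (size_t)page)
        p[i] = 0;               /* a write forces a real page frame */
    return p;
}

Of course, if the machine genuinely can't back the allocation, the
touching loop itself will die - but it dies up front, which is the
whole point.
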
I'd be interested in what, if anything, POSIX has to say about memory
allocation - I'd be willing to bet nothing or very little. In that case,
I'll contend that relying on memory being _there_ just because you asked
that it be available is SUICIDE (hey, I like these caps :) if you intend
to write portable programs.

>
> (3) This is sure a time waster for porting and developing programs to run
> on Linux. "Hey, you know that program that worked fine on System A could
> blow up on Linux." This is especially true for those people who want
> to move binaries from some other vendor's OS.

Hrm. Except, under a scheme that definitely allocates that memory, the
program is going to blow up anyway. Think about it - the program blows up
because the memory is not there. So, if the memory is not there, is any
allocation routine going to let the program work? Your scheme: "Hey, you
know that program that worked fine on System A could not run on Linux."

Instead, we give the program a chance to allocate all the memory it might
want (a common, still-taught practice) and then leave that memory
untouched and unused, without impacting the execution of anything else on
the system. That seems to me a very friendly way of doing things (see the
toy illustration below).
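
A toy illustration of that, assuming Linux's default overcommit
behaviour (the 256Mb figure is arbitrary - pick something bigger than
your physical memory):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t big = 256UL * 1024 * 1024;   /* more than the box really has */
    char *p = malloc(big);

    if (p == NULL) {
        fprintf(stderr, "malloc refused %lu bytes\n", (unsigned long)big);
        return 1;
    }
    p[0] = 'x';   /* only this one page ever costs a real frame */
    printf("reserved %lu bytes, touched one page\n", (unsigned long)big);
    free(p);
    return 0;
}

Under overcommit the malloc succeeds and the untouched pages cost
nothing; under a strict-accounting scheme the same program simply
refuses to start on a small machine.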

Still, if you'd rather dedicate memory to a program based on the
programmer's original estimate of what it might need, you're free to
build/commission a new memory allocation system. Don't expect me to use
it, though - none of our three commercial servers would survive that sort
of scheme, and I doubt I could justify hunting down 128Mb sticks of memory
for them all. And I suppose that is the acid test - in this production
system, and in many, many other production systems, this scheme works
without a hitch :)

KevinL

---
Kevin Littlejohn                                     darius@wantree.com.au
Wantree Development                                       tel:    481 4433
Perth, Western Australia 6000                             fax:    481 0393
"Hours of frustration punctuated by moments of sheer terror" - a.s.r.