Am I missing the point? I was under the impression that there is a
well-defined behavior when a Un*x reaches memory resource limits.
Additional requests for resources are simply denied: currently
executing processes don't get axed until they segfault, get signalled
or exit().
How does memory consumption in user space grow? Only a handful of ways
I can think of:
- malloc(3), realloc, etc.
- fork(2)
- execve(2)
- ...
Each one will fail if there is insufficient virtual memory, so each
application is responsible for detecting NULL from malloc(), -1 from
fork(), and so on, giving it a chance to do other work or exit().
Is this First-Come, First-Served mechanism failing in the case of the
original poster? I suspect that once a few apps exit() from this kind
of starvation, there would still be enough room to run bash and do
sysadmin tasks.
BTW, when your server really must run under a lighter load to work
properly, there are some obvious adjustments we could make, such as
adding more RAM. Having an emergency swap file/partition would
be nice. On my production (2.0.xx) servers, I have hardware watchdogs
that reset the machine if a certain userland process stops responding
for a few minutes.
_____________________________ Stephen M. Benoit _______________________________
~ ~ | benoits@servicepro.com | B.Eng (Computer), M.Eng (Electrical)
('>') | Tel: +1 514 255-3550 | Service Providers of America INC
_ | FAX: +1 514 256-1356 | Web page: http://www.servicepro.com/
_______|________________________|______________________________________________
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html