Re: Memory overcommitting

Floody (flood@evcom.net)
Thu, 20 Feb 1997 00:10:50 -0500 (EST)


-----BEGIN PGP SIGNED MESSAGE-----

On Wed, 19 Feb 1997, Sean Farley wrote:

> Let's say that you wrote an application that must have control over any
> demise of the application, i.e. database. Since we cannot make the
> assumption that malloc will protect us from ourselves or another ruthless
> program, what course should someone take to handle a more graceful
> shutdown if the situation comes about? How does Linux kill off processes
> when the memory is exhausted?
>
> Sean
> -------
> scf@tctc.com
>

Sean,

Perhaps the better question here is: Is your hypothetical database system
practical in real-world terms? While you can't be *absolutely* certain
that the allocations you have requested are valid, your application
should already be protecting itself against other, non-memory-related
problems (bugs, etc). As such, a reliable database will use some sort of
transaction/commit paradigm, whereby transactions that are dependent upon
each other are logged until they can be successfully written at a
"commit" point. This log can be used to "roll back" or "roll forward" in
the event of a fatal error, be it a bug in your code, a power failure, or
a segfault due to a memory allocation overcommit. I know what you are
thinking: it may very well be necessary to have small allocations be
*absolutely* available during application-atomic sections of the commit
stage. This can be accomplished by simply making sure that the memory you
will use for that operation has already been written to.
Once a 4k page has been written to, it belongs to you, and you will never
receive a segfault for attempting any sort of operation on it. It may get
swapped out, but it won't get handed out to anyone else.

In summary, while many developers may feel slightly insecure about the
lack of absolutely committed memory allocations, in practice it _really_
isn't a problem at all. I've run Linux systems as bone dry as I could
(VM-wise), due to runaway processes and a lack of any great deal of
physical RAM and swap space. I received NO segfaults, just an incredible
thrashing effect: in order to actually reach the point where a new page
request couldn't be satisfied, the amount of swapping occurring made the
system completely unusable (it took me 10 minutes to manage to type
"shutdown -r now").

+-------------------------------------------------------------------+
+ -- Finger: flood@evcom.net for my PGP public key -- +
+-------------------------------------------------------------------+

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBMwvc3BsjWkWelde9AQHZjwP+LXeUUdVIXsp75oCzr7LgYSyRvXs/5WEN
oiNLzTMz76WVQbhge1jDZAuuYj3VEItHV/esPtDr7DeQ9kjJsVQppjoTt/vLlWVf
cEYSkaFOjk7uc8dicpvpSeKCNBG7gwgrMJS4xHFoteI3Yk1hDbNVGM6I1UqMdnzr
+pwTobu8AQk=
=V0X6
-----END PGP SIGNATURE-----