ulimit

Greg Alexander (galexand@sietch.bloomington.in.us)
Sun, 15 Dec 1996 00:23:52 -0500 (EST)


I have one significant problem with ulimit. As is, it saves me from some
nasty attacks: I begged people on IRC's #hack/#2600 to hack my box, and one
of them got frustrated and ran your typical

    main() { while (1) { malloc(100000); fork(); } }

After that I couldn't start any new programs -- not enough memory left to
load the shared libs. Luckily I already had a root shell open, and kill is
apparently statically linked, or at least doesn't need any libs that
weren't already loaded (killall didn't work). So I used Ctrl-ScrollLock
to get a process list, kill -STOPped the offenders, then kill -9'd enough
of them that killall could finish the job.
I managed to recover from this because I had lshell and a process
limit of 50. However, my memory limit of 10M did nothing to stop it.
There may be a bug in my libc.so.5.4.13 that allows this; I'm not
certain. It hardly matters, though: the attacker went to the trouble of
sending over a custom libc.so.5 which, I assume, didn't check the limits
at all.
Luckily, the process limit is enforced in the kernel. I believe the
memory limit should be enforced in the kernel as well. Also, something
that refuses mallocs for all non-root processes whenever less than 1M is
free would be really nifty (though this should probably be
user-configurable). The kernel already seems to reserve 64k, so this may
already be a configurable option.
Anyway, I'm going to dig around in the kernel source a bit and
hopefully have some patches out tomorrow. I'm just sending this message
to see if anyone has opinions on the subject, or maybe some info on
previous attempts.

Greg Alexander
http://www.cia-g.com/~sietch/