Re: NT vulnerable to attack on CPU

bofh@snoopy.virtual.net.au
Tue, 24 Dec 96 10:53:33 +1000


>> You can conveniently remove users that do this
>> sort of thing from the system.

>Nothing made it into the syslog when I tried this exploit. Suppose I'd done
>it on a system where there were 50 users logged in, and I'd put the little
>(<4kB) binary in /tmp, /tmp were on a ramdisk, and I didn't leave fork.c lying
>around. Alternatively, suppose that I innocently wrote some really bad code
>and ran it from my unprivileged account. I'd feel more comfortable if it
>weren't possible to crash the whole system that way.

I have written and run test programs like that without any problems. Your oops
indicates a bug in process handling in the kernel you are running; other
versions of the Linux kernel do not have it. At one time I installed a patch
in my kernel to limit the number of processes a non-root user can run. When I
got ulimit working properly I discarded the patch, but it has occurred to me
that perhaps I should start
using it again. The reason is that there are a lot of system processes that
create processes on behalf of users (a POP3 server is one example). What I am
seriously considering is limiting the number of processes for any user with
UID > 100. It seems that on most if not all Linux distributions the first 100
UIDs are reserved for system accounts (web servers etc.) and the rest are for
users. I could then set my system up so that the following command makes fork
fail for any user who already has 20 or more processes:

# echo 20 > /proc/sys/procs-per-user

What do you think?

>I'm using tcsh. In /etc/login.defs I have:

>ULIMIT 2097152

>and the system has 64 MB RAM, no swap. Ideally, the kernel wouldn't need the
>insulation of the "right" shell, properly configured ulimits, and so on.

It doesn't have to be. Use lshell and the limits will apply no matter which
shell the user runs.

Russell Coker