Re: networking / web perf probs

Alan Cox
Sun, 14 Dec 1997 12:18:39 +0000 (GMT)

> The problem is that when a web server has more than a certain number
> of packets in the input queue, new packets will just get dropped.

Oh that's only the toy problem for beginners; the real ones are far more subtle.

> The simple fix is to crank up the input queue. SGI cranked theirs
> to 512 packets per queue (and there is a queue per CPU). DEC cranked
> theirs as well (anyone have OSF/1 header files out there to figure out
> how high it is?).

That is fatally bad unless you fix about three other things first. In fact, naively
cranking the queue up makes your machine extremely vulnerable to sophisticated
SYN bomb variants. The classic attack that takes out SunOS 4.x machines which have
simply had the queue upped is a SYN bomb passing SYN + 63K of data
as one IP datagram for each faked SYN. It fills the mbufs on the SunOS box
in seconds, and the entire networking stack on the machine is history shortly afterwards.

If you do deep queues and only store limited connection state, then that is
a good starting point. You then hit the TIME_WAIT and port problems. The
TIME_WAIT one can be handled with separate queues. Certain vendors use a
"sod it and pray" approach, which is lamentable as it can cause corruption
in future connections.

The port one is the biggy. There are only 2^16 TCP ports per host for connections
to the same end point.

> listen(sock, >0)
> should be changed (in the kernel) to be something like
> listen(sock, sizeof(input queue length))

That sort of breaks POSIX 1003.1g draft 6.4 [sort of, because nobody follows
the queue length accurately anyway]. It also breaks some programs that
use the listen queue length for flow regulation. No big deal. See below..

> There are a lot of leftover programs that think a back log of 5 is
> reasonable. Those programs are naive.

and in the happy Linux world they can be recompiled. Having every app request
a huge backlog is bad. Also, with SYN cookies the backlog is a bit meaningless.