Re: FTP benchmark proposal

David Luyer (luyer@ucs.uwa.edu.au)
Mon, 28 Jun 1999 16:42:59 +0800


> Larry McVoy wrote:
> > So I wouldn't worry too much about making the test realistic. If you
> > can set up a work load that has 6000 sockets going at the server in
> > parallel, I suspect that it will stress the server just fine. It's when
> > you try to do the load through a few sockets that all the timing enters
> > into the equation. Yeah, I'm sure the tests will have to be played
> > with a bit, but the first step is to just do it and see what we get.
> > I'll call VA tomorrow and see if they are interested. Red Hat is also
> > setting up such a lab on the East Coast.
>
> of course, with 6000 connections you can't just use one ftp daemon
> per connection.

I thought the 4192 process limit was gone with the latest kernels?

Granted, you'd probably want an ftp daemon which is quite efficient (e.g. one
that handles 'ls' internally rather than forking /bin/ls), and ideally a
front-end cache which serves multiple connections from a single process, but
doing it with 6000 processes would be an interesting test of how Linux handles
that kind of load. (And hopefully an ftp daemon which doesn't touch too much
RAM per process, too...)
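
As a rough illustration of the "one process serving many connections" idea
(this is not taken from any real ftpd; the port number, the 6000-socket cap,
and the echo behaviour are arbitrary assumptions), a single poll() loop can
watch thousands of client sockets, so each extra connection costs one pollfd
entry and a file descriptor rather than a whole process:

/* Minimal sketch: one process multiplexing many client sockets with
 * poll() instead of forking a process per connection.  Port 2121 and
 * MAX_CLIENTS are made-up values for illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <poll.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_CLIENTS 6000
#define PORT        2121

int main(void)
{
    static struct pollfd pfd[MAX_CLIENTS + 1];
    int nfds = 1, i;

    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);

    if (lsock < 0 || bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lsock, 128) < 0) {
        perror("listen socket");
        exit(1);
    }

    pfd[0].fd = lsock;          /* slot 0 watches the listening socket */
    pfd[0].events = POLLIN;

    for (;;) {
        if (poll(pfd, nfds, -1) < 0) {
            perror("poll");
            exit(1);
        }

        /* New connection: add one more pollfd entry, not one more
         * process -- the per-client cost here is a few bytes. */
        if ((pfd[0].revents & POLLIN) && nfds < MAX_CLIENTS + 1) {
            int c = accept(lsock, NULL, NULL);
            if (c >= 0) {
                pfd[nfds].fd = c;
                pfd[nfds].events = POLLIN;
                pfd[nfds].revents = 0;   /* slot may be reused; clear stale bits */
                nfds++;
            }
        }

        /* Service every client with pending data; here we just echo it. */
        for (i = 1; i < nfds; i++) {
            if (pfd[i].revents & (POLLIN | POLLHUP | POLLERR)) {
                char buf[4096];
                ssize_t n = read(pfd[i].fd, buf, sizeof(buf));
                if (n <= 0) {
                    close(pfd[i].fd);
                    pfd[i] = pfd[--nfds];   /* reclaim the slot */
                    i--;
                } else {
                    write(pfd[i].fd, buf, n);
                }
            }
        }
    }
}

Of course you'd have to raise the per-process file descriptor limit to get
anywhere near 6000 open sockets in one process, and slow disk I/O for one
client can stall the others, so a real front-end would pair this with
non-blocking I/O or a small pool of such processes rather than relying on a
single loop.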

David.
