Re: FTP benchmark proposal

david parsons (o.r.c@p.e.l.l.p.o.r.t.l.a.n.d.o.r.u.s)
28 Jun 1999 12:55:56 -0700


In article <linux.kernel.199906272035.NAA00807@lm.bitmover.com>,
Larry McVoy <lm@bitmover.com> wrote:
>Folks, I was reading the press release at
>
> http://www.cdrom.com/press/wcarchive_milestone.phtml
>
>did the math, and came up with the rather impressive number of 16.8MB/sec
>sustained over a 24-hour period. That's really bloody amazing. I have
>to hand it to the FreeBSD guys, that's an impressive number. I'm not
>positive that Linux can do that but I sure as heck want to find out.
>
>My understanding is that they limit the load to 6000 users, so you can
>work forward to see what the load per user is. I also understand that
>this is all going through a single Gbit ethernet interface.
>
>The proposal is this:
>
>    a) get VA or Penguin to sponsor the hardware and the test lab;
>       I'll talk to them and try to make that happen.
>    b) create a test lab with enough clients to generate the load,
>       with a setup like so:
>
>        [ server ]
>            |
>          gbit
>            |
>        [ router ]
>        /  /  \  \
>       /  /    \  \
>      /  /      \  \
>     c  c  ...   c  c
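
Doing the same math: 16.8MB/sec for 86400 seconds is about 1.45TB
moved in the day, and spread across the 6000-user cap it works out
to only about 2.8KB/sec (call it 23kbit/sec) per client; the
impressive part is the aggregate, not any single stream.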

The expensive part of this is the gigabit router.

I did a smaller version of this when I was working with McAfee's
WebShield (where I could sustain close to wire speed for as long as
it took to find holes in the WebShield TCP stack):

          [server(s)]
               |
            (100bt)
               |
 [poor helpless 2.0.28+webshield box]
               |
            (100bt)
               |
           [switch]
           ||||||||
          5 machines
        going to town.

The clients each ran 20 copies of a little perl script that
simply ftp'ed parts of the kernel across and dropped them into
/dev/null.
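
Something like this, if anyone wants to reproduce it (a minimal
sketch using Net::FTP; the host name and file path are stand-ins):

    #!/usr/bin/perl
    # One client worker: log in anonymously, fetch the same file
    # over and over, and throw the bytes straight away.
    use strict;
    use Net::FTP;

    my $host = 'server.test.lab';           # stand-in server name
    my $file = '/pub/linux-2.0.28.tar.gz';  # stand-in file to fetch

    while (1) {
        my $ftp = Net::FTP->new($host) or next;   # retry on failure
        unless ($ftp->login('anonymous', 'load@')) { $ftp->quit; next; }
        $ftp->binary;                   # TYPE I, no newline mangling
        $ftp->get($file, '/dev/null');  # download into the bit bucket
        $ftp->quit;
    }

Run 20 copies per client machine with something like

    n=0; while [ $n -lt 20 ]; do ./ftpload.pl & n=$((n+1)); done

(ftpload.pl being whatever you call the script above).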

(Modulo the gigabit connection, I could do that now, but I'd probably
burn down my house with the electrical demand.)

              ____
david parsons \bi/ Old house. Cloth insulation.
               \/
