But in an important sense, that *is* the proper behavior of a bridge. I
would certainly want a few second opinions before I started trying to get
the bridge code to look into the headers and such. But still, you're
missing the point of our project. NADS shouldn't be tied to the bridging
code at all. If you have high peer-to-peer traffic in addition to the
client-server traffic, then you really ought to have an external bridge or
switch or whatever (as is diagrammed on our web site). NADS should work
independently of the bridging code, not as an extension or piece of it.
The hope is only that NADS will cooperate with the bridging code, so the
server can act as both server and bridge if that happens to be the way you
want to run it.
> As far as I can see, the fast routing code, as well as hypothetical
> fast bridging built as an extension of it, can only speed things up by
> overlapping the computation of the destination path with the reception
> of the rest of the packet. In the best case this merely "swallows" the
> time taken by the routing decision code, which is good, but receiving
> ~1500 bytes over a 10 megabit network still takes a bit more than 1 ms,
> and that is the delay a Linux "switch" (or router, for that matter)
> would add to every single packet.
I thought that it bypassed the CPU for sending most of the payload by
doing card->card transfers. Do those work in a store-and-forward sense?
Mark
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/