Re: NADS for Linux

Janos Farkas (Janos.Farkas-nouce/priv-#KwaKDcMR5I1L2/XU1fd68/UF/ba@lk9qw.mail.eon.ml.org)
Sat, 24 Oct 1998 19:19:46 +0200


[Not strictly NADS related, just a bit of clarification of the notes
about the bridging subsystem... I hope you can use it to give NADS the
*right* pieces, rather than have it complement the half-ready bridge code.]

On 1998-10-22 at 10:01:48, Mark Spencer wrote: [I wrote]
> > 2. ultimately, to use the linux server itself as a switch (via the
> Of course, this is already done, but be careful! Trying to use a
> linux box as a switch *and* a host causes some unique problems --
> explicitly that the linux host only sees the machines on one side of
> the network. Even the Bridging FAQ explicitly discourages you
> from doing so.

Sure, but it does not need to be that way. It is that way simply because
the current bridging code is not a "first class" citizen, and that's
really just an implementation detail. The current bridge does its work
"behind the back" of the networking code, so a packet is handled either
by the bridge or via the normal interfaces, which causes the confusing
artifact mentioned above. In the sanest implementation, the bridge would
be a virtual interface to which you attach physical interfaces, and you
would assign IP addresses to the bridge interface, not to the physical
ones. Then there would be no more unintuitive problems about where the
nodes are relative to your interfaces.

It's not even a difficult task to implement this, just a bit of code
shuffling, but I really have too little time these months.
Nevertheless, even if no one beats me to it, you can expect it to be
done at some point -- within a year at the very worst, but it could
happen next week if everything goes well.

> > In a real switch, even the send is being done concurrently while the
> > rest of the packet is still coming in, and that sounds a nightmare to
> Well, it would seem to me that you could use the "fast routing" code
> to implement this right? Surely it's easier to bridge than to route
> between two interfaces! I don't know that it would be concurrently
> sending, but it could be pretty dang close to that kind of
> performance!

As far as I can see, the fast routing code, as well as a hypothetical
fast bridging extension of it, can only speed things up by overlapping
the computation of the destination path with receiving the rest of the
packet; in the best case, this merely "swallows" the time of the routing
decision code. That is good, but receiving ~1500 bytes over a 10 megabit
network still takes a bit more than 1 ms, and that is the delay a Linux
"switch" (or router, for that matter) would add to every single packet.

In a perfectly designed environment, the decision time would be very
small (with hardware help, and by making heavy use of caching, similarly
to the current Linux code; the simplest cases might really be
implemented in hardware), and transmission could start just a little
after the headers were received -- at most a few percent of the above
millisecond. But given the level of timing sophistication and intimacy
with specific ethernet hardware this requires, I'm not sure it's a
feasible goal for Linux to achieve. :)

-- 
Janos - Don't worry, my address is real.  I'm just bored of spam.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/