Re: Lets get this right (WAS RE:MOSIX and kernel mods)

Jeff Millar (jeff@wa1hco.mv.com)
Sun, 07 Mar 1999 11:26:09 -0500


At 04:55 PM 3/7/99 +1100, Richard Gooch wrote:
>Michael Loftis writes:
>> > Within the next year, we'll have networked computer chips that
>> > use 1-4 Gbit per second serial links between them. Imagine
>> > a SIMM/DIMM kind of thing holding 8 CPU chips each with 64 MB on die
>> > ...plug in as many as you like on your motherboard. The interconnect
>> > protocol uses DSM so it looks like an SMP.
>>
>> Which is exactly why it'd be good to start work now. Nearby is a lab
>> with 100 Mbit Ethernet and Gigabit Ethernet. Both are very fast, and
>> aside from using a Paragon backplane, the Gigabit is about as fast as
>> it gets. This is *now*; in 5 years Gigabit Ethernet will be a consumer
>> product. Imagine a 16-node dual P-II Linux cluster on a switched
>> full-duplex gigabit network... Right now it's pretty spendy (if you
>> forgive the fact that there isn't free clustering in the kernel), but
>> it *is* possible. Flash forward five years, when the P-II can be
>> bought at a garage sale and a Gigabit Ethernet switch will cost $50.
>
>OK, let's look at some numbers and compare. 100 Mb/s Ethernet is now a
>commodity item; Gb/s is still experimental. So we have networks with
>10 MB/s bandwidth and 2 millisecond latencies.
>
>Now let's look at the bandwidth of a modern, commodity computer, say
>a PII. With 100 MHz SDRAM, a 4-1-1-1 burst gives you 8 bytes per step;
>with 4 steps that's 32 bytes for the burst. It takes 7 cycles. Let's
>say 8 to make the maths simpler. So 8 cycles gives 32 bytes. That's 4
>bytes per cycle: 400 MBytes/s. The latency is going to be well under
>200 nanoseconds.
>
>So: we have a factor of 40 in bandwidth and 10000 in latency. And this
>comparison is only done with the boring 64 bit bus in your average PC.
>It gets worse if you look at a 256 bit bus: bandwidth goes to 1.6 GB/s,
>which is 160 times better than current Ethernet.

I agree with this analysis. But if we compare networking latencies to
disk latencies, they look quite similar. We find it quite useful to
implement virtual memory and mmap'd files and leave the details of
access to the kernel. Programmers have the option to manage memory
or file access "better" on their own, and sometimes they do, but we
still find that providing it as a service in the kernel is very useful
and productive.
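
As a rough illustration (an untested sketch, assuming an ordinary POSIX
system; the file name is just a placeholder), an application can mmap a
file, touch the memory, and leave all of the paging and disk scheduling
to the kernel:

/* Untested sketch, POSIX assumed; "data.bin" is a placeholder name. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;
	unsigned char *p;
	long i, sum = 0;
	int fd = open("data.bin", O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;
	p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	for (i = 0; i < st.st_size; i++)
		sum += p[i];	/* kernel faults pages in as we touch them */
	printf("sum = %ld\n", sum);
	munmap(p, st.st_size);
	close(fd);
	return 0;
}

The same program written with explicit read() calls and hand-managed
buffers can sometimes be made faster, but the mmap version gets
reasonable behaviour for free from the kernel.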

SMP kernel programming and threaded or SMP-ready applications will
improve significantly over the next five years because CPU designers
will implement SMP on a chip...because it's simpler and more effective
than pushing the multiscalar/superscalar envelope any further.
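
To make "SMP-ready" concrete (a minimal sketch, assuming POSIX threads;
the sizes and names are only for illustration): an application just has
to split its work across threads, and an SMP kernel can then schedule
those threads on separate CPUs:

/* Minimal sketch, assumes POSIX threads (link with -lpthread). */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static int data[N];

struct range { int lo, hi; long sum; };

static void *worker(void *arg)
{
	struct range *r = arg;
	int i;

	for (i = r->lo; i < r->hi; i++)
		r->sum += data[i];
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	struct range a = { 0, N / 2, 0 };
	struct range b = { N / 2, N, 0 };
	int i;

	for (i = 0; i < N; i++)
		data[i] = 1;
	pthread_create(&t1, NULL, worker, &a);	/* each thread may land on its own CPU */
	pthread_create(&t2, NULL, worker, &b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("total = %ld\n", a.sum + b.sum);
	return 0;
}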

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/