> : I realized that I might someday want to run my app on a 100 node cluster
> : instead of just my SMP system... MPI seemed to be a better
> : (performance-wise) solution to cluster programming, although it did
> : require non-trivial effort to program with at first.
> Agreed on both points; though I claim that it is good for your brain to fit
> your computation into messages; it tends to make some other nice things
> happen in the design of your code.
Yes, I can agree with that. I gained a deeper understanding of what data
is needed where by using MPI instead of DSM.
> : It seemed wiser to write my software for MPI and deal with the
> : difficulties, and with it being non-optimal on my SMP system. (Although
> : I've never tested it, I'm sure that shared memory on an SMP system is
> : *MUCH* faster than MPI)...
> Actually, only if you don't do anything to the MPI libraries. I.e., they
> are doing networking through the loopback device.
Yes, though I seem to recall that the Linux loopback device does skip
a fair portion of the stack.
> SGI took the libs, gutted 'em, leaving just the interfaces, and tuned them
> especially for SMP machines (yeah, they can still call out to the networking
> ones when they need to).
I posted yesterday to your cluster list about doing this on Linux.. I
would very much like to see this.. Perhaps it would be wise to first
optimize MPI for SMP systems, then optimize MPI for Linux clusters.
> They even did some VM hacking such that they could map another process'
> address space so that a send() turned into
>     find the process associated with the destination
>     make sure we've already mapped the destination's address space
>     bcopy() the data straight into it
> It was damn close to no more than the bcopy() cost.
Sounds good here..
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org