It doesn't matter what you use, as long as communication with remote
nodes is explicitly "hard", in the sense that you know that such
communications aren't going to be fast. The programmer shouldn't be
encouraged to "quickly grab a lock and fiddle some shared data", when
said lock is a network object and, even worse, the data is also a
network object. Such an approach is foolish, because grabbing the lock
(let alone moving "shared" memory around) cannot be a fast operation.
Whether or not you want to treat local CPUs the same way as remote
nodes is of secondary importance, although I'd argue against a model
where you're forced to use message passing on an SMP machine: that
doesn't make the best possible use of the resources.
An optimal application will make use of both threads (for the CPUs on
an SMP machine) and message passing (for remote nodes), and will do so
explicitly.
Using DIPC or MOSIX is like using the "-parallel" switch on a
compiler. Sure, you'll usually get a performance boost (though
sometimes performance will drop through 13 floors), but it's not going
to be anywhere near what you'd get by making the parallelism explicit.
Regards,
Richard....
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/