Re: Remote fork() and Parallel programming

Martin Konold (konold@alpha.tat.physik.uni-tuebingen.de)
Tue, 16 Jun 1998 09:44:02 +0200 (CEST)


On Mon, 15 Jun 1998, Larry McVoy wrote:

> themselves not only with the parallelization and synchronization, but
> also with the real location of the memory. Right? So where's that
> ease of use now?

One-sided communication that works!

> Here's another data point: scientific programmers on SMP machines
> frequently use MPI instead of shared memory. The programming model
> is simple, fast, and it works. Doesn't that seem completely crazy?
> Use message passing on a SMP? I wonder why they do that.

Because they need portable programs. SHMEM implementations tend to be
less portable than MPI code. That said, shmem programs in scientific
computing are much easier to write and debug than any message-passing
code. Scientific problems are often easier to solve on DSM machines
because it is frequently the increase in global memory itself that makes
the problem tractable; a Cray T3E, for example, can have 64 GB of global
memory.
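The trade-off above (message passing makes data placement explicit and portable; shared memory is easier to write but puts synchronization on the programmer) can be sketched in miniature. This is a hypothetical illustration using Python's multiprocessing module rather than MPI or Cray SHMEM; the same partial-sum reduction is written once in each style:

```python
import multiprocessing as mp

def worker_msg(rank, queue):
    # Message-passing style: each worker computes a partial sum and
    # sends it explicitly; no memory is shared between processes.
    queue.put(sum(range(rank * 100, (rank + 1) * 100)))

def worker_shm(rank, total, lock):
    # Shared-memory style: workers update one global counter directly.
    # Simpler to express, but synchronization is now the programmer's job.
    partial = sum(range(rank * 100, (rank + 1) * 100))
    with lock:
        total.value += partial

def main():
    nproc = 4
    expected = sum(range(nproc * 100))

    # --- message passing: results travel through a queue ---
    q = mp.Queue()
    procs = [mp.Process(target=worker_msg, args=(r, q)) for r in range(nproc)]
    for p in procs:
        p.start()
    msg_total = sum(q.get() for _ in range(nproc))
    for p in procs:
        p.join()

    # --- shared memory: results accumulate in one shared counter ---
    total = mp.Value('q', 0)  # shared 64-bit integer
    lock = mp.Lock()
    procs = [mp.Process(target=worker_shm, args=(r, total, lock))
             for r in range(nproc)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    assert msg_total == expected == total.value
    return msg_total

if __name__ == "__main__":
    print(main())  # -> 79800
```

Remove the lock from the shared-memory version and the race appears immediately, which is exactly the debugging burden the message-passing version avoids by construction.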

For example, I am seriously considering setting up a 64-node SCI-based
Linux cluster...

MPP gives your scientific application not only more CPUs but also more
memory.

Yours,
-- martin

// Martin Konold, Herrenbergerstr. 14, 72070 Tuebingen, Germany //
// Email: konold@kde.org //
RMSish is worse than GNUish (the fanatical/religious intensification
of GNUish; hence GNUish is "only" RMSish-- ;-)
-- Harald Koenig --

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu