This completely misses the point. The point is *not* that in 10 years
we're going to have Tb/s bandwidths and microsecond latencies for
networks, and thus that those networks will be "fast enough". The
point is that in 10 years networks are still going to be a lot slower
than anything inside a computer.
No matter how fast networking technology becomes, it's never going to
be negligible, *because it's always going to be orders of magnitude
slower than the computer itself*. So it's *always going to be faster
to use shared-memory (SHM) locks than distributed-shared-memory (DSM)
locks*. The problem space will expand to fill the available hardware.
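To put rough numbers on "orders of magnitude" (the figures below are
ballpark assumptions of mine, not measurements of any particular
hardware):

```python
# Back-of-envelope comparison of a local shared-memory lock against
# a lock that needs a network round trip. Both numbers are assumed
# ballpark figures, not benchmarks.
local_lock_ns = 100            # uncontended SHM lock: ~100 ns (assumption)
remote_lock_ns = 1_000_000     # network round trip: ~1 ms (assumption)

ratio = remote_lock_ns // local_lock_ns
print(f"remote/local lock cost ratio: {ratio}x")  # 10000x = 4 orders of magnitude
```

Make the network ten times faster and the ratio is still 1000x; the
gap shrinks, but it never becomes negligible.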
My recent goal was sub-second rendering times on a modest dataset (13
MBytes) using a cluster of machines ranging from dual P133 to dual PII
400. That goal was achieved, and I'm now aiming for rendering times
under 100 milliseconds.
If you hand me computers and a network that are orders of magnitude
faster, do you think I'm going to keep doing the same thing?
No way! I'm going to want render times under 30 milliseconds and
datasets of a few hundred MBytes. I'm still going to care that
grabbing a remote lock is going to be 4+ orders of magnitude more
expensive than grabbing a local lock. So I'm still going to do SMP for
the local nodes and MPI for the remote nodes, because I need to think
about the two cases in different ways. I use locks where they're
fast, because there they have real benefits and they're neater; but
where locks would be slow, I do things a different way, and there the
locks just get in the way: MPI is neater.
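The split can be sketched in miniature (Python purely as
illustration, with names of my own invention: threads stand in for
the SMP side, and a separate process exchanging messages stands in
for a remote MPI node):

```python
import threading
from multiprocessing import Process, Pipe

# Intra-node: threads share memory, so a cheap lock is the right tool.
counter = 0
lock = threading.Lock()

def local_worker(n):
    global counter
    for _ in range(n):
        with lock:            # fast: just a memory operation
            counter += 1

threads = [threading.Thread(target=local_worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("local total:", counter)        # 4000

# Inter-node: no shared memory, so send work as a message and let the
# far side reply -- the MPI style, sketched here with a Pipe.
def remote_worker(conn):
    total = sum(conn.recv())          # receive work, compute, reply
    conn.send(total)
    conn.close()

parent, child = Pipe()
p = Process(target=remote_worker, args=(child,))
p.start()
parent.send([1, 2, 3, 4])
remote_total = parent.recv()
p.join()
print("remote total:", remote_total)  # 10
```

The point of the sketch: nothing on the message-passing side ever
pretends the far end is local memory, which is exactly what DSM locks
would pretend.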
Different problems require different solutions. Even in the same
programme. One size does not fit all.
> Once you get past a certain speed the human cannot perceive the
> difference. Cumulatively, yes, but instant-to-instant, no.
> Starfire servers have a 'low-latency' backplane in the 500ns range.
> .5ms.... Ever ping a system across a Gigabit Ethernet switch? Sub 1ms
> unless it's loaded pretty heavy.
Humans perceive the difference, because they throw bigger problems at
the hardware, and the bigger problems magnify those so-called "really
fast overheads".
> Acceptable, yes.
No way! I'll not throw away 4 orders of magnitude in performance no
matter how fast the hardware is. If ever the hardware is much faster
than I need for any of my computational problems, I'll start running a
Copy[1] of myself on it. And if the hardware is still mostly idling,
I'll migrate to it[2].
No such thing as "fast enough".
[1] "Permutation City", Greg Egan (a good read)
[2] "Diaspora", Greg Egan (even better)
Regards,
Richard....
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/