Re: 1024-way SMP?! (was Re: linux-kernel-digest V1 #2914)

Paul Barton-Davis (pbd@Op.Net)
Wed, 02 Dec 1998 12:38:05 -0500

>SMP is an architecture.

I don't completely agree. SMP is a decision made by software about how
to execute on a given architecture. You could take any of the current
batch of low-order SMP machines (e.g. P-II, Xeon, Origin) and run them
in a master/slave configuration, for example. The "architecture"
hasn't altered. It is true that various architectures will make SMP
easy, hard or impossible. But given an architecture, there are always
choices concerning what you do with it.

Also, the classification "SMP" has a point of view. It might be the
point of view of an application, of the user/kernel boundary, the
kernel(s) themselves, the memory architecture, or the system bus. A
system could be classified as SMP at one or more of those levels, and
yet not be SMP viewed at others.

>If you're doing something like this, it may be
>SSI (single system image), so that it looks like SMP to a programmer,
>but it's still not SMP. It is, in fact, clustering.

Well, there's a hard-to-draw line between "clustering" and NUMA. I
tend to draw that line at least at the point where the individual
processors are in the same box, on the same power supply, and using
shared resources other than their memory and inter-processor interrupt
lines. Beowulf is clearly clustering; Alewife, the KSR and similar
machines are, to me, clearly NUMA with/without SSI.

>Hence the original objection that 1024-way SMP is _not_ the way to go!

Hence my feeling that, since SMP is really a software issue (albeit
badly or well supported by hardware), it can be the way to go.

>And yes, 1024-way NUMA SSI clustering would be wonderful...

Can I call it SMP, just so we don't miss any acronyms? :)
Personally, I still really like the Tera, although whether the
government is still pouring $1M or so a month down their particular
drain, I don't know.

