On Thursday 07 October 2004 20:13, Martin J. Bligh wrote:
> It all just seems like a lot of complexity for a fairly obscure set of
> requirements for a very limited group of users, to be honest. Some bits
> (eg partitioning system resources hard in exclusive sets) would seem likely
> to be used by a much broader audience, and thus are rather more attractive.
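[For readers following along: a minimal sketch of what "partitioning system
resources hard in exclusive sets" could look like for an HPC job, using the
cpuset pseudo-filesystem interface discussed in this thread. The mount point
/dev/cpuset and the control file names (cpus, mems, cpu_exclusive,
mem_exclusive, tasks) are assumptions based on Paul's patch, not something
Martin's message spells out.]

    #!/usr/bin/env python
    # Hypothetical illustration: carve out an exclusive cpuset for an HPC job.
    # Paths and file names are assumed from the cpuset patch under discussion.
    import os

    CPUSET_ROOT = "/dev/cpuset"               # assumed mount point of the cpuset fs
    JOB = os.path.join(CPUSET_ROOT, "hpcjob")

    def write(path, value):
        """Write a single value into a cpuset control file."""
        f = open(path, "w")
        f.write(value)
        f.close()

    os.mkdir(JOB)                                         # create the child cpuset
    write(os.path.join(JOB, "cpus"), "4-7")               # CPUs reserved for the job
    write(os.path.join(JOB, "mems"), "1")                 # memory node reserved for the job
    write(os.path.join(JOB, "cpu_exclusive"), "1")        # no sibling set may overlap these CPUs
    write(os.path.join(JOB, "mem_exclusive"), "1")        # ... nor this memory node
    write(os.path.join(JOB, "tasks"), str(os.getpid()))   # move the current task into the set

The point of the exclusive flags in this sketch is that the batch scheduler
can hand a job a fenced-off slice of CPUs and memory nodes that nothing else
on the machine will be scheduled onto.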
May I translate the first sentence as: the requirements and usage
models described by Paul (SGI), Simon (Bull) and myself (NEC) are
"fairly obscure", and the group of users addressed (those mainly
running high performance computing (HPC) applications) is "very
limited"? If that is what you wanted to say, then it's you whose view
is very limited. Maybe I'm wrong about what you really meant, but I
remember similar arguments from your side when we discussed benchmark
results in the context of the node affine scheduler.
This "very limited group of users" (small part of them listed in
www.top500.org) is who drives computer technology, processor design,
network interconnect technology forward since the 1950s. Their
requirements on the operating system are rather limited and that might
be the reason why kernel developers tend to ignore them. All that
counts for HPC is measured in GigaFLOPS or TeraFLOPS, not in elapsed
seconds for a kernel compile, AIM-7, Spec-SDET or Javabench. The way
of using these machines IS different from what YOU experience in day
by day work and Linux is not yet where it should be (though getting
close). Paul's endurance in this thread is certainly influenced by the
perspective of having to support soon a 20x512 CPU NUMA cluster at
NASA...
As a side note: put in the right context, your statement about fairly
obscure requirements for a very limited group of users is a marketing
argument ... against IBM.
Thanks ;-)
Erich