On Thursday 07 October 2004 20:13, Martin J. Bligh wrote:
> It all just seems like a lot of complexity for a fairly obscure set of
> requirements for a very limited group of users, to be honest. Some bits
> (eg partitioning system resources hard in exclusive sets) would seem likely
> to be used by a much broader audience, and thus are rather more attractive.

May I translate the first sentence as: the requirements and usage models described by Paul (SGI), Simon (Bull) and myself (NEC) are "fairly obscure", and the group of users addressed (those mainly running high performance computing (AKA HPC) applications) is "very limited"? If this is what you want to say, then it's you whose view is very limited. Maybe I'm wrong about what you really wanted to say, but I remember similar arguments from your side when we discussed benchmark results in the context of the node affine scheduler.

This "very limited group of users" (a small part of them listed at www.top500.org) is who has driven computer technology, processor design and network interconnect technology forward since the 1950s. Their requirements on the operating system are rather limited, and that might be the reason why kernel developers tend to ignore them. All that counts for HPC is measured in GigaFLOPS or TeraFLOPS, not in elapsed seconds for a kernel compile, AIM-7, Spec-SDET or Javabench. The way of using these machines IS different from what YOU experience in day-to-day work, and Linux is not yet where it should be (though getting close). Paul's endurance in this thread is certainly influenced by the perspective of soon having to support a 20x512 CPU NUMA cluster at NASA...

As a side note: put in the right context, your statement about fairly obscure requirements for a very limited group of users is a marketing argument ... against IBM.