Posted
by
timothy
on Tuesday September 16, 2008 @11:47AM
from the sick-of-so-called-supercomputers-running-linux dept.

fetusbear writes with a ZDNet story that says "'Microsoft and Cray are set to unveil on September 16 the Cray CX1, a compact supercomputer running Windows HPC Server 2008. The pair is expected to tout the new offering as "the most affordable supercomputer Cray has ever offered," with pricing starting at $25,000.' Although this would be the lowest cost hardware ever offered by Cray, it would also be the most expensive desktop ever offered by Microsoft."

When has MS ever seen extra capacity and said to themselves that those cycles belong to the customer?
Like the linux kernel developers are any better...every OS maker is greedy about increased CPU power. I first ran Linux in 1995 and it isn't that much faster now.

I just use WindowMaker as my desktop and turn off all the services I don't want. It's quite fast for me.

The quality of discussion is no longer assisted by the moderation system because it has been subject to gaming. Humorous comments should be highlighted for enjoyment, but that's what the "funny" mod is for--it doesn't mean the poster has anything valuable to contribute, which is what the mod points are for. It doesn't mean the poster is considered to provide constructive, valuable comments as a general rule, which is what karma is for.

Like the linux kernel developers are any better...every OS maker is greedy about increased CPU power. I first ran Linux in 1995 and it isn't that much faster now.

Given that the Linux kernel is used in embedded systems with a tiny fraction of your desktop's RAM and CPU power, I'd call it pretty darned safe that the kernel isn't your problem. It's gotten somewhat bigger -- which is why 2.2 and 2.4 kernels are still in use in smaller environments -- but on any system with over 100MB of RAM, you're not going to notice.

Now, if you want to complain about application developers taking advantage of hardware resources (inclusive of the GNOME and KDE folks, browser developers, and the like), feel free.

When I upgraded from 2.4.24 to one of the early 2.6 releases I was astounded at how much faster things felt. On a very modest laptop (1.3 GHz Pentium-M, 512M RAM, 30G 5400 RPM hard drive) from a fresh boot I fired up OpenOffice, Konqueror, Eclipse, Firefox (might have still been Mozilla then, I forget) all at the same time, and the desktop was still liquid smooth and completely responsive. Needless to say, a similar task on 2.4 felt much slower: just getting the K menu to open again so I could launch another program took noticeably longer.

Newer kernels are actually faster in a lot of cases too, particularly with scalability, but lots of other optimizations have been done as well, as many kernel developers keep a very close eye on performance. Also, GCC has gotten better over time, and likely optimizes the kernel quite a bit better now than it could several years ago.

Funny enough, GCC has become orders of magnitude slower at compiling because it now supports much more sophisticated optimisations. Hardware has moved forward faster than GCC though, so they're well in the green.

There was a Microsoft podcast where some Microsoft programmers were asked about the future of the API they had developed, and one thought was that every DCOM/COM/kernel object would have its own lock. The attitude was "Hey, you'll have 80 cores on every machine, you'll be able to afford it!"
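For illustration only, the per-object-lock idea might look something like this in C with pthreads. The `object_t` type and function names are made up for the sketch and are not any real COM/DCOM API; the point is that every object pays lock/unlock overhead on each access:

```c
#include <pthread.h>

/* Hypothetical sketch: every object carries its own mutex, so even
 * uncontended accesses pay locking overhead -- affordable, the thinking
 * goes, when cores are plentiful. Not a real COM/DCOM interface. */
typedef struct {
    pthread_mutex_t lock;   /* one lock per object */
    long refcount;
} object_t;

void object_init(object_t *o)
{
    pthread_mutex_init(&o->lock, NULL);
    o->refcount = 1;
}

void object_addref(object_t *o)
{
    pthread_mutex_lock(&o->lock);
    o->refcount++;
    pthread_mutex_unlock(&o->lock);
}

long object_release(object_t *o)
{
    pthread_mutex_lock(&o->lock);
    long n = --o->refcount;
    pthread_mutex_unlock(&o->lock);
    if (n == 0)
        pthread_mutex_destroy(&o->lock);  /* last reference gone */
    return n;
}
```

The trade-off is classic: fine-grained per-object locking scales better under contention than one big lock, at the cost of memory and per-access overhead.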

If you want to cut off the air that Linux breathes, as Microsoft certainly does, one of the choke points where you try to get your Windows tentacles wrapped around is supercomputing, or what people for some reason now call high performance computing. But to take on Linux in HPC requires a slightly different tack than what worked for Windows in the data center, and it requires something a little more subtle than the cheap software and portability across architectures that made Linux the darling of academic, government, and corporate supercomputing centers in a mere decade, supplanting Unix.

Microsoft's strategy - one that no supercomputer maker and no X64 chip maker can ignore - is to attack from the bottom, to find those myriad new HPC users who never learned Unix, never learned Linux, and have no desire to. This strategy is what moved Windows from the desktop to the data center in the 1990s, and it worked so brilliantly that Windows machines account for more than two-thirds of server revenues each quarter and the lion's share of shipments. People use the software they are comfortable with, and Linux was an easy transition for Unix shops, just as moving from a Windows desktop to Windows servers is relatively simple.

So Cray is trying to democratize the supercomputer-- just as DEC democratized the mainframe.

Man, now even with buying a supercomputer we have to pay the Microsoft tax. We should sign a petition for them to sell the computers with Linux on them. Then we can drop the price to $24,900. That's WAY better.

Oh please. This really isn't "news for nerds". Maybe news for fools, but all of us here have known for months that this would be coming. I mean, what else can you imagine that would run Vista smoothly?

If they're running their shopping cart on it. I just tried to configure one and got the following error. I mean, honestly, what has happened to Cray if they're releasing applications that don't handle simple CRUD exceptions? This would earn an F in high-school-level computer science, and releasing it into production should be enough to tank their stock:

Server Error in '/configurator' Application.

An item with the same key has already been added.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Stack Trace:

[ArgumentException: An item with the same key has already been added.]...

...is not actually a "desktop". It's not even "a" computer. It's a cluster, and Cray could definitely do better than this. Especially considering Unisys has built computers (no, not clusters) with a lot of processors for a long time, many of them Windows-capable.
So...
Cray builds a cluster, Microsoft gets some free ad space for HPC Server. Hooray!

Are you trying to imply that Cray the company is "in name only?" Because that's not at all the case.

It's true that Cray was a shadow of its former self after Tera bought it, but many of the Tera executives have left, and some of what Cray Research used to be has re-emerged.

Now, the CX1 really is Cray in name only. Don't make the mistake of thinking of Cray as a maker of itty-bitty clusters. Oak Ridge has a >30,000-core Cray XT4 [nccs.gov], NERSC has an almost 20,000-core XT4 [nersc.gov], and of course Red Storm [sandia.gov] has over 26,000 cores.

Because by saying it runs Windows, they are implicitly defining the development tools and APIs that it supports.

So an organization that has Windows devs but needs more horsepower is likely to turn to this before looking at a Beowulf cluster.

Now, writing massively parallel code is admittedly a different skill set from ordinary desktop or web development, but starting with the same tools and environments gives them at least a head start.

From my experience programming properly threaded daemons on Linux and Windows, a head start in development on Windows can't even begin to make up for the extremely broken APIs available there. Even condition variables have to be hacked together, since Windows doesn't support the POSIX threading standards.

The "most expensive desktop" claim is also highly inaccurate, and not only because it isn't a desktop. Try looking up the pricing for an Itanium-based HP Superdome with Windows Server 2003 Datacenter Edition. I believe a 32-way system went for around $700,000, while a 128-way system cost several million. $25,000 is pocket change compared to those.

Also, didn't some of the SGI NT boxes sell for around that price ($25k)? Those were true workstations, though, and not servers.

For many of us coders, geeks, and otherwise technically inclined here on Slashdot, this issue is, for some, an emotional outlet where few others exist. Others have Sex, Families, LIFE, and other things to massage our emotional minds over.

To many of us, Microsoft represents something we love to hate, because we can.
There is a disconnect between what works in technology, and what works in business. Many of us downplay the importance of Marketing, Leverage, Tie-in, Competition Analysis, and other stuff you don't learn in your CS program, but only in Business school.

We have a hard time seeing Microsoft as a business, responsible to its shareholders above all else; instead we embrace those orgs who see themselves as some kind of technical crusader, ready to right the wrongs in our industry, using truth, justice, and the American way.

It is the rare geek who can get beyond the technical arguments and embrace the quite logical reasons for why Microsoft has so much marketshare today. The concept of "Barriers to Entry" is rarely discussed when pushing an alternative to MS Office, Exchange Server, or other Microsoft tools.

Instead, we choose to blame the stupid CIO, who in a moment of insanity, decides to go with the Microsoft solution, like 90% of his peers, when he could be that brave, intrepid warrior for good, by going with Linux Servers, Open Office and more.

I mean, who actually uses those integrated Calendar/Scheduling thingies anyway, dammit? If I want to book a conference room 2 weeks in advance, I'll hang a post-it note on the damn door! Easy, and I don't have to deal with integrity testing that blasted Exchange database!

You see, there is nobility in suffering.

If it takes me a week to get my DVD-RW to burn disks under Linux, who cares, if I am a better person for the effort?

It is simply a case of the quest for perfection acting as an enemy of the "good enough".

This is a highly simplistic argument, tongue in cheek, and all that, but true.

And, as always, I got karma to burn bitches, so if you disagree, give it your best shot!

If Cray had spent the time and money this deal cost them on recoding FreeBSD to their needs, they could have rebranded the result as their own OS/hardware package a la Apple, without all of the bugs and security holes that MS has brought to the table.

Cray has had its own custom UNIX distribution since before Slashdot existed.

You can already get Linux on CRAY hardware -- the SGI Altix series. I haven't kept up on the offerings, but I believe there are other *nix-based offerings as well.

The value proposition of something like this is that people who are better at science than programming (you know, most super computer users) get something that makes them more productive than they'd otherwise be. The operating system on a super computer is al

The current "Cray" is actually a new company that used to be called Tera Computer. Their connection with the original Cray is that in 2000 they bought some SGI assets that originated with Cray Research. One suspects that the only asset they really wanted was the Cray name. Ironically, when SGI owned Cray, they tried to phase out the Cray brand — with disastrous results.

Unlike the original Cray Research, Tera/Cray has always been moderately profitable. So this is not a dying gasp by any means.

Cray is pretty much the Monster Cable of the supercomputing world these days, right? A company that offers little to no tangible benefit over its competitors, but gets by on brand recognition alone?

I know that Cray was at the top of the world twenty years ago, because that's what we were taught in 7th grade Computers class, where we learned how to program in BASIC on a room full of TRS-80s; that the four types of computer are microcomputer, minicomputer, mainframe, and supercomputer; and that other popular

Resistance is futile - you will be assimilated!
A final nail in the coffin for traditional Unix. Now, Microsoft scales from tiny devices running in watches, to supercomputers!
Changes to Windows Server 2008 even allow administrators to run the OS on routers (without a UI; even solitaire is removed).
The arguments for Unix in any data center are almost gone.

Cray is just barely more relevant to modern HPC than Silicon Graphics. Whether they're making a PC that runs Linux or a PC that runs Windows, it's still a PC. Yes, a massively parallel one, but it's a PC. The XMT series is the only really innovative thing that distinguishes Cray from the next guy down the street.

Computing has come to the point where commodity hardware can be almost endlessly strung together with commodity equipment to achieve the computing level necessary for most purposes. Furthermore, in the rare cases where it's necessary to go beyond this level, the cost of building a custom machine that outperforms commodity equipment is roughly one to two orders of magnitude more. Bottom line, it's just not cost effective for almost anyone to buy the cool high-end non-commodity gear anymore.

Which means that Cray will be reduced to a company that makes interconnects, like SGI is. Neat engineering, but the interconnects are now becoming commodity gear as well, which means that these companies won't be able to make enough profit to keep engineering as the focus of the company. They'll be forced into being a support/service company of their commodity hardware sold at a meagre 5% profit margin.

The one escape is gone as well--pushing Linux and Windows as the primary (or only) OSes means that they won't have anything special to offer. If, for instance, SGI had aggressively driven Irix, things might have been different for them.

The last front for development in current computing is in the labs of Intel and AMD, working on commodity gear. The days of boutique computing are dying.

You can have up to 8 "blades". Each blade is a dual-socket Xeon board with its own RAM and graphics. The blades are in effect dual-CPU Xeon PCs. The blades are connected to a high-performance Ethernet switch which ties them together in a cluster.

So if you call eight PCs connected to a network a "supercomputer", then this is it.

Vista jokes aside, this is an HPC/Server system, not a desktop. And as such, it's a long way from being the most expensive Windows system you can buy. A fully loaded Sun Fire X4600 M2 [sun.com] can run you more than $35K.

As someone who does science HPC for a living, I am confused. Who actually wants Windows for HPC? What value does it provide that Linux or UNIX doesn't? I've never heard of a single use case where Linux or some UNIX wasn't better by miles.

Microsoft, for one. As of three years ago, 60% of supercomputers were running Linux [forbes.com] and I can only imagine that figure has gotten higher subsequently. Nobody trusts Microsoft for high-end applications, and what's more, it's expensive, too. Microsoft needs a reference application to show its customers that they aren't being left in the penguin's dust.

Serving up one of the crappiest, most broken corporate websites I've seen lately, Cray bedazzles me. They can't be serious, can they? Running a high-throughput, custom piece of hardware on Windows as the primary OS?... Unbelievable.

What oomph does this thing have anyway? 16 quad-core Xeons. 64GB per node. Doesn't sound like that much of a big deal to me. What corners could Cray have cut with the system architecture itself to justify the hype? Won't a smallish blade box or something similar from Sun or IBM wipe the floor with this thing?... Just wondering.

If the Vista kernel was all that people were running, would people be so disappointed with Vista?

I would say no. Because there is a lot more expected of an OS than just the kernel, in most cases. And that is doubly true when the name of your operating system includes the word "windows", since then the operating system includes the kernel, the gui, and several other things that wouldn't be considered as integral to the operating system in other camps.

You have to realize that communication between nodes in a cluster of off the shelf PCs is going to be much slower than the inter-node communication channels used in a Cray.

Any work that requires a lot of communication will always run faster on a real supercomputer versus a cluster of PCs. There will always be a niche for Cray, but their prices will continue to go up as more and more of their repeat customers realize they don't really need what they're getting.

Um, not always. Unless you have a supercomputer and a cluster with equal processor counts; in many cases clusters have more CPUs than supercomputers, and not all algorithms can be run in parallel. So say you have a low-end supercomputer of 50 processors (I did have 32, but I upped it to 50 to keep the math neat) and a cluster of 100 quad-core PCs. Now the program can be broken up into 400 segments, but each segment will take 30 minutes to run, 10 minutes to send data back, and 20 minutes to process and glue t

I disagree, but then again, I work in the HPC industry.

1. Standard computers have already taken over all of those jobs that used to require a supercomputer. There's no more market to lose. HPC is a 6-7 billion dollar market. The TAM is growing slower than the rest of the IT industry, but it's still a large niche market.

2. Clusters got really popular for a few years, but have really fallen out of favor at the high end of the HPC market. That said, the difference between a high-end super, and a cluster, is rather small. Thankfully the price difference is shrinking too. Moreover, this product IS a cluster. It looks like an attempt, by Cray, to get into the low end of the HPC market. Cray, like everyone else, would like to be the company taking market share away from itself, rather than let someone else take it.

3. IBM has a compelling strategy of reusing their high-end POWER-X processor super-servers and selling them as supercomputers. The problem with this is that they are obscenely expensive as supercomputers. A high-end database server has a whole pile of functionality that is completely unnecessary for HPC jobs, both in hardware and in software. Big iron servers are also WAY more expensive, per-processor, than a super. As such, IBM is also making supers out of commodity clusters, commodity clusters with CELL coprocessors, and BlueGene, which is much closer to a Cray XT than it is to an IBM mainframe or superserver. I would argue that IBM's diversity may work against it in the HPC market, as it tries to fit a round peg into a square hole.

I'm not sure Cray will be very successful with this CX1 product, or generally with selling to the low-end HPC market. That, however, is no reason to believe that there is no need for vendors specialized in HPC systems. Cray has made quite a comeback in the last few years. The reason one thinks of Cray as a dinosaur is that the HPC market is so much smaller now, relative to the entire IT industry, compared to the 1980s. Nonetheless, it's still an important niche.

Yeah. I think "ummm? WTF?" pretty much says it all. This is the most breathtakingly silly idea I've heard in a long time. If they haven't picked a model name for this beast yet, I nominate "OMG PONIES!!".

Surprisingly enough, people choose Windows for reasons other than legacy. Maybe they have a lot of knowledgeable Windows developers, or the company has some stupid policy about which OSes you can use, or maybe they actually prefer to work with Windows.