blades – HPCwire
https://www.hpcwire.com
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Assembling Blue Waters
Wed, 20 Mar 2013 – https://www.hpcwire.com/2013/03/20/assembling_blue_waters/

As NCSA’s Blue Waters supercomputer approaches full service status, we thought it would be appropriate to see how the machine was built.

This video from the National Center for Supercomputing Applications (NCSA) takes viewers inside the hallowed hallways of Cray’s manufacturing center in Chippewa Falls, Wisconsin, for a look at the primary components that make up a Cray G34 compute blade assembly for the XE6 computer system.

Steve Samse, director of Cray’s manufacturing logistics group, shows off the compute blades that include Blue Waters’ processors, interconnect and memory: “the heart of what will be one of the most powerful supercomputers in the world.”

In the year since this video was filmed, work on the system was completed and Blue Waters was installed at NCSA. The 11.6 petaflops (peak) supercomputer contains 237 XE cabinets, each with 24 blade assemblies, and 32 cabinets of the Cray XK6 supercomputer with NVIDIA Tesla GPU computing capability.
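
For a rough sense of scale, those cabinet counts translate into blade and node totals as follows. This is a back-of-the-envelope sketch in Python: the 24-blades-per-cabinet figure comes from the text above, while the four-nodes-per-blade packing is an assumption based on Cray's standard XE/XK blade design, not something the article states.

# Rough tally of Blue Waters' blade and node counts from the cabinet figures
# quoted above. Blades per cabinet is from the article; nodes per blade is an
# assumed value (standard Cray XE/XK packaging), not an official spec.
XE_CABINETS = 237
XK_CABINETS = 32
BLADES_PER_CABINET = 24   # per the article
NODES_PER_BLADE = 4       # assumption: four nodes per Cray XE/XK blade

xe_blades = XE_CABINETS * BLADES_PER_CABINET
xk_blades = XK_CABINETS * BLADES_PER_CABINET
print(f"XE blades: {xe_blades}, XE nodes: {xe_blades * NODES_PER_BLADE}")
print(f"XK blades: {xk_blades}, XK nodes: {xk_blades * NODES_PER_BLADE}")
# XE blades: 5688, XE nodes: 22752
# XK blades: 768, XK nodes: 3072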

Currently available in “friendly-user” mode for NCSA-approved teams, Blue Waters provides sustained performance of 1 petaflop or more on a range of real-world science and engineering applications.

Dell Revs Up HPC Strategy with New Products and Market Focus
Thu, 09 Sep 2010 – https://www.hpcwire.com/2010/09/09/dell_revs_up_hpc_strategy_with_new_products_and_market_focus/

In the HPC market, Dell has established itself as the number three system vendor, trailing only its larger competitors, HP and IBM. Known for offering no-frills performance servers at reasonable prices, Dell has garnered a particularly strong following in higher education and government labs, especially for small and mid-sized clusters. But a recent spate of purpose-built HPC products from the company points to a subtle shift in Dell’s high performance computing strategy.

During a recent conversation with Donnie Bell, senior manager of HPC Solutions in the Dell Product Group, and Tim Carroll, Dell’s HPC Global Lead, the two reps outlined how the company is treating HPC more as a distinct opportunity, and less like an extension of their enterprise business. The result is that Dell has developed more HPC-specific products and is backing that up with more system testing and validation prior to deployment. “It’s not just about throwing gear out there,” explained Bell. “It’s got to be the gear that they want, put together in the solution they want.”

The shift in strategy has come about over the last three years. Attracted by the bullish HPC market (or at least bullish forecasts thereof) and a seemingly untapped demand for high performance computing, Dell is focusing particularly on the so-called “missing middle,” a term the Council on Competitiveness came up with to identify the potentially large group of unserved users between entry-level and high-end HPC practitioners. “That’s the market that Michael [Dell] said we’re going to invest in,” said Bell.

Of course, what these users ultimately want are turnkey systems that are as easy to use as their desktops and don’t require an advanced degree in high performance computing to maintain. So far this is beyond the reach of Dell, as well as any of its competitors. Making HPC clusters act like appliances is still the stuff of fantasy.

Where Dell is staking out new ground is in its product mix, which now includes a range of HPC-centric offerings. It wasn’t too long ago that the PowerEdge 1950 was the workhorse server for Dell’s HPC customers. For all intents and purposes, though, the 1950 was an enterprise server pressed into HPC service by necessity. Today Dell offers servers and blades aimed specifically at the performance sector, including the latest HPC-friendly gear: the PowerEdge C6100, M610x, and C410x.

The C6100 is the company’s new HPC workhorse, an ultra-dense rackmount server that encapsulates four dual-socket nodes in a 2U form factor. It offers twice the density of an average dual-socket server and is even 20 percent denser than blades. Dell accomplished this feat by sharing the internal infrastructure: power supply, fans and backplane. You can service the nodes individually, and the hard disk drives (either 2.5″ or 3.5″) are hot-pluggable.

The C6100 is available with either Intel “Nehalem” 5500 or “Westmere” 5600 processors. Outfitted with 6-core Westmere CPUs, a single 2U box will deliver 48 cores. Because of its density and power, it’s specifically targeted as a building block for HPC clusters, but can also be used for general Web and cloud installations, where maximum performance is a priority. The C6100 has been shipping since the spring.
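
The density arithmetic is easy to verify. Here is a quick Python sketch; the 42U rack height used for the rack-level totals is my assumption for illustration, not a Dell figure.

# Core counts for the PowerEdge C6100: four dual-socket nodes of six-core
# Westmere in a 2U chassis. The 42U rack height is an assumed value.
NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 6      # six-core Xeon 5600 ("Westmere")
CHASSIS_U = 2
RACK_U = 42               # assumption: standard 42U rack

cores_per_chassis = NODES_PER_CHASSIS * SOCKETS_PER_NODE * CORES_PER_SOCKET
chassis_per_rack = RACK_U // CHASSIS_U
print(f"Cores per 2U chassis: {cores_per_chassis}")                  # 48
print(f"Nodes per rack: {chassis_per_rack * NODES_PER_CHASSIS}")     # 84
print(f"Cores per rack: {chassis_per_rack * cores_per_chassis}")     # 1008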

Dell recently announced C6100 deployments at the University of Colorado and University of Kentucky. Both systems will be supporting a range of scientific research at those institutions, including climatology, genomics, energy studies, pharmaceutical design, and physics, among others. The Colorado system is big enough to warrant the number 31 spot on the TOP500 list.

The brand new PowerEdge C6105 is the AMD counterpart to the C6100, offering Opteron “Lisbon” 4000 series processors in the same dense 2U enclosure. The 4000 Opterons are the less performant, lower power siblings to the Opteron 6000 processors, so the C6105 is geared more toward the large-scale cloud and Web 2.0 deployments than strict HPC. Availability is still a couple of months away.

On the blade side, the dual-socket PowerEdge M610x is an M610 variant for HPC that includes two x16 PCIe Gen2 slots and two I/O mezzanine cards. (The M610, by the way, is the building block for the newly announced 300 teraflop Lonestar super at TACC.) The PCIe slots on the M610x let you install a single NVIDIA Tesla (Fermi-class) GPU card, if you want to accelerate data-parallel workloads; or perhaps a Fusion-io ioDrive Duo, if you’re looking for ultra-fast storage. The two mezzanine slots make dual-rail InfiniBand a possibility, but you can also slot in Ethernet, Fibre Channel, or whatever networking combo you might desire. Like the C6100, the M610x is available with quad-core Xeon 5500s or six-core Xeon 5600s.

Because of the extra connectivity options, the M610x is a full-height blade (unlike its half-height M610 sibling), but still fits neatly in Dell’s M1000e blade chassis. The new blade was announced in June and has been shipping for a couple of months.

If a single GPU per server isn’t enough, Dell is now offering the PowerEdge C410x, a CPU-less 3U box that can house up to 16 Tesla M2050 GPU modules. As of today, that represents the biggest commercial GPGPU box on the market. At the maximum 16-GPU configuration, the C410x can deliver 16.5 teraflops of raw performance.

Of course, tapping into that requires a CPU host, so the C410x conveniently allows connectivity for up to 8 servers. The idea here is to decouple the CPU and GPU so that a customer can mix and match the processor ratios as needed by the application. This could be especially useful in those cases that can take advantage of a high GPU:CPU ratio, like some seismic and physics codes, or where the work is such that the optimal processor ratio varies from one application to another.
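
For the curious, the 16.5-teraflop figure and the flexible GPU-to-host ratios pencil out as follows. The per-GPU number in this Python sketch is the published single-precision peak for the Tesla M2050; reading the aggregate figure as single precision is my inference, not a Dell statement.

# Peak-performance check for a fully populated C410x, plus the GPU:host
# ratios the decoupled design allows. Per-GPU peak is the M2050's published
# single-precision figure; treating the 16.5 TF claim as single precision
# is an assumption.
M2050_SP_TFLOPS = 1.03
MAX_GPUS = 16
MAX_HOSTS = 8

print(f"Peak with 16 GPUs: {MAX_GPUS * M2050_SP_TFLOPS:.1f} TF")   # ~16.5 TF
for hosts in (1, 2, 4, 8):
    print(f"{hosts} host server(s): {MAX_GPUS // hosts} GPUs per host")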

If you’re getting the idea that Dell is a little GPU-happy these days, you’re right. According to Bell, the company believes a lot of their HPC customers will be opting for GPU acceleration now, as they chase ever denser performance. Even the new Dell Precision T7500 has a slot for a Tesla C2050 GPU, for those CUDA desktop apps that need a few hundred extra gigaflops to really shine.

“Quantitatively, there are so many more thousands of researchers doing their work on desktops,” said Dell’s Tim Carroll. “But it’s only a matter of time before those people are performing their research on a server somewhere, whether it’s their own, the institution’s, or in the cloud.”

Whether Dell’s new HPC investment yields big dividends is difficult to gauge. Because of the sharp downturn in the global economy in the last couple of years, IT spending has dipped considerably, although less so for HPC. According to Carroll, though, Dell’s HPC business is “seeing growth across the board,” adding that the market seems to be really breaking loose over the last three to four months.

The latest IDC numbers for 2009, which split out HPC system revenue by vendor, have Dell with a 12.7 percent market share. That’s about half of IBM’s 29.6 percent share and HP’s 28.2 percent. But for mid-sized (departmental) systems, Dell is at 29.8 percent, edged out only by HP at 35.6 percent. That’s a good starting place, especially considering that the size of the HPC pie is forecast to start growing again now that the recession seems to be easing.

Despite the evolution in strategy, Dell still relies on partnerships with vendors like Platform Computing and Terascala to fill in the cluster management and HPC storage pieces of their solution, respectively. And even though the cluster maker is now designing purpose-built HPC systems, it is doing so to fulfill established market demand, rather than for the sake of invention. Contrast that with former HPC maker Sun Microsystems, and its enthusiasm for building exotic hardware, like 3,456-port InfiniBand switches and proximity communication chips.

Dell’s much more conservative innovation strategy is designed to serve the large sweet spot in the middle of the performance market, relying on the acceleration of HPC demand to drive revenue. According to Carroll, the company is still fundamentally about delivering open standards-based commodity clusters, adding, “we want HPC to be widespread and we want to be the ones who deliver that.”

Verari Reboot Paves Way for New HPC Strategy
Thu, 21 Jan 2010 – https://www.hpcwire.com/2010/01/21/verari_reboot_paves_way_for_new_hpc_strategy/
New CEO takes company back to the future.

The re-emergence of Verari Systems this week under new management and a new name marks the end of a difficult chapter for the beleaguered server maker. According to a press release posted on Verari’s Web site on Tuesday, company founder Dave Driggers, along with some unnamed investors, has acquired the assets of the business and relaunched the company as Verari Technologies. As the former CTO — and previous to that, CEO — Driggers is now listed as the company’s new CEO and chairman.

If all that doesn’t seem like a big change to you, that’s because the new company will essentially be like the old one, technology-wise. The big change will come from a more refined customer strategy. The announcement mentions a new focus on blade-based HPC, modular containerized datacenters and blade-based storage. There is also a plan to get back into the consulting biz, with the idea of partnering with other companies to deliver customized solutions.

Verari is not talking a lot to the press these days, but a brief conversation with company spokesperson Mike LaPan did confirm that the new organization will indeed be refocusing on the HPC space. He forwarded me this statement from Dave Driggers:

“Moving forward, Verari Technologies is concentrating on rebuilding our relationships in the High Performance Computing space by focusing on consulting services that deliver advanced, customized solutions for the HPC market. This is where Verari came from, and this is what we are returning to. I am confident that in a short time, you’ll see that our servers, storage and networking solutions will be driving world-record HPC performance again while providing the technological foundation our HPC customers require.”

Verari’s renewed interest in HPC customers harkens back to the company’s roots, when it was known as RackSaver, and later to the early days of Verari Systems. That was before it got the idea to go toe-to-toe with the likes of IBM, HP and Dell for big enterprise accounts. With its energy-efficient server blades and containers, Verari did manage some significant wins with companies like Morgan Stanley, AMD, Microsoft, Qualcomm, Lockheed Martin, Sony Imageworks, and others. But as competing server makers added their own energy-sipping blades and space-saving containers, Verari’s ability to compete against the Tier 1 OEMs on the basis of differentiated technology alone diminished.

The strategy of resizing its ambitions is probably a smart one, especially considering the realities of the server market today. One of the reasons we have Tier 2 and 3 OEMs is that the Tier 1 OEMs tend to ignore smaller and more specialized accounts. I’ve had a number of conversations with HPC customers looking to buy moderate-sized clusters who couldn’t even elicit a bid from IBM or HP, presumably because the sales organizations at these companies are geared toward the big deals. (The customers I’ve talked with characterized the encounters a bit more bluntly.) In any case, if there is room in the HPC server business these days, it’s likely to be for vendors willing to get more intimate with their customers.

It’s worth noting that server vendor Rackable Systems, which had a set of technologies similar to Verari’s, also found itself in a tough situation in 2009. Like Verari, Rackable’s early success with its power-efficient servers was compromised as the Tier 1 OEMs used their size and market reach to grab more of the big enterprise accounts. Unlike Verari, however, Rackable avoided bankruptcy and instead decided to expand its ambitions by acquiring SGI, the idea being to build a synergy between the HPC and enterprise offerings of the two companies. The Rackable/SGI and Verari strategies are not complete opposites, but they are dissimilar enough to offer an interesting test case for the HPC business.

European Vendors Offer Home-Grown Petascale Supers
Thu, 02 Jul 2009 – https://www.hpcwire.com/2009/07/02/european_vendors_offer_home-grown_petascale_supers/

The sinking economy has not been kind to US-based HPC vendors. SiCortex and Woven Systems have gone belly up, while SGI ended up merging with Rackable. Meanwhile, Sun Microsystems is slated to be acquired by Oracle later this year. All of these companies introduced legitimate innovation into HPC, but failed to make a go of it on their own.

As American HPC companies retrench, a new crop of European-based vendors is emerging. In our recent podcast from the International Supercomputing Conference in Hamburg, I mentioned three companies across the pond that have designed some rather interesting high-end HPC machines. Bull (France), T-Platforms (Russia), and Eurotech (Italy) all recently introduced new purpose-built HPC server platforms, although each vendor has taken a slightly different approach.

Bull, which made a relatively recent entry into the HPC business, unveiled its new line of “Extreme Computing” (bullx) HPC systems just prior to ISC. As we reported in our original coverage, Bull added plenty of performance engineering to make sure these systems could scale out to petascale-sized systems. The system building block is a dual-socket Intel Nehalem EP blade with dual on-board QDR InfiniBand and an option for up to two GPUs for extra acceleration. A 7U chassis consists of 18 blades, with six chassis to a rack. A petaflop-sized system (sans GPUs) would consist of a mere 100 racks — about 10,000 blades.
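
That "100 racks, roughly 10,000 blades" figure checks out if you assume top-bin quad-core Nehalem EP parts in every socket. The 2.93 GHz clock and four double-precision flops per cycle per core in this Python sketch are my assumptions, not numbers Bull has published.

# Rough check of Bull's petaflop sizing: 18 blades per 7U chassis, six
# chassis per rack, dual-socket quad-core Nehalem EP blades. Clock speed
# and flops/cycle are assumed values.
BLADES_PER_CHASSIS = 18
CHASSIS_PER_RACK = 6
CORES_PER_BLADE = 2 * 4          # dual-socket, quad-core
GHZ = 2.93                       # assumption: top-bin Nehalem EP
FLOPS_PER_CYCLE = 4              # assumption: SSE, double precision

blades_per_rack = BLADES_PER_CHASSIS * CHASSIS_PER_RACK          # 108
gf_per_blade = CORES_PER_BLADE * GHZ * FLOPS_PER_CYCLE           # ~93.8 GF
racks_for_1pf = 1_000_000 / (gf_per_blade * blades_per_rack)
print(f"Peak per blade: {gf_per_blade:.1f} GF")
print(f"Racks for a peak petaflop: {racks_for_1pf:.0f}")          # ~99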

At ISC, Bull also dropped hints of its upcoming bullx SMP platform due out next year. The company says this architecture will be able to achieve a peak petaflop with just 800 servers. Like the blades, the SMP server will also have an option for GPU acceleration. Bull is mum on the particulars of the SMP design, but a good guess is that it is going to use the upcoming Intel Nehalem EX chips. Nehalem EX can be scaled to eight sockets and up to 128 memory modules per server, which would deliver 64 cores (assuming the 8-core chip) per node and a ton of shared memory.

Also at ISC, T-Platforms was showcasing T-Blade 2, the company’s second-generation blade offering. Based in Moscow, T-Platforms is a seven-year-old company that employs around 140 people. According to the Supercomputers.ru Web project, T-Platforms has about a third of the Russian HPC market, edging out both HP and IBM. The company’s largest installed system is at Moscow State University, a 60 teraflop system based on T-Platforms’ first-generation Intel Harpertown blades. A new 350 teraflop system will be installed at the university in October using the T-Blade 2 hardware. That system is scheduled to be upgraded to a 500 teraflop machine in early 2010.

Like the Bull blade, the T-Blade 2 is also based on dual-socket, quad-core Nehalem nodes, but in an even denser configuration. T-Platforms puts 32 dual-socket nodes in a 7U chassis, delivering 3 peak teraflops. And believe it or not, it’s all air-cooled. A heat sink spans each board from end to end to keep the whole thing from melting. The design uses 10 custom components, including the motherboard, memory modules, an InfiniBand switch board, and a management module, among others.
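
For what it's worth, that per-chassis number is consistent with 64 quad-core Nehalem sockets; as with the Bull arithmetic above, the clock speed and flops-per-cycle in this sketch are my assumptions.

# Per-chassis peak for the T-Blade 2: 32 dual-socket, quad-core Nehalem
# nodes in 7U. Clock speed and flops/cycle are assumed, as above.
NODES = 32
CORES_PER_NODE = 2 * 4
GHZ = 2.93
FLOPS_PER_CYCLE = 4

peak_gf = NODES * CORES_PER_NODE * GHZ * FLOPS_PER_CYCLE
print(f"Peak per 7U chassis: {peak_gf / 1000:.1f} TF")   # ~3.0 TF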

The management module is the secret sauce for the platform. It’s designed to elevate the architecture from that of typical commodity cluster to more of an MPP-like experience. The module supports a global barrier network that enables fast synchronization of jobs running on separate nodes, and a global interrupt network that reduces the influence of OS jitter by synchronizing process scheduling over the entire system. The company claims this capability allows systems to scale up to as many as 25,000 nodes.

T-Platforms also sells a line of Cell BE-based offerings (server, workstation, and two-node mini-cluster) using the latest PowerXCell 8i chip, along with a home-grown Cell compiler. And if you’re not into hardware, the company also offers an HPC on-demand service.

A somewhat similar offering to the Russian T-Blade 2 was also unveiled at ISC by Eurotech, a company based in Amaro, Italy. If you haven’t heard of Eurotech (and I hadn’t), its stated mission is to “integrate state-of-the-art computing and communication technologies into miniaturized and user-friendly solutions to improve everyday life, making it simpler, safer and more comfortable.” Up until now, the company has mostly been focused on embedded and wearable computing, but has dabbled in HPC from time to time. Check out the Eurotech Wikipedia entry for its unusual history.

As for the Eurotech super, which is named Aurora, we again find a custom-built, high-end cluster with a lot of cutting-edge technology. Like the Bull and T-Platforms offerings, Eurotech is using dual-socket Nehalem blades that can be aggregated into petascale-sized machines. The Aurora design is on par with T-Platforms for computational density, offering 3 peak teraflops per chassis.

But unlike the Russian super, the Aurora machine is water-cooled (must be the warmer Italian climate). Each blade comes with up to 160 GB of solid state disk storage for application I/O and checkpointing. There’s also something called a “programmable high performance accelerator” integrated onto the motherboard, but there’s no hint of what it actually is.

Again it looks like a lot of the innovation went into the network interconnect, which consists of a 60 Gbps 3D Torus integrated with a QDR InfiniBand network, and three synchronization networks. The 3D Torus comes with a programmable network processor if you desire more customized management of the system interconnect. If I were a network engineer, I could probably tell you what this all means, but I’ll have to leave it as an exercise for the reader.

Unfortunately, none of these interesting machines are going to be shipping into the North American market anytime soon. Bull and Eurotech will be focusing mostly on the Western European HPC market, while T-Platforms intends to concentrate on Russia and the Commonwealth of Independent States (CIS), i.e., the former republics of the Soviet Union. The Russians are also interested in finding a European partner to give them access to that lucrative market.

It’s gratifying to see HPC tech innovation occurring outside the US, especially in these economically challenging times. It will be worth watching to see how these companies fare in their more regional markets, and whether they are able to compete against global OEMs like IBM, HP, Dell, SGI and Cray. Stay tuned.

Sun Revamps HPC Offerings
Tue, 14 Apr 2009 – https://www.hpcwire.com/2009/04/14/sun_revamps_hpc_offerings/

Even as analysts and customers wonder whether Sun Microsystems will continue to survive on its own — or whether it wants to — the company continues to push new products out the door. On Tuesday, at Sun’s Partner Summit in Las Vegas, the company introduced a number of new offerings, mostly centered on the recently launched Xeon 5500 (Nehalem EP) chips. The products aimed at the HPC space include a rack server, two new Constellation-class blades, a Lustre-based storage system, a number of rather interesting InfiniBand products, and a Sun cooling door. According to Michael Brown, Sun’s marketing manager for HPC, the upgraded product set represents “almost an end-to-end revamp of the HPC offerings.”

Perhaps the simplest new offering is the X2270 rack server, a 1U dual-socket box that incorporates the new Nehalem EP chips. The X2270 is basically an upgrade of the X2250, which used the previous generation Harpertown processors. The rack servers are targeted at mid-sized commercial HPC environments, such as you might find in financial services or electronic design.

But Sun has directed most of its engineering smarts at the Constellation blade systems, where the new Nehalem processors and Quad Data Rate (QDR) InfiniBand technology have been used to build a more advanced platform for high-end clusters. The key new product is the Nehalem-equipped X6275, a dual-node blade in which each node can house two quad-core chips. (In essence, Sun built a four-socket blade with dual-socket hardware.) Doing the math, that means each blade provides 16 cores and, thanks to Nehalem’s simultaneous multithreading, up to 32 threads.

The blade fits into Sun’s 6048 chassis, and because it’s a dual-node setup, 96 nodes (768 cores) can be squeezed into a single 42U enclosure. At this maximum configuration, a single chassis can deliver 9 teraflops. Although that’s rather impressive, according to Sun’s own Web site, that would work out to only 0.8 teraflops better than an enclosure fully populated with the X6440, the company’s four-socket AMD quad-core blade. Note also that the Intel blade memory maxes out at 192 GB (that is, as soon as the 8 GB DDR3 server DIMMs hit the streets), while the AMD blade can house up to 256 GB, although the latter uses the somewhat slower DDR2 memory.
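
The 9-teraflop enclosure figure is easy to reconstruct. In the sketch below, the 2.93 GHz clock is my guess at the top-bin Nehalem EP part, not a number Sun quoted.

# Peak for a 6048 enclosure fully populated with X6275 blades: 96 dual-socket
# nodes of quad-core Nehalem EP. The clock speed is an assumed value.
NODES = 96
CORES_PER_NODE = 2 * 4
GHZ = 2.93                # assumption
FLOPS_PER_CYCLE = 4       # double precision, SSE

cores = NODES * CORES_PER_NODE
peak_tf = cores * GHZ * FLOPS_PER_CYCLE / 1000
print(f"Cores per enclosure: {cores}")            # 768
print(f"Peak per enclosure: {peak_tf:.1f} TF")    # ~9.0 TF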

What really sets the X6275 apart are the new networking and I/O capabilities, which will allow the blade to inhabit petaflop-sized systems with thousands of CPUs. Each node includes an onboard QDR InfiniBand host channel adapter (HCA), Gigabit Ethernet, and a PCIe ExpressModule slot. A SATA interface is also available to connect to an optional Sun flash module, which offers 24 GB of high performance storage per node. It’s designed for users interested in saving state, having a scratch data area, or booting an OS. Since the flash module is hooked up to a SATA controller, to the apps it looks like a hard drive.

The other new blade is the X6270, which is less computationally dense and is geared for more general-purpose HPC and commercial duty. This one is a full-height blade that is a single-node version of the X6275, and it can hold up to 144 GB of memory. Since it has only half as many cores as its dual-node sibling, the X6270 actually offers a better bytes-per-flop ratio. It also provides four interfaces for on-board disks, with optional RAID, plus two GigE ports. The better memory ratio, additional I/O and extra networking make this blade more versatile, and it would tend to be a better fit where compute density is not the overriding factor, as, for example, in the head node of an HPC cluster.

Along with the blades, Sun announced a number of new Sun-branded InfiniBand products. The first one is a QDR InfiniBand Network Express Module (NEM) for the 6048 chassis. It’s essentially an InfiniBand leaf switch that can link up to 24 nodes. Since four of these modules fit in a single 6048 enclosure, all 96 nodes can be accommodated without any external switch hardware. The NEMs can be directly connected to datacenter InfiniBand switches in a fat-tree topology or to other NEMs in a 3D torus mesh. The goal here is to reduce cables and extra switch hardware in these ultra-dense blade setups.

Sticking with the InfiniBand theme, Sun also introduced a PCI Express QDR InfiniBand expansion module, which can provide a second QDR link via the PCIe interface. The additional link means you can have 80 Gbps of InfiniBand per node, which could be split between compute and storage, or simply aggregated for additional bandwidth.
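
Here is how the 80 Gbps figure comes together: two 4x QDR ports at 10 Gbps of signaling per lane. The usable data rate after QDR InfiniBand's 8b/10b line coding is shown for comparison; how a site splits that between compute and storage traffic is up to them.

# Aggregate InfiniBand bandwidth per node with the onboard QDR HCA plus the
# PCIe QDR expansion module: two 4x ports at 10 Gbps per lane (signaling).
PORTS = 2
LANES_PER_PORT = 4
GBPS_PER_LANE = 10                    # QDR signaling rate

signaling_gbps = PORTS * LANES_PER_PORT * GBPS_PER_LANE
data_gbps = signaling_gbps * 8 / 10   # 8b/10b line coding overhead
print(f"Signaling rate: {signaling_gbps} Gbps")   # 80 Gbps
print(f"Data rate:      {data_gbps:.0f} Gbps")    # 64 Gbps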

In addition, the company previewed its “Project M9,” a 648-port InfiniBand datacenter switch. The hardware will be based on the same technology as Sun’s current 3,456-port switch used in TACC’s Ranger supercomputer cluster. According to Brown, the M9 will use 75 percent less space than traditional InfiniBand switches and will make use of 12X InfiniBand cabling, which will allow it to route three connections per cable. Once again, the idea is to minimize the hardware footprint. Brown notes the upcoming switch could be used to hook together non-Sun servers and storage.

Also announced was Sun’s new cooling door, which was previewed in November at SC08. The door fits in the rear of a 6048 chassis and relies on passive cooling, so no additional fans or power are required. There are two flavors: one that uses chilled water and one that uses a refrigerant gas. They are designed to handle a thermal load of up to 35 kW per rack. Since studies show this type of system can reduce cooling costs by up to 84 percent, more and more datacenters are turning to liquid cooling to cut down on power consumption.

On the storage side, Sun has unveiled an integrated Lustre storage system, which is designed to scale from 48 TB up to multiple petabytes. The storage component options include the Sun Fire X4540 and X4250, and the Sun Storage J4400 and J4200 storage arrays. Expansion is accomplished by adding more storage modules. The idea here is to offer a pre-packaged Lustre solution for HPC apps. Since the system is not tied to Sun server gear, Brown thinks there’s an opportunity to sell these systems to HPC users whose systems are under-configured from a storage perspective. He says Sun has some early customers for the systems, but they haven’t gone public yet.

A number of customers have already signed up for Constellation supers based on the new hardware, including the Australian National University (ANU), Australia’s Bureau of Meteorology, South Africa’s Centre for High Performance Computing (CHPC) and the University of Zurich. These are in addition to installations of Nehalem-based blades at Sandia National Laboratories, Forschungszentrum Jülich, and RWTH Aachen University, which were announced back in November.

As far as how these new offerings will play in a depressed economy, Brown thinks that despite the current downturn, there’s still a lot of demand for high performance computing gear. “We’re seeing very strong uptake in the HPC area,” he says. “We’ve sold over two petaflops of HPC solutions based on Sun blade design.” Brown says he’s spoken with a number of people in the higher education sector that are applying for supplemental NSF and NIH funding that will be drawn from the US government’s stimulus package. Brown realizes that not all of that money will be heading to Sun, but he’s optimistic that the company will see its fair share.

Overshadowing all these announcements is the question of whether Sun plans to sell the business or go it alone. Since the IBM deal devolved into an April Fools’ joke, Sun’s uncertain status has left customers wondering about the future of the company. Sun is not speaking publicly about its next move, so for the time being, it looks like the company will let its products do the talking.

Today, Appro launched a blade-style cluster server called GreenBlade that the company hopes will steal some thunder – and some business – from the CX1, Cray’s new Windows-based blade server. And Appro is also hoping to take some revenues from traditional blade and rack server makers like Hewlett-Packard, IBM, Dell, and Sun, who peddle their products to run supercomputing applications too.