HPCwire » iDataPlex
http://www.hpcwire.com
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

IBM Dials Up Density for HPC and Hyperscale
http://www.hpcwire.com/2013/09/11/ibm_dials_up_density_for_hpc_and_hyperscale/
Wed, 11 Sep 2013

Today IBM announced NextScale, which will eventually take the place of its iDataPlex systems. Tapping the power of the new Ivy Bridge processors, coupled with eventual support for a host of accelerated options (GPUs, Xeon Phi and likely other processor choices), the company also put its stake in the ground for hyperscale and HPC.

IBM is casting an ever-widening net to cover a broader range of workloads starting with today’s announcement of its NextScale system, which is designed to use a stripped-down x86 lure to reel in everything from the cloudy heights to high performance computing.

That lure is surprisingly simple–and all by design. Server buyers on all sides of the compute spectrum are being pulled into the swift current of stripped-down, purpose-driven and cost-conscious hardware. IBM has remained one of the few that continued to swim upstream of this raw-box approach–but it’s catching up–and throwing some performance awareness into the mix.

Adding to IBM's x86 drive, the star of NextScale is the Ivy Bridge processor (one or two per node), nestled within a half-width NeXtScale nx360 server. These snap snugly into the host n1200 enclosure (a 6U, 12-bay chassis), and a standard 19-inch rack can host 84 of these pared-down boxes, or 2,016 cores.
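
As a rough check on those density figures, here is a minimal sketch of the arithmetic; the 42U rack height and the 12-core count per socket are inferences from the stated totals rather than numbers IBM quoted.

```python
# Minimal sketch of the rack math implied above (the 42U rack and the
# 12-core Ivy Bridge parts are assumptions inferred from the totals).
CHASSIS_HEIGHT_U = 6
BAYS_PER_CHASSIS = 12
RACK_HEIGHT_U = 42          # standard 19-inch rack, assumed
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 12       # assumed Xeon E5-2600 v2 parts

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U        # 7
nodes_per_rack = chassis_per_rack * BAYS_PER_CHASSIS        # 84
cores_per_rack = nodes_per_rack * SOCKETS_PER_NODE * CORES_PER_SOCKET

print(nodes_per_rack, cores_per_rack)   # 84 nodes, 2016 cores
```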

And "standard rack" is the key phrase here, at least for those who bit on the iDataPlex line but were scared off by the strange manipulations of space and time necessary to maneuver around its (useful but) non-standard design. In other words, IBM has taken a "best of all worlds" approach with the NextScale system, pulling the MVP features from iDataPlex and Flex alike to create something that might be able to go head-to-head with the wave of hyperscale (and hypercheap) solutions that are flooding the market.

This new server strain puts IBM in a much larger petri dish with competitive offerings from HP (namely the SL6500 series and the half-width SL390s) as well as similar lines from Dell (the C8000 series, in particular). But the difference here, says System x Product Manager Gaurav Chaudhry, is that IBM can offer integration with some of its key initiatives and products, including the workload management tools it recently acquired from Platform Computing, full GPFS support and, for the cloudy side, ready-made APIs and binaries for xCAT, which, while free to begin with, is actively supported by IBM.

Dubbed the "economical addition to the System x family," this approach, IBM says, offers the density, performance and flexibility to support the diverse targeted workloads. For the HPC crowd, there are certainly some features worth noting—but generally speaking, this is a pared-down approach that lets users build what they need at a price point competitive with other bare-bones boxes from rivals. But there are still some things that are off in the future—enough so that we might not see many of these finding their way into the TOP500 before next ISC.

The NextScale announcement is really about possibility and the future, at least for HPC buyers. So far, this is an Intel-only offering, with support for GPUs and Xeon Phi coming in early 2014. While there were no timeframes stated for other additions, including Power, ARM or other processors, it stands to reason that IBM wouldn’t want to be left behind as others march to the beat of customer demand.

With that said, snapping in spicy elements like GPUs or Xeon Phi means more sophisticated cooling, and one notable missing element in NextScale at launch is direct water cooling. The version announced today is air-cooled, with the possibility of passive water cooling. Chaudhry says that direct water cooling is coming right in line with the other accelerator and coprocessor capabilities.

That omission aside, there are some other notable elements that will appeal to the HPC set. For instance, IBM will offer well-rounded support for InfiniBand, including FDR and QDR. There is no integrated I/O or switching and no chassis-level management, but the attractive part of iDataPlex, namely front access to almost all components, was carried over.

For the cloud and general datacenter users, there will be two standard Gigabit Ethernet ports, and IBM will be glad to sell 10GbE as an option as well. While HPC is all about IOPS, latency and general performance, Chaudhry says that cloud customers simply want to get up and running as soon as possible—it's all about time to delivery, he says.

As IBM’s David Watts noted of the new nx360 server addition:

The IBM NeXtScale nx360 M4 compute node contains only the essential components in the base architecture…

1 or 2 Intel Xeon E5-2600 v2 series processors

Up to 8 ECC DIMMs operating up to 1866 MHz providing a total memory capacity of up to 128 GB

IBM took its cues from the positive responses around offering front accessibility (as with the iDataPlex systems) and carried that over to NextScale. Chaudhry freely admits that while iDataPlex was made for its own configuration and offered little flexibility, the idea with NextScale was to "keep all the things we liked and get rid of other things, like the full-width server versus this 8.5 inch wide but deep (versus shallow) approach."

On that "best of all worlds" note touched on earlier, Chaudhry highlights the difference between the Flex and NextScale lines as the increased ability to go as "vanity-free" as one wishes. "So say if a customer isn't looking for the integrated switching built into the chassis, they have the flexibility to pick their own," he says.

The key concepts behind this launch are flexibility, simplicity and scale—in short, a move away from the tricky design and implementation details of iDataPlex but with more room to grow than Flex might offer for some users. Gaurav Chaudhry, IBM product marketing manager for the System x line, says that this isn't the immediate end of the line for iDataPlex. It simply marks an evolution toward flexible systems that can meet the low latency, high performance, and I/O demands of HPC while remaining lightweight and simple enough for cloud users to hop into without a great deal of effort. The company still has a number of iDataPlex systems to support, which will continue for at least 18 months, but the hyperscale, low-cost NextScale is the real battlefield for IBM's push—and the shove will come down to price.

As a side note, it seems a shame that the Flex line was named "Flex" rather than NextScale—the true meanings of the two seem flip-flopped, with Flex emphasizing scalability and NextScale pushing flexibility.

Yellowstone Helps Predict Air Pollution
http://www.hpcwire.com/2013/03/28/yellowstone_helps_predict_air_pollution/
Thu, 28 Mar 2013

The Yellowstone supercomputer is a 1.5-petaflop (peak) iDataPlex system. The machine was first tasked with 11 compute-intensive projects as part of the Accelerated Scientific Discovery (ASD) initiative.

The Yellowstone supercomputer is a 1.5-petaflop (peak) iDataPlex system. With 72,288 processor cores, the machine is powerful enough to rank No. 13 on the TOP500. It was first tasked with 11 compute-intensive projects as part of the Accelerated Scientific Discovery (ASD) initiative.

Yellowstone is based on IBM's iDataPlex architecture and can deliver 29 times the workload throughput of NCAR's Bluefire, which was decommissioned on January 31. It is capable of performing one-and-a-half quadrillion operations a second and stores eleven petabytes of information, one thousand times the total print holdings of the Library of Congress.
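
For a sense of where that peak figure comes from, here is a back-of-envelope sketch; the per-core clock and vector rate are assumptions about the Sandy Bridge parts, not figures from NCAR.

```python
# Rough plausibility check of Yellowstone's 1.5-petaflop peak (a sketch;
# the per-core assumptions are inferred, not official NCAR math).
cores = 72_288
clock_ghz = 2.6            # assumed Sandy Bridge clock
flops_per_cycle = 8        # assumed AVX double-precision rate per core

peak_pflops = cores * clock_ghz * flops_per_cycle / 1e6   # gigaflops -> petaflops
print(round(peak_pflops, 2))   # ~1.5 petaflops, matching the stated peak
```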

The ASD initiative provides these large-scale computational resources to a small number of projects for a short time period. These projects help give the system a workout and allow for the pursuit of scientific objectives that otherwise would not be possible through normal allocation opportunities.

These projects, chosen at the National Center for Atmospheric Research (NCAR), were part of the system’s original purpose. The supercomputer carried out large amounts of computing over a two-month period, investigating timely issues surrounding Earth and its atmosphere, such as creating better long-range weather forecasts and closing the spatial gap between model cloud dynamics and cloud microphysics.

Yellowstone has customized Geyser and Caldera clusters, which are specialized data analysis and visualization resources within Yellowstone's data-centric environment. These systems provide a 20-fold increase in the Computational and Information Systems Laboratory's (CISL) dedicated data analysis and visualization resources. With 16 large-memory nodes and 1 TB of memory per node, Geyser is designed to facilitate large-scale data analysis and post-processing tasks, including 3D visualization; Caldera also has 16 nodes, with two NVIDIA Tesla GPUs per node, to support parallel processing, visualization activities, and development and testing of general-purpose GPU code.

Taken together, these components improve capabilities central to NCAR’s mission, such as supporting the development of climate models, weather forecasting, and other critical research.

One of these projects selected by NCAR involved predicting North American air quality through the year 2055. Gabriele Pfister of NCAR led the project, which was allocated 6.25 million core hours on Yellowstone. The study performed simulations with the nested regional climate model with chemistry (NRCM-Chem) to study possible changes in weather and air quality over North America between the present day and two future time periods: 2020-2030 and 2045-2055. This will provide insights into expected future changes related to air quality and will also be used for dynamical downscaling (of meteorology and air quality) of global climate simulations performed at NCAR.
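
To put that allocation in perspective, a quick sketch converts core hours into whole-machine time, assuming all of Yellowstone's cores are in play.

```python
# Illustrative sense of scale for the 6.25-million-core-hour allocation
# (a sketch; the whole-machine figure assumes all 72,288 cores are used).
core_hours = 6.25e6
yellowstone_cores = 72_288

whole_machine_hours = core_hours / yellowstone_cores
print(round(whole_machine_hours, 1))   # ~86.5 hours of the full machine
```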

Hartree Centre Puts $45 Million Toward UK Innovation
http://www.hpcwire.com/2013/02/21/hartree_centre_puts_45m_to_boosting_uk_innovation/
Thu, 21 Feb 2013

With $45 million in government funding, the research center will develop software to make supercomputers more efficient and to help process data from the SKA, the world's largest radio telescope. The technology is being developed with industry partners, and will be made available to scientific and industrial organizations in the UK.

On February 1, 2013, the UK Chancellor of the Exchequer, Rt. Hon. George Osborne, visited the Science and Technology Facilities Council (STFC) site in Daresbury to formally open the Hartree Centre, which will focus on developing software to improve the energy efficiency of supercomputers. Or, to put it another way, "Osborne pulled the string and opened the curtain and unveiled the plaque," says Mike Ashworth, head of the Hartree Centre.

The ceremonial opening of the Hartree Centre marks a new phase of government and industry collaboration in the development of high-performance computing in the UK. A primary goal is to bring together industry, academia and government organizations to use supercomputers to increase the competitiveness of UK industry.

The ceremony also came with a pledge for funding: more than $45 million to create energy-efficient computing technologies for industrial and scientific applications, especially for supercomputers handling big data projects. About $17 million will go to creating software for the Square Kilometre Array (SKA), the world's largest radio telescope. The rest goes into two camps: next-generation software for Grand Challenge science projects, and software to allow industry to make better use of high-performance computing and computational science.

The software research will focus on creating new code to efficiently exploit new computer architectures that will be emerging in the next five to 10 years. "We're trying to structure that code in a flexible way so that it's not tied into any one architecture, but reveals multiple levels of parallelism, so that we're ready to exploit large numbers of lightweight cores [used as] accelerators," says Ashworth.

The Hartree Centre has not yet decided how the money will be finally allocated, but it’s likely to include research on Xeon Phi processors, possibly NVIDIA’s latest generation of Kepler GPUs, and very probably FPGAs.

Yes, Ashworth sees new potential in FPGAs for supercomputing. STFC researchers first looked at using FPGAs for HPC about 10 years ago, but the chips weren’t very fast and were difficult to program. They required programming at the hardware level using VHDL. Now, of course, the chips are much faster and support double-precision, which is required for a lot of scientific applications. Ashworth notes that he’s “very keen on exploring” technology from Maxeler, which has high-level interfaces to FPGAs. He wants to explore how to make FPGAs useful for the kinds of research that Hartree will emphasize.

Energy efficiency is a very prominent part of the center’s mandate. “We’re interested in looking at how key applications perform in terms of their energy efficiency,” says Ashworth. “In the past, computing efficiency meant FLOPS. Now it’s FLOPS per watt. In the past it was time to a solution. Now we’re more interested in the number of watts to achieve a certain solution.”
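
A minimal sketch of the metric Ashworth describes, energy to solution rather than raw speed, using two purely hypothetical systems:

```python
# A minimal sketch of "watts to achieve a certain solution": energy to
# solution is power multiplied by runtime. The two systems below are
# hypothetical, purely for illustration.
def energy_to_solution_kwh(power_kw: float, runtime_hours: float) -> float:
    """Energy consumed to reach a solution, in kilowatt-hours."""
    return power_kw * runtime_hours

fast_but_hungry = energy_to_solution_kwh(power_kw=500, runtime_hours=10)   # 5000 kWh
slower_but_lean = energy_to_solution_kwh(power_kw=200, runtime_hours=20)   # 4000 kWh

# The slower system wins on energy to solution despite the longer runtime.
print(fast_but_hungry, slower_but_lean)
```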

This is inspired both by government targets to reduce carbon emissions and to save money – which, of course, go hand in hand, since both involve reducing energy consumption.

The center has some pretty heavy-duty hardware to work with: the UK's most powerful supercomputer, already being made available for research by industry and scientific organizations through STFC. In mid-2012, STFC installed an IBM Blue Gene/Q system, named Blue Joule. It consists of seven racks of servers with 114,688 1.6 GHz cores and 112 TB of RAM. When it was fired up last summer, it reached 1.2 petaflops, the first computer in the UK to pass 1 petaflop. That rated it 13th on the TOP500, but it has since slipped to 16th on the most recent list.
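
As a plausibility check on that core count, the sketch below uses the standard Blue Gene/Q rack layout, which is an assumption here rather than a figure from the article.

```python
# Quick check on the Blue Joule core count (a sketch; the per-rack
# breakdown is the usual Blue Gene/Q layout, assumed rather than quoted).
racks = 7
nodes_per_rack = 1024
cores_per_node = 16

total_cores = racks * nodes_per_rack * cores_per_node
print(total_cores)   # 114688, matching the stated figure
```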

That equipment is accompanied by an IBM iDataPlex system, dubbed Blue Wonder, with 8,192 Sandy Bridge cores for 158.7 teraflops of processing power.

STFC didn't just buy the computer, however; it got IBM as a partner. "Rather than having vendors just supply us with hardware, we specifically said in the procurement that they must enter into a collaboration with us," says Ashworth.

In fact, there are several corporate collaborators involved, including Intel, OCF, Mellanox, DataDirect Networks and ScaleMP. Each is contributing some combination of components, services, technical expertise and/or business development expertise. IBM and OCF, for example, help the Hartree Centre find corporate partners to set up joint projects. “When we go into a room with an industrial potential partner, we’ll go in with somebody from IBM,” says Ashworth. “That adds very much to the prospects of landing that business.”

Those partnerships work both ways. One of Hartree’s mandates is to help UK companies make better use of high-performance computing and computational science. To that end, he wants to focus research on accelerators that can help achieve higher performance at lower cost.

“We see the Hartree Center as a testing ground for novel architectures,” says Ashworth. “We can buy a piece of hardware, a development platform, and make it available to academics, make it available to industry. In collaboration with our expertise, we learn how to use the hardware, and set up joint projects with people we believe would benefit from that hardware, and push forward the UK’s ability to exploit these new technologies for the future. We’re looking at a 5-10 year time frame to leverage a lot of these technologies.”

The research priorities are environment, energy, developing new materials, life sciences and human health, and security. One of the Grand Challenge projects at STFC, for example, is a three-way collaboration between STFC, the Met Office and the Natural Environment Research Council (NERC) to develop brand new code for weather forecasting and for climate change studies using supercomputers. Industrial applications might include projects such as computer modeling to create, say, new industrial adhesives or new drugs.

The UK government expects the money invested in Hartree will pay off many-fold by helping industry exploit supercomputing technology to become more competitive.

The Weekly Top Five
http://www.hpcwire.com/2011/05/12/the_weekly_top_five/
Thu, 12 May 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the Cray/Sandia partnership to found a knowledge institute; RenderStream's FireStream-based workstations and servers; NVIDIA's latest CUDA centers; Reservoir Labs and Intel's extreme scale ambitions; and Jülich Supercomputing Centre's new hybrid cluster.

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the Cray-Sandia partnership to found a knowledge institute; RenderStream's FireStream-based workstations and servers; NVIDIA's CUDA center growth; Reservoir Labs and Intel's extreme scale ambitions; and Jülich Supercomputing Centre's new hybrid cluster. Plus a bonus section.

Cray, Sandia Combine Efforts to Foster Knowledge Discovery

Prominent supercomputer vendor Cray Inc. and Sandia National Laboratories have come together to establish the Supercomputing Institute for Learning and Knowledge Systems (SILKS). This is a Cooperative Research and Development Agreement (CRADA), a private-public collaboration, which aims to promote knowledge discovery, data management and informatics computing. SILKS is located at Sandia’s Albuquerque-based headquarters and draws from its founders’ hardware and software resources as well as the experience and knowledge of their research staff.

The founding partners declared three primary goals for the endeavor:

1. Accelerate the development and application of high performance computing (HPC) technologies focused on solving knowledge discovery, data management and informatics problems at scale.

2. Collaborate to overcome the implementation barriers to a wider adoption of data-driven HPC computing technologies in knowledge discovery, data management and informatics.

3. Apply the use of these technologies to enable discovery and innovation in science, engineering and for homeland security.

The broad-based agenda will tackle a range of technology domains, including software, hardware, services, education and outreach. Representatives from both Sandia and Cray anticipate the collaborative effort will result in cutting-edge technologies and solutions.

RenderStream Releases AMD-based Servers and Workstations for OpenCL

Addressing the need for GPU-accelerated HPC, 3D workstation specialist RenderStream has launched its AMD Radeon-based servers and workstations for OpenCL. These 21.6-teraflop systems also support OpenGL- and Brooks-based applications and product development. According to a company statement, "GPGPU high-performance computing using AMD GPUs shows great potential for information security, medical imaging, computer graphics and rendering, server side rendering, finite-difference-time-domain (FDTD), electro-magnetics, physics, bio-science and EDA."

RenderStream’s AMD Radeon HD 6970 based VDACTr8 and its HD 6990 based VDAC4x2 implement 1,536 stream processors and eight GPUs per system, providing access to 12,288 cores and 21.6 teraflops of computing power when operating at an over-clocked peak performance.
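
A back-of-envelope check on the 21.6-teraflop claim follows; the engine clock below is inferred from the stated totals, not a number RenderStream published.

```python
# Sketch of the 21.6-teraflop figure (the ~880 MHz engine clock is an
# assumption inferred from the totals, not from the announcement).
stream_processors_per_gpu = 1536
gpus_per_system = 8
flops_per_sp_per_cycle = 2      # single-precision multiply-add
clock_ghz = 0.88                # assumed engine clock

peak_tflops = (stream_processors_per_gpu * gpus_per_system
               * flops_per_sp_per_cycle * clock_ghz) / 1000
print(round(peak_tflops, 1))    # ~21.6 single-precision teraflops
```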

The official announcement illustrates the server’s performance-boosting capabilities with this real-world example from the field of information security:

Using the integer-based oclHashCat, RenderStream’s customers are seeing near linear scaling in computational power which simply trounces the 4,096 cores and 12.6 teraflops of our GTX 580 based VDACTr8. In this example the HD 6970 and HD 6990 based VDACTr8 evaluated over 45 billion solutions per second versus 18 billion for the GTX 580 based systems, depending on the implementation.

RenderStream offers general purpose GPU systems as well as HPC-specific GPU-based platforms outfitted with either NVIDIA Tesla or AMD FireStream graphics processors.

NVIDIA CUDA Centers Number Four Hundred

This week, NVIDIA announced the addition of 35 new CUDA Research Centers and CUDA Teaching Centers, bringing the total number of such centers to 400. The latest partner institutions come from 14 countries, evidence of parallel computing’s — and NVIDIA’s — global reach.

The centers will leverage the parallel computing power of NVIDIA’s CUDA-based GPUs to tackle a bevy of challenging computing issues, as well as teach thousands of students cutting-edge GPU programming skills. CUDA Research Centers employ GPU computing across multiple domains, while the CUDA Teaching Centers have incorporated GPU computing techniques into their main computer programming curriculum. NVIDIA explains that its CUDA Research Center Program “fosters collaboration at institutions that are expanding the frontier of parallel computing.” Partners benefit from “exclusive events with key researchers and academics, a designated NVIDIA technical liaison, and access to specialized online and in-person training sessions.”

For a full listing of the newest CUDA Research Centers and CUDA Teaching Centers, see the official announcement.

Reservoir Labs, Intel Partner on DARPA UHPC Project

Reservoir Labs announced it will collaborate with Intel researchers on the development of compiler technologies and architectures in order to create viable extreme scale computing by the year 2018. The duo have signed a subcontracting agreement that brings Reservoir Labs research scientists and technologies to Intel’s team to develop Extreme Scale computing technologies as part of DARPA’s Ubiquitous High Performance Computing (UHPC) research program.

According to the release: “The goal of the UHPC program is to develop 1 PFLOPS (HPL) single cabinet systems, including self-contained cooling, that overcome significant energy efficiency, security, and programmability challenges. Essentially this can be viewed as integrating the computational capacity of today’s largest supercomputers in 100x less area, with 100x less power, and with significant increases in programmability and applicability.”

Intel’s UHPC team is tasked with supporting and developing technologies to enable the US to build extreme scale computers by the year 2018. In order for this challenging goal to come to fruition, major breakthroughs in hardware and software design will be necessary, far beyond the level of current commercial offerings. Just improving the energy efficiency levels of computers by more than 100x will require significant advancements.

If these goals are achieved, the resulting technology would benefit embedded applications, such as those found in ship, land, and air-based Department of Defense systems. Extreme scale systems would also further other DoD objectives, such as Intelligence Surveillance Reconnaissance (ISR), Electronic Warfare (EW), Integrated Air and Missile Defense (IAMD), battle management and planning, and cyber security.

The initial contract calls for the project to furnish a "proof of concept" implementing extreme scale technologies in a first-pass system design by 2012. A second phase is also outlined which, if DARPA elects to continue, could lead to a completed system design in the 2014 timeframe. The full scope of the contract specifies the delivery of a prototype extreme scale system in 2018.

Jülich Supercomputing Centre Debuts Hybrid System

A new GPU-accelerated system will support high-level research at the Jülich Supercomputing Centre (JSC) in Germany. The hybrid cluster, named JUDGE, for “Jülich Dedicated GPU Environment,” relies on GPUs to boost processing power, while minimizing energy consumption. JUDGE will be used for data-intensive workloads in the fields of biology, medicine and environmental research.

The cluster was built using 54 IBM System x iDataPlex server nodes with 12 cores each and 96 GB memory, as well as 108 NVIDIA M2050 GPUs. The release describes IBM iDataPlex as “a scalable system that can significantly reduce energy consumption, cooling and space requirements.”

Martin Hiegl, the team leader for Deep Computing Sales at IBM Germany, commented, “Together with JSC’s other powerful supercomputers, the new JUDGE cluster supports Germany’s ability to tackle a wide range of scientific and technical challenges.”

Sales leader for HPC at NVIDIA, Stefan Kraemer, believes the hybrid design, which relies on the GPU’s accelerative force, will be the template for the coming exascale generation. “The JUDGE cluster is a good example of how we need to continue to develop computers in the future, following the target of exascale computing. This is valid not only in regard to performance, but also to energy consumption and energy efficiency,” he states, adding: “Pilot projects like JUDGE play a key role in this process and are a key step on the way to hybrid systems.”

JUDGE is not the first IBM/JSC collaboration. The duo united to create the QPACE supercomputer, which consistently ranks among the top ten of the Green500 list of the world's most energy efficient supercomputers, and also worked together on the Blue Gene/P-based JUGENE, one of the most powerful computers in Europe with a peak performance of more than one petaflop.

Bonus News:

There was such a grand allotment of noteworthy news this week that we are presenting our first ever bonus link section:

The Week in Review
http://www.hpcwire.com/2010/07/08/the_week_in_review/
Thu, 08 Jul 2010

OCF doubles computing power for University of Edinburgh researchers, and the Aquasar system with its innovative water-cooling technology deploys at ETH Zurich. We recap those stories and more in our weekly wrapup.

UK HPC integrator OCF plc has completed significant upgrades to the University of Edinburgh's HPC system, known as "Eddie" (get it?). The enhancements have more than doubled the computing power available to multi-disciplinary researchers, enabling them to run more complex computer simulations, more quickly. The new Eddie will benefit innovations in fields such as bioinformatics, speech processing, particle physics, material physics, chemistry, cosmology, medical imaging and psychiatry.

Reflecting a current, and much needed, trend, the new system will generate less heat, despite its increased power, and will actually use less energy than its previous iteration. This is due in part to energy-efficiency upgrades in the Intel Westmere platform, as well as water-cooling features that remove all of the heat generated by the system close to the source. Additionally, the Scottish air helps cool the water year-round.

This is the first UK deployment of Intel’s Westmere E5620 Quad Core processors in IBM iDataPlex servers. According to the press release, the HPC system design incorporates the following:

40 TB of high performance data storage using IBM System Storage DS5100 and a combination of fibre channel and solid state drives, fully integrated with an existing 90 TB of SATA storage using IBM’s General Parallel File System (GPFS).

The new-and-improved Eddie went live this month, and another renovation is already scheduled for 2011, with a plan to increase computing power five-fold. Design, build, configuration, implementation and support of the system upgrade will again be provided by OCF.

First-of-a-Kind Water-Cooling Technology Deployed

For those of you keeping an eye on the novel water-cooling technology that is Aquasar, the system is now fully operational. IBM announced this week that it has delivered an innovative hot water-cooled supercomputer to the Swiss Federal Institute of Technology Zurich (ETH Zurich). The system, dubbed Aquasar, consumes up to 40 percent less energy than a comparable air-cooled machine, and by using the waste heat to supply warmth to university buildings, cuts carbon dioxide emissions up to 85 percent.

Aquasar began development a year ago as part of IBM's First-Of-A-Kind (FOAK) program. The supercomputer consists of special water-cooled IBM BladeCenter servers and also includes traditional air-cooled IBM BladeCenter servers, to allow for direct comparisons. Together, the system delivers six teraflops of compute power with an energy efficiency of 450 megaflops per watt. Nine kilowatts of thermal power, waste heat from the system, are fed into ETH Zurich's building heating system.
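
A rough sketch relating the stated efficiency to total power draw, assuming the 450 megaflops-per-watt figure applies to the whole six-teraflop system:

```python
# Illustrative only: derive total power from the stated efficiency,
# assuming 450 MF/W holds across the full six-teraflop system.
peak_flops = 6e12                 # 6 teraflops
flops_per_watt = 450e6            # 450 megaflops per watt

power_kw = peak_flops / flops_per_watt / 1000
print(round(power_kw, 1))   # ~13.3 kW total draw; roughly 9 kW of it is
                            # recovered as building heat, per the article
```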

Water is an excellent coolant, with the ability to remove heat about 4,000 times more efficiently than air. An overview of the liquid-cooling process is included in the announcement:

The processors and numerous other components in the new high performance computer are cooled with up to 60 degrees C [140 degrees F] warm water. This is made possible by an innovative cooling system that comprises micro-channel liquid coolers which are attached directly to the processors, where most heat is generated. With this chip-level cooling the thermal resistance between the processor and the water is reduced to the extent that even cooling water temperatures of up to 60 degrees C ensure that the operating temperatures of the processors remain well below the maximally allowed 85 degrees C [185 degrees F]. The high input temperature of the coolant results in an even higher-grade heat at the output, which in this case is up to 65 degrees C.
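
For illustration, the quoted temperatures and the roughly nine kilowatts of recovered heat imply a modest coolant flow; this is a sketch using standard water properties, not an IBM figure.

```python
# Illustrative coolant-flow estimate from the quoted temperatures (a
# sketch pairing the ~9 kW of recovered heat with the 60 C inlet /
# 65 C outlet figures and the specific heat of water).
heat_w = 9000          # watts of waste heat fed to the building
delta_t = 65 - 60      # temperature rise of the coolant, in kelvin
c_water = 4186         # specific heat of water, J/(kg*K)

flow_kg_per_s = heat_w / (c_water * delta_t)
print(round(flow_kg_per_s, 2))   # ~0.43 kg/s, i.e. roughly 26 liters per minute
```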

For more information, including a short video, check out a prior announcement here.

Aquasar is installed at the Department of Mechanical and Process Engineering at ETH Zurich.

IBM, Arctur Partner to Bring Supercomputing to the Midmarket
http://www.hpcwire.com/2010/06/29/ibm_arctur_partner_to_bring_supercomputing_to_the_midmarket/
Tue, 29 Jun 2010

IBM and Slovenian software developer Arctur have signed an agreement to build one of the most powerful supercomputers in the region. Arctur will allow midmarket companies to lease time on the iDataPlex system.

IBM and Slovenian software development firm Arctur have agreed to partner on an iDataPlex system that, with the help of the cloud, will allow Slovenian companies to lease time to speed product cycles and enhance business competition. According to Arctur, this system will allow up to a 75% reduction in product development time. The iDataPlex supercomputer, which will run on Linux (and sounds quite similar in many ways to the iDataPlex system in use for the Magellan cloud testbed), will be able to perform in the range of 10 trillion calculations per second (roughly 10 teraflops Rpeak) and is expected to hit the 25-teraflop mark in the foreseeable future. It will make use of the new Intel six-core processors and QDR InfiniBand for peak energy efficiency and performance.

NVIDIA's GPU computing ambitions got a major boost today with IBM's announcement of the iDataPlex dx360 M3. The new HPC server pairs two Tesla GPUs with two CPUs inside the same server chassis. With it, IBM becomes the first Tier 1 server vendor to bring CPU-GPU "hybrid" computing to the high performance computing market.

“This is the first time we’re in a mainstream server,” says NVIDIA’s Sumit Gupta, senior product manager for the Tesla GPU computing group. Last week, Appro, Supermicro, AMAX and Tyan announced integrated CPU-GPU server gear based on NVIDIA’s new Fermi architecture Tesla 20-series devices. What IBM provides is a broad global sales channel and unmatched brand recognition.

All these systems, including the new iDataPlex from IBM, make use of the latest Tesla M2050 computing modules that can be integrated into a CPU-based host system. Each M2050 delivers 515 gigaflops of raw double precision floating point performance (or 1,030 gigaflops single precision), and comes with 3 GB of GDDR5 memory. IBM customers can also opt for the M2070, which offers the same floating point performance, but with 6 GB of local GPU memory.
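
For context on where those peak numbers come from, here is a sketch using the commonly cited Fermi core count and clock, which are assumptions rather than details from IBM's announcement.

```python
# Sketch of the M2050's peak figures (core count and clock are the
# commonly cited Fermi numbers, assumed here, not taken from the article).
cuda_cores = 448
clock_ghz = 1.15
flops_per_core_per_cycle_sp = 2   # fused multiply-add
dp_to_sp_ratio = 0.5              # Tesla-class Fermi runs double precision at half rate

sp_gflops = cuda_cores * clock_ghz * flops_per_core_per_cycle_sp   # ~1030
dp_gflops = sp_gflops * dp_to_sp_ratio                             # ~515
print(round(sp_gflops), round(dp_gflops))
```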

The base configuration on the new iDataPlex consists of a two-socket motherboard with the latest Intel Xeon CPUs. A riser card is used to hook in the Tesla modules. The configuration allows for relatively easy maintenance and replacement of the GPU components.

IBM’s move into the GPU computing space is a big win for NVIDIA and for GPU acceptance in HPC, in general. Over the past couple of years, the company had remained very quiet on the GPU computing front, and there were no indications it would be adding this capability to its HPC lineup. “I think what’s changed is that customers have been experimenting for a long time and now they’re getting ready to buy,” says Dave Turek, vice president of the deep computing group at IBM. “It’s as simple as that.”

According to Turek, IBM has been tracking customer demand for this capability for some time, and felt now was the time to jump onto the GPU computing train. From Turek’s point of view, this is less about the extra capabilities provided by NVIDIA’s new Fermi architecture (ECC memory, double precision, programmability) and more about the general increase in customer acceptance of the GPU computing paradigm. “If the marketplace hadn’t been ready at this time, we would have bypassed this for sure,” he admits. “It wasn’t the technology that drove us to do this. It was the maturation of the marketplace and the attitude toward using this technology.”

The company expects the new GPU-equipped iDataPlex to get the most traction in what have become the early adopter segments for GPU accelerated computing, namely the oil and gas industry, big science research at government labs and universities, and the biotech space (with perhaps some uptake by financial institutions). All of those segments have a few things in common that makes them an especially attractive target for GPU acceleration: a nearly endless need for more vector math capability, in-house programming expertise to push their apps over the GPU programming hurdle, and a limited dependency on ISVs who may or may not be interested in GPU support.

IBM’s decision to pursue the HPC market with a CPU-GPU offering is particularly relevant in another sense. Over the past couple of years, the company had pinned much of its hybrid supercomputing hopes on its own HPC variant of the Cell processor: the PowerXCell 8i. That processor was used to power the Roadrunner supercomputer, the first general-purpose computing system to break the Linpack petaflop barrier back in 2008. IBM still offers the Cell-based QS22 blades based on the PowerXCell 8i, but has halted plans to forge a successor to that chip design.

In fact, from IBM’s point of view, the GPU-equipped iDataPlex is just another entry in its rather large portfolio of HPC hardware. Between the new Power7-based 755 servers, the Blue Gene/P, and its x86-based iDataPlex gear, IBM has probably the broadest HPC offerings in the industry. The hybrid computing iDataPlex is another way the company thinks it can cover what has become a fairly diverse HPC market.

Turek says IBM will be careful not to overhype its new GPU-accelerated boxes. Although coprocessor acceleration seems to be in vogue right now, not every application is going to be able to take advantage of it. Certainly most matrix math-intensive apps will be able to realize a several-fold performance boost compared to a CPU-only implementation, but it really depends on how much of the code is engaged in these types of operations and how much is just doing sequential threading.
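
Turek's caveat is essentially Amdahl's law; a minimal sketch with made-up numbers shows how much the acceleratable fraction matters.

```python
# Amdahl's law sketch with made-up numbers, purely for illustration.
def overall_speedup(accel_fraction: float, accel_factor: float) -> float:
    """Whole-application speedup when only part of the runtime accelerates."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

# Even with an 8x kernel speedup, an app that spends only half its time
# in matrix math sees well under 2x overall.
print(round(overall_speedup(accel_fraction=0.5, accel_factor=8.0), 2))   # ~1.78
print(round(overall_speedup(accel_fraction=0.9, accel_factor=8.0), 2))   # ~4.71
```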

If Linpack is a guide — and that’s really all it is — some apps will do very well indeed on the new Fermi GPUs. NVIDIA ran some benchmarks on its own CPU-GPU server, consisting of two Tesla C2050 cards (comparable to two M2050s) plus two Intel Xeon X5550 processors, with 48 GB memory. They found Linpack performance was eight times that of a comparable CPU-only server: 80.1 gigaflops for the CPU version versus 656.1 for the GPU-accelerated box. When they looked at price-performance and power usage, they found a five-fold advantage. So for $1 million worth of CPUs, you can get 10 teraflops of Linpack, while that same money spent on GPU-CPU gear will get you to 50 teraflops — and a certain spot on the TOP500 if you’re interested in HPC celebrity.
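
Working through NVIDIA's numbers makes the comparison concrete; the dollar framing below simply restates the five-fold price-performance claim.

```python
# Numbers as reported by NVIDIA in the paragraph above; the budget
# example just restates the stated five-fold price-performance edge.
cpu_only_gflops = 80.1
gpu_accel_gflops = 656.1
print(round(gpu_accel_gflops / cpu_only_gflops, 1))   # ~8.2x Linpack speedup

budget_usd = 1_000_000
cpu_linpack_tflops = 10
gpu_linpack_tflops = cpu_linpack_tflops * 5
print(gpu_linpack_tflops)   # 50 teraflops of Linpack for the same $1M
```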

With IBM now in the GPU computing game, it’s almost a sure bet HP and Dell won’t be far behind. And with the tier 1 OEMs onboard, integrated CPU-GPU servers are likely to become standard operating equipment by most, if not all, HPC vendors over the next several months.