HPCwire » HPCMP
http://www.hpcwire.com
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

SGI Scores Second DoD Deal in Two Months
http://www.hpcwire.com/2014/12/18/sgi-scores-second-dod-deal-two-months/
Fri, 19 Dec 2014

SGI was awarded a contract worth $30,750,000 to supply the Air Force Research Laboratory (AFRL) with a 3.9 petaflops SGI ICE X supercomputer. This is the second time in the last two months that SGI has inked a major deal with the Department of Defense (DoD) for its ICE product. Both awards were allocated through the DoD’s High Performance Computing Modernization Program (HPCMP), which connects DoD scientists and engineers with the HPC resources they need to explore new theories, reduce the time and cost of developing weapon systems, and improve design quality. In October, SGI announced that it would be providing the US Army Engineer Research and Development Center (ERDC) with an ICE X supercomputer as part of a technology insertion contract. Both systems will facilitate mission-critical research and innovation for the DoD’s most significant challenges.

SGI’s Rebecca Noriega revealed that the new system will have 3,576 compute nodes equipped with Intel Xeon E5-2699 v3 “Haswell” CPUs, supplemented by NVIDIA GPGPU nodes and Intel Xeon Phi accelerator nodes (178 of each). The operating system will be SUSE Linux Enterprise Server 11. The petascale system will be housed in the Defense Supercomputing Resource Center at Wright-Patterson Air Force Base in Ohio, where it will support the DoD’s science and engineering communities across a wide range of application areas, spanning fluid dynamics, structural mechanics, materials design, space situational awareness, climate and ocean modeling, and environmental quality.
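For a sense of where the 3.9 petaflops figure could come from, here is a rough back-of-the-envelope sketch of the CPU partition’s theoretical peak. The dual-socket node configuration, the roughly 1.9 GHz AVX base clock for the E5-2699 v3, and the 16 double-precision flops per core per cycle are our assumptions about the configuration, not figures confirmed in the announcement; the accelerator nodes would add to the total.

```python
# Back-of-the-envelope theoretical peak for the CPU partition of the AFRL ICE X system.
# Assumptions (not confirmed in the announcement): dual-socket nodes, an AVX base
# clock of ~1.9 GHz for the Xeon E5-2699 v3, and 16 double-precision flops per core
# per cycle on Haswell (two AVX2 FMA units x 4 doubles x 2 flops).

nodes = 3576                   # CPU compute nodes quoted by SGI
sockets_per_node = 2           # assumed dual-socket blades
cores_per_socket = 18          # Xeon E5-2699 v3
avx_clock_hz = 1.9e9           # assumed AVX base frequency
flops_per_core_per_cycle = 16  # assumed Haswell AVX2 FMA throughput

peak_flops = (nodes * sockets_per_node * cores_per_socket
              * avx_clock_hz * flops_per_core_per_cycle)

print(f"Estimated CPU-partition peak: {peak_flops / 1e15:.2f} petaflops")
# ~3.9 petaflops, in the neighborhood of the figure quoted for the system
```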

“The new system will be co-located with the 8 M-Cell ICE system deployed as part of the DoD’s Spirit SGI ICE X supercomputer and will feature 6 M-Cells—SGI’s single largest M-Cell deployment,” writes Noriega. “SGI’s M-Cell technology provides industry-leading power and cooling efficiency.”

The AFRL contract also includes 12.4 PB of SGI InfiniteStorage 5600 storage on NetApp E-Series technology, running Intel Enterprise Edition for Lustre software. Department of Defense documents indicate the project will be completed by July 3, 2019.

The October deal between the DoD and the US Army Engineer Research and Development Center was for a 4.6 petaflops SGI ICE X supercomputer and an InfiniteStorage 5600 storage system to be installed in a new multi-million dollar facility at the US Army ERDC Information Technology Laboratory in Vicksburg, MS. When the system was announced, SGI referenced it as “the fastest unclassified supercomputer in the DoD.”

Cray has also been tapped to supply the DoD with a powerful compute and storage infrastructure. The supercomputer maker will provide the Defense Department with two XC40 supercomputers and two Sonexion storage systems as part of a $30 million contract with the High Performance Computing Modernization Program. The US Navy DoD Supercomputing Resource Center at the John C. Stennis Space Center, one of the five supercomputing centers established by the HPCMP, will house the Cray systems, which will be used for high-resolution coastal-ocean circulation and wave-model oceanography research in support of Navy and DoD operations worldwide.

Army Wins Major Supercomputing Award
http://www.hpcwire.com/2014/10/07/army-wins-major-supercomputing-award/
Tue, 07 Oct 2014

The US Army Research Laboratory is getting $500,000 and one billion hours of supercomputing time to study the inner workings of internal combustion engines. The award was granted by the Department of Defense’s High Performance Computing Modernization Program (HPCMP) Frontier Project, now in its second year. The Army Research Lab will receive $100,000 per year over the next five years as well as one billion hours on the DoD’s fastest supercomputers.
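To put one billion processor-hours in perspective, a quick calculation shows how much hardware that represents if the allocation were drawn down evenly over the five-year award (an even-spread assumption on our part; the program may schedule the hours differently):

```python
# What a one-billion-hour allocation looks like if spread evenly over five years.
# The even-spread assumption is ours; the HPCMP may allocate the hours differently.

total_core_hours = 1_000_000_000
years = 5
hours_per_year = 365 * 24          # 8,760 hours in a (non-leap) year

core_hours_per_year = total_core_hours / years
cores_continuous = core_hours_per_year / hours_per_year

print(f"{core_hours_per_year:,.0f} core-hours per year")
print(f"~{cores_continuous:,.0f} cores running continuously, year-round")
# Roughly 22,800 cores kept busy around the clock for the full five years.
```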

As outlined in an ARL press release, researchers in ARL’s Vehicle Technology Directorate and Iowa State University are investigating two key components of in-cylinder mixtures: spray atomization and liquid-solid spray interactions. The participants will use the DoD supercomputing allocation to carry out high-fidelity modeling with the aim of achieving a quantum leap in engine efficiency.

In internal combustion engines, the fuel and oxidizer combine in a combustion chamber to deliver force to engine components. The fuel-oxidizer mixture determines combustion quality and engine efficiency. Despite being an established technology, the turbulent spray atomization process remains an outstanding problem in multi-phase flows, according to Dr. Luis Bravo, the principal investigator in ARL’s Frontier project.

“This has been hindered in part by the well-known inaccessibility of the near nozzle optically thick region,” states Bravo, an Army mechanical engineer specializing in computational and thermal sciences. “As a result, coarse models and approximations have been used to simulate spray breakup which do not correctly represent the physics. Direct numerical simulations, as proposed in this work, are aimed at studying the fundamental mechanisms in regions where experimental access and analysis is difficult.”

“State-of-the-art high fidelity simulations carry a significant computational overhead arising from the large-scale physical disparities in turbulent atomizing flows. This approach will accelerate the development of next-generation internal combustion engines for aerial and ground combat vehicle applications and will feature significant increases in fuel economy and power densities.”

The objective of the Frontier program is to enable the exploration of science and technology outcomes that would not be achievable using typically available HPCMP resources. Submissions are evaluated on whether they represent a potentially significant contribution to the scientific and engineering community and on their requirements for HPC computational time and resources.

Cray Corrals Big Defense Deal
http://www.hpcwire.com/2010/02/25/cray_corrals_big_defense_deal/

A week after closing the books on 2009, Cray is busy building its 2010 business. On Wednesday, the company announced it had nabbed a $45 million contract with the US Department of Defense (DoD) to deliver three Baker-class supercomputers to the agency.

The three new supers are being procured for the DoD’s High Performance Computing Modernization Program (HPCMP), and specifically for the US Air Force Research Laboratory at the Wright-Patterson Air Force Base in Ohio; the Arctic Region Supercomputing Center in Fairbanks, Alaska; and the US Army Engineer Research and Development Center in Vicksburg, Mississippi. According to Cray’s press release, the contract is the largest HPCMP award to a single vendor.

The supercomputers will be used to support R&D for new materials, fuels, armor and weapons systems — what the US military sometimes euphemistically refers to as “product development.” The systems will also be put to use in military planning, humanitarian missions and long-term weather forecasting.

The $45 million multi-year contract includes services as well as hardware. From that we can surmise that the three machines are almost certainly sub-petaflop-level supers (unless Cray gave them a really, really sweet deal). Nonetheless, this represents a significant win for the company, and gives Cray’s upcoming Baker system a nice endorsement from an organization that has had plenty of experience with the supercomputer maker.

Four of the six DoD supercomputing centers already own Cray gear, including the US Army Research Laboratory (two XT5s), the Arctic Region Supercomputing Center (one XT5), the Army’s Engineer Research and Development Center (one Cray XT4), and the Navy’s DoD Supercomputing Resource Center (one XT5). By adding the Air Force Research Lab in the new contract, that puts Cray machinery in five of the six DoD centers.

Whether Ungaro and company are able to book any of the $45 million in 2010 will depend upon getting the Baker systems launched on time. As we reported last week, the Bakers are due out in the third quarter of this year. According to Cray, the development of the next-generation XT system is currently on schedule.

Arctic Region Supercomputing Center Gets Cold Shoulder from DoD

At a time when supercomputing centers seem to be multiplying across the US, the one up in Alaska looks like it could become an endangered species. The Arctic Region Supercomputing Center (ARSC) is slated to lose its Department of Defense (DoD) funding at the end of May 2011, putting the jobs of nearly 50 employees in jeopardy and shrinking the scope of the work done at the northernmost HPC facility in the United States.

Fairbanks-based ARSC is a dual-purpose supercomputing center, serving researchers at the University of Alaska-Fairbanks (UAF) as well as the DoD’s High Performance Computing Modernization Program (HPCMP). This two-pronged mission has been in effect since the center was inaugurated in 1993, and has given the university access to some world-class supercomputing machinery.

ARSC is currently one of six HPCMP centers, the others being the Army Research Laboratory DSRC at Aberdeen Proving Ground in Maryland; the Air Force Research Laboratory DSRC at Wright-Patterson AFB in Ohio; the Maui High Performance Computing Center in Kihei, Maui, Hawaii; the Army Engineer Research and Development Center DSRC in Vicksburg, Mississippi; and the Navy DoD Supercomputing Resource Center at Stennis Space Center, also in Mississippi.

A pre-Thanksgiving email to ARSC confirmed what many at the center had suspected, namely that the center would lose its DoD funding after the current money expires next May. Today the center is funded to the tune of $12 to $15 million, and the DoD slice represents around 95 percent of the total.

According to ARSC director Frank Williams, they’ve been looking to move the UAF academic work off the DoD HPC platforms for the past couple of years, and that process is now complete. “We came of age just in time,” he told HPCwire.

That academic work was transferred to the recently deployed “pacman” system, an AMD Opteron-based HPC cluster from Penguin Computing. Funding for this system came from a number of NSF grants (one of which was named Pacific Area Climate Monitoring and Analysis Network, or PACMAN for short). The machine was procured explicitly for the academic users at UAF, and is being used to support a range of Arctic-oriented scientific research, including studies of climate change, ocean circulation, permafrost, tsunamis, and regional weather patterns.

The pacman system is actually the synthesis of three separate procurements, which were subsequently consolidated into a single cluster. The combined machine encompasses more than 2,000 CPU cores made up of AMD’s latest Magny-Cours Opterons. There are also a couple of NVIDIA Fermi GPU-equipped nodes on pacman, which plays into the university’s research with GPGPU computing. The UAF researchers are happy to have a recent vintage machine devoted entirely to their work. “It’s pretty skookum,” said Williams, employing the Alaskan slang for something really cool or excellent.

Some maintenance and operational support for pacman was included with the original NSF funding, and the center is now working with the university to augment that beyond the end of the DoD money. In fact, UAF is on the hook to pick up the entire operational budget of the datacenter, something the university is prepared to do, according to Williams.

The hard part will be figuring out a way to transition the people who were dependent on DoD work to the academic side. Given that most of the 50 or so ARSC employees were being funded out of the HPCMP money, that will be quite a challenge.

As for the DoD systems themselves, the majority housed at ARSC are smaller test and development systems. The center’s main production machine is Chugach, a Cray XE6 ‘Baker’ supercomputer, which was part of a recent big procurement under HPCMP. That system has been placed under the management of the Vicksburg center and is being run remotely. Chugach’s predecessor, an XT5 machine named Pingo, is also in production, but as soon as Chugach completes acceptance testing (which is imminent), Pingo will be retired.

Starting next June, ARSC will be forced into the more traditional path of a university-based HPC center, using mainly NSF and local funding to keep its systems up and running. Williams is glad to see the UAF administration stepping up to fill some of the void left by the DoD’s exit, but it remains to be seen how smooth that transition is going to be. “We really have the hardware to support academic high performance computing research,” says Williams. “Now it’s just a matter of making sure we can find a way to have enough staff to support it. We don’t aspire to be a $15 million academic center at this point.”