NASA – HPCwire
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Glimpses of Today's Total Solar Eclipse (August 21, 2017)

Here are a few arresting images posted by NASA of today's total solar eclipse. Such astronomical events have always captured our imagination and it's not hard to understand why such occurrences were often greeted with fear and seen as harbingers of evil before their true nature was understood.

1. This full-disk geocolor image from GOES-16 shows the shadow of the moon covering a large portion of the northwestern U.S. earlier today, August 21, 2017.

2. This composite image, made from seven frames, shows the International Space Station, with a crew of six onboard, as it transits the Sun at roughly five miles per second during a partial solar eclipse, Monday, Aug. 21, 2017 near Banner, Wyoming.

Photo Credit: (NASA/Joel Kowsky)

3. On August 21, 2017, the Earth will cross the shadow of the moon, creating a total solar eclipse. Eclipses happen about every six months, but this one is special. For the first time in almost 40 years, the path of the moon’s shadow passes through the continental United States. This is a visualization of the event.

4. The Moon is seen passing in front of the Sun during a solar eclipse from Ross Lake, Northern Cascades National Park, Washington on Monday, Aug. 21, 2017.

Photo Credit: (NASA/Bill Ingalls)

5. The Moon is seen passing in front of the Sun at the point of the maximum of the partial solar eclipse near Banner, Wyoming on Monday, Aug. 21, 2017. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina. A partial solar eclipse was visible across the entire North American continent along with parts of South America, Africa, and Europe.

Photo Credit: (NASA/Joel Kowsky)

6. Once again, the total solar eclipse is seen on Monday, August 21, 2017 above Madras, Oregon.

HPE Ships Supercomputer to Space Station, Final Destination Mars (August 14, 2017)

With a manned mission to Mars on the horizon, the demand for space-based supercomputing is at hand. Today HPE and NASA sent the first off-the-shelf HPC system into space aboard the SpaceX Dragon spacecraft to explore whether such a system, equipped with purpose-built software from HPE, can operate successfully under harsh environmental conditions that include radiation, solar flares, and unstable electrical power.

Currently, ruggedizing space-bound computers is a years-long process, so by the time they blast off they are three to four generations behind the state of the art. HPE has designed its new system software to mitigate environmentally induced errors using real-time adaptive throttling techniques. If the approach proves successful, space travelers would no longer need to put their computers through the extensive hardening process and could benefit from the latest technologies.

After this morning’s launch from NASA’s Kennedy Space Center (Merritt Island, Florida), the Spaceborne Computer is headed to the International Space Station (ISS) for one year, which is about how long it takes to get to Mars.

“Our vision is to have a general purpose HPC supercomputer on board the spacecraft,” said Dr. Mark Fernandez, lead payload engineer for the project. “Today, all of the experiments must send the data to earth over the precious network bandwidth, and this opens up the opportunity for what we’ve been talking a lot about lately, which is bringing the compute to the data rather than bringing the data to the compute.”

As one considers the latency and bandwidth constraints of space travel, the advantage of on-board HPC is clear. The average round-trip signal delay approaches 26 minutes as a spacecraft nears Mars. With that kind of delay it’s hard to hold a conversation over the network, much less carry out complex computational tasks. “When you need on-the-spot computation, for simulation, analytics, artificial intelligence, the answers tend to take a bit too long to come by if you rely on earth, so more and more as you travel further and further out you need to carry more compute power with you – this is our belief,” said Dr. Eng Lim Goh, vice president and chief technology officer of SGI at HPE and one of the inventors of the approach.
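For a rough sense of that delay, one-way light time is simply distance divided by the speed of light. The sketch below uses approximate Earth–Mars distances chosen for illustration, not mission figures:

```python
# Illustrative one-way and round-trip light times between Earth and Mars.
# Distances are approximate orbital extremes, not values from the article.
C_KM_S = 299_792.458  # speed of light, km/s

distances_km = {
    "closest approach (~0.38 AU)": 56_000_000,
    "average separation (~1.5 AU)": 225_000_000,
    "farthest separation (~2.7 AU)": 401_000_000,
}

for label, d in distances_km.items():
    one_way_min = d / C_KM_S / 60
    print(f"{label}: one-way {one_way_min:.1f} min, round trip {2 * one_way_min:.1f} min")
```

At the average separation this works out to roughly a 25-minute round trip, in line with the 26-minute figure above.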

Ultimately, HPE is positioning its memory-driven supercomputer, The Machine, for Mars exploration.

Sending people to Mars opens up enormous computing demands. They will need to be “guided by a computer capable of performing extraordinary tasks,” writes Kirk Bresniker, Chief Architect, Hewlett Packard Labs. These include:

Monitoring onboard systems the way a smart city would monitor itself—anticipating and addressing problems before they threaten the mission.

Tracking minute-by-minute changes in astronaut health—monitoring vitals and personalizing treatments to fit the exact need in the exact moment.

Coordinating every terrestrial, deep space, Martian orbital and rover sensor available, so crew and craft can react to changing conditions in real time.

And, perhaps most importantly… combining these data sets to find the hidden correlations that can keep a mission and crew alive.

“Memory-Driven Computing will help us efficiently and effectively tackle the big data challenges of our day, and make it possible for us to—one day—send humans to Mars,” asserts Goh. “But even if we expect Memory-Driven Computing to become the standard for supercomputing in space we need to start somewhere.”

To that end, the phase-one Spaceborne Computer includes two x86 HPE Apollo 40-class two-socket systems powered by “Broadwell” Xeon processors – the latest generation available when NASA froze the configuration in March ahead of shipment.

The InfiniBand-connected Linux cluster will be housed in a locker of standard NASA dimensions, equipped with standard Ethernet cables, standard 110-volt AC connectors and NASA-approved cooling technology. The rack design means the system can easily be swapped out for an upgraded model. No modifications were made to the main components, but HPE created a custom water-cooled enclosure that taps into a cooling loop on the space station, leveraging the free ambient cooling of space.

During its year in Earth orbit, the computer will run three HPC benchmarks, each of which targets a different kind of computational workload: the compute- and power-hungry Linpack, the data-intensive HPCG, and a benchmark suite from NASA, the NAS Parallel Benchmarks.

“We selected these for relevance, to be as realistic as possible for NASA and space related work,” said Goh.

HPE designed the entire experiment so that testing can run autonomously. “It doesn’t require the astronauts to be system engineers,” said Goh. “They just need to plug the system in and turn it on and the experiments will run automatically.”

The tests will generate approximately 5 megabytes of data per day that will be sent to HPE for analysis. There’s also the capability for an uplink that would give cleared HPE team members limited access to the system, but the plan is to run autonomously other than the regular downloads of data, which will be compared with a control machine in Chippewa Falls, Wisconsin.
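HPE has not published the experiment's control scripts, so the following is only a guess at what a daily, unattended benchmark-and-downlink cycle might look like; every command, path, and file name here is invented for illustration rather than taken from HPE's actual setup.

```python
# Hypothetical daily cycle for an unattended benchmark run: execute each job,
# capture a small summary, and stage it for the regular downlink.
# All commands, paths, and hosts are illustrative, not HPE's configuration.
import datetime
import json
import pathlib
import subprocess

BENCHMARKS = {
    "linpack": ["./xhpl"],    # compute- and power-bound
    "hpcg": ["./xhpcg"],      # memory/data intensive
    "nas_ft": ["./ft.C.x"],   # one NAS Parallel Benchmarks kernel
}
OUTBOX = pathlib.Path("/data/downlink")  # hypothetical staging area for comms

def run_once(name, cmd):
    """Run one benchmark and keep just enough output to compare with the ground twin."""
    started = datetime.datetime.utcnow().isoformat()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "benchmark": name,
        "started_utc": started,
        "returncode": proc.returncode,
        "stdout_tail": proc.stdout[-2000:],  # keep the daily package within a ~5 MB budget
    }

def main():
    OUTBOX.mkdir(parents=True, exist_ok=True)
    results = [run_once(name, cmd) for name, cmd in BENCHMARKS.items()]
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    (OUTBOX / f"spaceborne_{stamp}.json").write_text(json.dumps(results, indent=2))

if __name__ == "__main__":
    main()
```

Comparing each day's summary against the same run on the ground-based control machine then reduces to a simple diff of return codes and reported results.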

Through its SGI acquisition, HPE has a relationship with NASA that extends back 30 years. The Spaceborne Computer’s Apollo 40 compute nodes are the same class as those used in NASA’s flagship Pleiades supercomputer, an SGI ICE X machine ranked at number 15 on the current Top500 list.

Sub-Sahara Tree Survey Finds Success with AWS, Hybrid Cloud (August 4, 2016)

Did you know that there are more trees on Planet Earth than stars in the Milky Way? Scientists estimate that there are some three trillion trees on terra firma, compared to 100 billion stars in our home galaxy. Trees act as the lungs of the planet, absorbing carbon dioxide and releasing oxygen. When trees burn or decompose, they release carbon stores back into the atmosphere with potentially significant climate impact.

Vegetation is such an important part of climate health that scientists are working to perfect biomass measurement techniques. A region of particular interest is a coast-to-coast band of Sub-Sahara Africa (see figure below) that is home to an estimated one billion trees and shrubs. Scientists from the University of Minnesota and the NASA Center for Climate Simulation (NCCS) were able to arrive at this figure using a combination of NASA resources and commercial cloud technologies to process satellite image data.

The Sub-Sahara region of Africa covers Universal Transverse Mercator (UTM) zones 28 through 38. At nearly 10 million square kilometers the study area is larger than the continental United States. Image by Katherine Melocik, GSFC. Source: NCCS

The scope of processing and analyzing all the satellite data for this massive 10 million square kilometer region of Africa puts it in the category of data-intensive science. The scale and significance of the problem led to Intel and Amazon Web Services offering their resources as part of Intel’s Head in the Clouds Challenge.

The team started with 260,000 satellite images, comprising 200 terabytes of data. The images capture the entire canvas of Sub-Sahara’s 11 Universal Transverse Mercator (UTM) zones, each about 877,000 square km.

Before the data could be sent to the cloud, it needed to be cleaned up and divided into smaller parts to accommodate parallel processing. Using the NCCS private cloud – the Advanced Data Analytics Platform (ADAPT) – the team was able to reduce the original image set by more than 60 percent by carefully removing extraneous elements. These orthorectified images were then deconstructed into 25-km x 25-km sub-tiles. The process is detailed by Jarrett Cohen of NASA Goddard Space Flight Center in a recent report on the project.

“[T]he researchers use ADAPT to stack all available satellite images, put them into the same map projection, select the best images, cut out overlap, and color-calibrate and resample the images to a consistent 50-centimeter resolution,” writes Cohen. “These steps reduce the number of satellite images to 100,000 across all UTM zones. The team deconstructs each UTM into 100- by 100-km tiles and then 25- by 25-km sub-tiles called mosaics that go to AWS for the biomass estimate calculations.”
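As a rough illustration of the tiling arithmetic only (not the team's actual code), at 50-centimeter resolution a 100 km x 100 km tile is a 200,000 x 200,000-pixel image, and carving it into 25 km x 25 km mosaics yields 16 sub-tiles per tile. The sketch below just enumerates those pixel windows:

```python
# Illustrative tiling arithmetic: enumerate 25 km x 25 km sub-tile windows
# inside a 100 km x 100 km tile at 50 cm ground resolution.
RESOLUTION_M = 0.5   # meters per pixel
TILE_KM = 100
SUBTILE_KM = 25

tile_px = int(TILE_KM * 1000 / RESOLUTION_M)        # 200,000 pixels per side
subtile_px = int(SUBTILE_KM * 1000 / RESOLUTION_M)  # 50,000 pixels per side

windows = [
    (row, col, subtile_px, subtile_px)
    for row in range(0, tile_px, subtile_px)
    for col in range(0, tile_px, subtile_px)
]
print(f"{len(windows)} sub-tiles of {subtile_px} x {subtile_px} pixels each")
# -> 16 sub-tiles of 50000 x 50000 pixels each
```

Each window becomes an independent work unit that can be shipped to the cloud and processed in parallel.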

ADAPT is equipped with “fat” nodes, containing 256 gigabytes of RAM and 6 terabytes of flash, to satisfy data-intensive I/O requirements. Further, the cloud’s link into the AWS East facility was upgraded from 1 to 10 gigabits per second to accommodate more robust data transfers. Cloud management software from Cycle Computing handled job submission and kept data streaming smoothly.

About 5,000 AWS cores were utilized to process the sub-tile mosaics. “Team-optimized algorithms identify and count the number of trees and shrubs, measure crown area, and, from the shadows cast, determine tree height,” explains Cohen. The result is a biomass estimate for each mosaic. The sub-tile mosaics are then stitched back together in such a way that there is one larger mosaic representation for each UTM zone.

ADAPT has commitments to a number of big projects, so the hybrid cloud model leverages a production cloud, such as AWS, to secure additional capacity in a timely manner. In a follow-on project, analyzing black-and-white image data for the entire Sub-Sahara region, the team used AWS Spot Instances to process 43 terabytes of data in 72 hours for less than $2,000. Similarly sized jobs would have taken months to complete on the resource-constrained ADAPT cloud, according to one of the team members.
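Those figures imply a per-core-hour cost of well under a cent. The back-of-the-envelope check below uses only the numbers quoted above; it does not reflect actual AWS pricing:

```python
# Back-of-the-envelope cost check for the Spot Instance run described above.
# Core count, runtime, budget, and data volume come from the article;
# the derived rates are simple arithmetic, not AWS pricing data.
cores = 5_000
hours = 72
budget_usd = 2_000
terabytes = 43

core_hours = cores * hours
print(f"{core_hours:,} core-hours")                                   # 360,000 core-hours
print(f"~${budget_usd / core_hours:.4f} per core-hour")               # ~$0.0056 per core-hour
print(f"~{terabytes / hours * 1000:.0f} GB of imagery processed per hour")  # ~597 GB/hour
```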

Using these tools and resources, the researchers say they can boost the accuracy of their biomass census even further, while reducing their overhead. The effort is still underway. Project leads include NASA High-End Computing Program Manager Tsengdar Lee; Compton Tucker, NASA Goddard Space Flight Center (GSFC) Earth scientist; and University of Minnesota’s Paul Morin.

HPC User Forum Presses NSCI Panelists on Plans (September 17, 2015)

In less than two months, the National Strategic Computing Initiative (NSCI) Executive Council must present its implementation plan. Just what that will look like remains a mystery, but budgets, governance, and public-private partnering models were on the minds of attendees at last week’s HPC User Forum in Broomfield, CO, where the first public panels with NSCI agencies offered a wide-ranging glimpse into agency thinking. It was also a chance for HPC industry execs to press for more details.

At the moment industry enthusiasm for NSCI is high, and panelists strove to reinforce that goodwill and reassure attendees that disabling missteps could be avoided. On technology issues – despite differences around the edges – there was broad agreement among the panelists and attendees on the problems needing solving, with large-scale data analytics as the new but important kid on the block. Governance and collaboration challenges drew a more wary response from the audience, although panelists insisted conflicts would be amicably and equitably dealt with.

In the end conversation around non-technology issues was the most revealing with many attendees wondering what potential obstacles the NSCI panelists foresaw. Funding, perhaps not surprisingly, was a touchy topic particularly given the many calls that NSCI should emulate the U.S. Apollo program, which galvanized public opinion and loosened federal purse strings.

Playing devil’s advocate, Barry Bolding, Cray (NASDAQ: CRAY) senior vice president and chief strategy officer, said to panelists during Q&A, “I’d like to push the panel in the direction of pitfalls a little bit and hear about things you think could be gotchas. You’ve mentioned the space program a few times and how this might be a corollary and one can look at the space program and all of us agree it benefited the country a great deal. [But] one can also look at it and say, oh it’s been 45 years and we’re only beginning to have a vision for unmanned space program and only beginning to get private industry into space programs. So it wasn’t very successful.”

Put another way: what did we get for the money, and given the times, can we get those kinds of budgets going forward?

Randy Bryant, OSTP

This drew a pragmatic response from OSTP representative Randy Bryant: “It’s hard to get significant federal funding output [now]. Flat is the new normal; that’s true across the entire research budget and I see that as a core problem. The Apollo program was a great program but it consumed a significant fraction of the U.S. GDP. It was a huge investment, and I don’t anticipate in our current budget climate that would be possible,” he said. “It helped that there was an existential threat of the Soviet Union at the time. I don’t see anything that’s going to make us step up at that level.”

Rob Leland, one of the original organizers of the NSCI proposal and a representative from Sandia National Laboratories, countered by saying the real gotcha is to not sufficiently fund and execute NSCI.

“The US used to dominate investment in this space quite dramatically. In fact up until about 2010 or so U.S. investment was equal to the rest of the world combined and [it’s] now about a 1/3 of the total investment. More worrying is the disparity in growth rates. The U.S. growth rate in investment is about 2.5% [while] the average for the rest of the world that is engaged in this space is about 12 % and I think China is up to about 23%.

“If that disparity persists for five or 10 years we will not dominate this space technologically the way we have previously,” Leland said.
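Leland's worry about the growth-rate disparity compounds quickly. A minimal sketch, using only the rates he quotes and an arbitrary common starting value, makes the gap concrete:

```python
# Compound the investment growth rates quoted by Leland over 5 and 10 years.
# The starting value of 100 is arbitrary; only the ratios matter.
rates = {"U.S.": 0.025, "rest of world": 0.12, "China": 0.23}

for years in (5, 10):
    grown = {who: 100 * (1 + r) ** years for who, r in rates.items()}
    ratio = grown["rest of world"] / grown["U.S."]
    summary = ", ".join(f"{who} {val:.0f}" for who, val in grown.items())
    print(f"After {years} years: {summary} (rest of world / U.S. = {ratio:.1f}x)")
```

After a decade at those rates, rest-of-world investment would be growing roughly two and a half times faster than the U.S. baseline, which is the substance of his warning.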

Robert Leland, Sandia NL

If that wasn’t compelling enough, Leland emphasized that the “erosion of Moore’s Law” has upped the ante in HPC competition. “If we don’t rally effectively as a society around that challenge, the technical path forward is very unclear. I think there are also good indicators we’re coming to the end of the MPP era, and so if we don’t make a transition to some new architecture approach, I think we will be on a path of less relevance.”

In many ways, the event marked the beginning of NSCI’s public outreach to industry, in a program that, among other things, is designed to energize public-private partnering for the good of both. There were two panels: 1) US Plans for Advancing HPC: Potential Implications of the White House Executive Order and NSCI, and 2) Open Forum Discussion and Q&A of the NSCI Plans and Directions.

As a rule, panelists directed their comments to one or another of NSCI’s five strategic objectives excerpted from the Executive Order here:

Accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs.

Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.

Establishing, over the next 15 years, a viable path forward for future HPC systems even after the limits of current semiconductor technology are reached (the “post-Moore’s Law era”).

Increasing the capacity and capability of an enduring national HPC ecosystem by employing a holistic approach that addresses relevant factors such as networking technology, workflow, downward scaling, foundational algorithms and software, accessibility, and workforce development.

Developing an enduring public-private collaboration to ensure that the benefits of the research and development advances are, to the greatest extent, shared between the United States Government and industrial and academic sectors.

Not surprisingly, panelists’ comments largely reflected their specific agency missions – this was helpful in making clear there are some differing priorities. To a large extent the list of technology issues tackled was very familiar to anyone in HPC: the end of Moore’s Law; the death of single-thread performance; power management; the need for higher-fidelity models; the flood of data from scientific and other instruments and sensors; code modernization; and future computing (quantum, neuromorphic, et al.). You get the picture.

The problems are plentiful and solutions scarce, but that’s kind of the point of NSCI. Big data and code modernization took center stage. Here are a few examples:

Kothe (ORNL) pushed the importance of codesign, citing ongoing work: “In DOE there are three of those centers and I know the NNSA labs are heavily involved in that activity. In the ECI (Exascale Computing Initiative, DOE, and being somewhat subsumed into NSCI) project we see that activity continuing and growing. It’s critical because we are looking at some fairly substantial challenges at least on the applications side; the scariest are the deep memory hierarchies, probably more so than hybrid floating point.”

Mehrotra (NASA) locked in on big data issues: “We are very interested in the convergence of data analytics with HPC. Our satellites produce petabytes of data every year streaming down. This is observational data. How do we handle that data, how do we manage that data, and then how do we actually extract any knowledge out of that in conjunction with not just observational data [but with] model data. We are very concerned about how to bring the two environments together so that we can do quantitative simulation along with large-scale data analytics.”

Still (LLNL) sounded a familiar note on code modernization, “You’ve heard the acronym IC used for Intelligence Community, we used it in a slightly different way for integrated code. The gist is our IC [effort] is multi-million lines of work across the three labs within the NNSA; it’s kind of a $6B investment in code. We can’t rewrite it overnight and take advantage of each new architecture that shows up because the codes are decade-old type codes. We have to make modifications or reengineer one and revalidate. So performance portability is an absolute key inside the NCSI. We are all about trying to make useable machines. That is a key component as far as we’re concerned.”

Baker (PNNL) offered: “The amount of data that comes off a big instrument is too high a bandwidth to even write out to a box. So you’ve got a ‘baby and bathwater’ conundrum. We spend billions of dollars looking for rare particles, and yet the data is coming out at a rate that we may have to triage; we may lose what you’re looking for. How do you design robust algorithms that can handle that? How do you design algorithms that can detect what you need to detect and, although you’d love to keep all the data, triage what you have to triage?”

Transferring NSCI-generated technology advances to industry got shorter shrift than one might expect. Kramer (NCSA) strongly suggested that HPC-as-a-service must be a necessary component of any realistic approach to induce widespread use of HPC technology across most of industry.

“We’re talking about the management of IP and partnerships and relationships very seriously,” said Kothe (ORNL), “and that scope is probably not as deep and broad as it should be. I neglected to show our structure, which calls for councils – industry council, science council, board of directors. Not that boxology fixes everything, but I think at least we’re implementing lessons learned and best practices from past projects.”

Attendees dedicated a fair amount of discussion to coordination challenges within the NSCI organizational framework. Competition among government agencies for power and budget is hardly rare. Suggestions ranging from close coordination to loose coordination, and from a sole agency lead to a collective agency lead, were all raised at some point.

Irene Qualters, NSF

Qualters (NSF) said simply, “I think this is a very aggressive program and there’s not one path forward. I think one has to be careful. One wants a fair amount of innovation at this stage and diversity. So there can be coordination but I think the [idea] that you just have everyone marching in one line is wrong too.”

What came through is a hunger for a good model for success. This is a sprawling program – and sprawling programs have been tackled before (e.g., the Large Hadron Collider) and produced many lessons. One audience member looked back to the rural electrification project of the late 1930s as a good model, particularly from a public-private partnership perspective: it was long-lived and it worked. The HPC initiative of the early 1990s (the High Performance Computing Act of 1991) seemed to be the most favored.

As an early architect of the NSCI directive, Leland offered this: “I think there is an excellent analog in the HPC initiative in the early 1990s that is generally viewed as quite successful and I think we can hope to replicate that success. [If] you look at history I think each major new era in computing has been preceded, by 5-7 years, by a forward-looking investment by the government in R&D. [The pattern can be] traced back at least five cycles. I think that can be true again here. There are many indicators that we are approaching a wall and need to make a substantial jump in our capabilities and a change in our approach. I think all the preconditions are here for us to replicate that history once again with a sixth cycle.”

It will be interesting to see how opinions shift once the implementation plan comes out.

Unlocking the Mysteries of Space (June 16, 2015)

Some of the most powerful supercomputers in the world are helping NASA scientists reveal the mysteries of the universe. The intensive discovery process would not be possible without the modeling and simulation capabilities of high-performance computers, like Pleiades, which is located at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center, and Titan, the fastest US system, operated by the Department of Energy. Here are two projects that showcase the essential role of HPC in understanding space phenomena, along with remarkable renderings.

#1 — Deconstructing “Hot Jupiters”

The first depicts a night-side view of magnetic field lines in a simulation of a “hot Jupiter” exoplanet. The moniker is applied to planets that are of similar size to Jupiter but orbit much closer to their host stars. Astrophysicist Tamara Rogers and her team at the University of Arizona’s Lunar and Planetary Laboratory conducted these simulations to better understand the planets’ inner workings and how they formed. The simulations — run on the Pleiades supercomputer — were the first of their kind to include magnetic fields. Simulation projects like these are crucial for making sense of the observational data extracted from space-based instruments.

“By studying hot Jupiters, so different from the gas giants that slowly circle our own Sun, astronomers are expanding their knowledge of planetary structure and evolution—research that is crucial to the search for rocky, Earth-like exoplanets that may support life,” writes Michelle Moyer with NASA Ames Research Center.

#2 — Simulating Magnetic Reconnection

The second impressive rendering illustrates the formation of “magnetic flux ropes” within the reconnection layer of Earth’s magnetosphere. In support of the NASA Magnetospheric Multiscale Mission, a multi-institution research team led by William Daughton of Los Alamos National Laboratory is using Titan to study magnetic reconnection, a phenomenon associated with space weather that occurs when charged particles interact strongly with magnetic fields.

Daughton and his colleagues have been simulating this process for five years, using the Cray XT5 Jaguar supercomputer and then its successor, the Cray XK7 Titan supercomputer, at the Oak Ridge Leadership Computing Facility (OLCF). NASA scientists compare the simulations with experimental data obtained from the Magnetospheric Multiscale (MMS) Mission.

Speaking to the importance of being able to run larger simulations, Daughton said that the 10X-50X improvement offered by the next generation of supercomputers will expand the class of problems that scientists can solve. OLCF is on track to get this computational power boost when Summit comes online in 2018. Still, Daughton notes that “for these really huge runs, people are going to have to move to some in situ analysis and visualization because they just won’t be able to save everything.”

NASA's Earth Science Supercomputer Balloons to 3.3 Petaflops (April 29, 2015)

In what is being called an unprecedented upgrade, the NASA Center for Climate Simulation (NCCS) is tripling the peak performance of its Discover supercomputer to more than 3.3 petaflops to power NASA’s Earth science modeling efforts.

The open procurement process included the benchmarking of NCCS codes – notably the Goddard Earth Observing System Model, Version 5 (GEOS-5) and the NASA Unified-Weather and Research Forecasting (NU-WRF) Model. Based on performance and value criteria, SGI was selected to provide Rackable clusters, outfitted with 14-core Intel E5-2697v3 “Haswell” processors.

In this photo, taken several months ago partway through the upgrade, the NCCS supercomputer had 45,600 processor cores and a peak speed of 1.995 petaflops. The visible machine “skins” depict observed and simulated images of Hurricane Sandy. The Discover supercomputer’s new SGI Rackable clusters will house a total of 64,512 processor cores. Credit: Photo by NASA/Goddard/Bill Hrybyk.

NCCS is in the process of installing the SGI Rackable hardware as three Scalable Compute Units (SCUs 10, 11, and 12), which combined offer a total of 64,512 processor cores.
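As a rough sanity check on the headline number, theoretical peak is simply cores times clock rate times floating-point operations per cycle. The clock and FLOPs-per-cycle figures below are assumed values typical of a Haswell-class Xeon, not numbers from NASA's accounting:

```python
# Rough theoretical-peak estimate for the three new Haswell SCUs.
# 2.6 GHz base clock and 16 double-precision FLOPs per cycle (AVX2 FMA)
# are assumed, typical E5-2697 v3 figures rather than article data.
cores = 64_512
clock_hz = 2.6e9
flops_per_cycle = 16

peak_flops = cores * clock_hz * flops_per_cycle
print(f"~{peak_flops / 1e15:.2f} petaflops peak for the new SCUs alone")  # ~2.68 PF
```

Adding the existing scalable units that remain in service is what carries Discover past the 3.3-petaflop mark.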

Discover – which derives its name from the NASA adage “Explore. Discover. Understand.” – comprises multiple Linux scalable units built with commodity components. The first scalable Discover unit was installed in the fall of 2006 and there have been several upgrades since that time. The new clusters are replacing portions of Discover dating from 2011.

In its current form, the aggregate of Discover’s individual scalable units (SCUs 8, 9, 10 and 11) is 67 racks, incorporating 62,400 total cores, providing 2.678 petaflops of compute power. SCU10 achieved general availability in January, and SCU11 is currently in pioneer user mode. SCU12 is scheduled to arrive in late May.

NCCS describes the three stages that lead up to a successful deployment on its website, going into detail about the role of the vendor, the NCCS system administrators and benchmarking team, and the power users who put the system through its paces. One successful test involved running an ultra-high-resolution GEOS-5 simulation on the entire SCU10 cluster.

NASA’s Discover system administrator Mike Donovan observes that while a typical NCCS installation pace is one SCU per year, they are on track to stand up three SCUs in seven months. The effort requires close coordination among the NCCS technical and facilities staff and the computer vendor. The replacement of SCUs must be carefully timed in order to limit disruptions.

“We want to have the old hardware out at least a week beforehand,” said Bruce Pfaff, who leads Discover’s system administration team. “But we also want to maximize the amount of time users have with the old system and minimize the period of limited resources during the installation.”

Planning for the overhaul meant accounting for 1 megawatt of power and 400 tons of cooling capacity. Racks must be factory-configured for optimal onsite operations, and NCCS also acquired 10 nodes for its Test and Development System (TDS). There’s also the matter of scrubbing data from the old hardware as part of the decommissioning process.

A highlight of the new SGI clusters is the fully non-blocking interconnect fabric, where each 28-core node can communicate directly with every other node via FDR InfiniBand rated at 56 gigabits per second. The enhancements are being driven by the science workloads, which continue to push the compute and I/O envelope. Data volumes are also rising and a high-resolution simulation at NCCS can generate several petabytes of data. To ensure sufficient storage space, NCCS is more than doubling Discover’s online disk capacity from 12.4 to 33 petabytes.

NASA Supercomputer Intensifies Exomoon Search (February 11, 2015)

Aliens – they aren’t just the cornerstone of science fiction; they’re at the center of a key question for astronomers and philosophers alike: “Are we alone in the universe?”

Recently, NASA has made strides toward answering this question through its search for potentially habitable planets beyond our solar system. Not only could finding such a planet teach us about what helped a planet as unique as Earth come to be, but it could potentially reinforce or completely topple our understanding of life.

But planets aren’t the only celestial bodies that offer so much potential, which is why a team based at Harvard University has launched the Hunt for Exomoons with Kepler (HEK) project to find moons that might support life.

For sci-fi movie buffs, the search for habitable moons should come as no surprise, thanks to fictional worlds such as Star Wars’ Forest Moon of Endor, or Avatar’s Pandora.

So why have we only just started the search? Exomoons, or moons outside of our solar system, are so small that they’re very difficult to find, even for NASA’s specially designed space observatory Kepler.

As a result the HEK team is marrying the Kepler telescope to the power of NASA’s SGI ICE supercomputer, Pleiades, to sort through the data and possibly uncover some habitable moons along the way.

Led by David Kipping of the Harvard-Smithsonian Center for Astrophysics, the astronomers developed a unique computational method based on an in-house LUNA light curve modeling algorithm and a massively parallel sampling algorithm called MultiNest. The combination of algorithms facilitated the simulation of billions of alignments between stars, planets and moons. The team compares these findings with Kepler data to identify any matches.
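Neither LUNA nor the team's full pipeline is reproduced here, so the following is only a toy illustration of the underlying idea: generate a model transit light curve, then score it against observed flux. The box-shaped transit, noise level, and chi-square scoring are simplifications chosen for the sketch; the real analysis fits physically detailed light curves and computes Bayesian evidences with MultiNest.

```python
# Toy illustration of model-vs-data comparison for transit photometry.
# A real exomoon search fits far richer models; this shows only the comparison step.
import numpy as np

def box_transit(time, t0, duration, depth):
    """Simplest possible transit model: a box-shaped dip in normalized flux."""
    flux = np.ones_like(time)
    in_transit = np.abs(time - t0) < duration / 2
    flux[in_transit] -= depth
    return flux

rng = np.random.default_rng(0)
time = np.linspace(0.0, 1.0, 500)                      # days, arbitrary window
truth = box_transit(time, t0=0.5, duration=0.1, depth=0.01)
observed = truth + rng.normal(0.0, 0.002, time.size)   # synthetic "Kepler-like" noise

# Score two candidate models against the synthetic observations.
models = {
    "planet transit": box_transit(time, 0.5, 0.1, 0.01),
    "no transit": np.ones_like(time),
}
for label, model in models.items():
    chi2 = np.sum(((observed - model) / 0.002) ** 2)
    print(f"{label}: reduced chi-square = {chi2 / time.size:.2f}")
```

In the real search the model also includes a moon's contribution, and billions of such comparisons are marginalized over orbital parameters, which is where Pleiades' processor hours go.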

Already Kepler has found approximately 400 possible exomoon candidates. And while Kipping and his team have investigated 56 of these, surveying the remaining 340 over the next two years will consume roughly 10 million processor hours on Pleiades. The petascale supercomputer is a key enabler for the project, which would have taken nearly a decade to complete on smaller machines, according to the research brief.

Beyond the quest for celestial bodies that could be habitable, the investigation will provide the additional benefit of giving scientists a much greater sense of how frequently moons appear in our galaxy.

NASA Discovers Eight New Planets in 'Goldilocks' Zone (January 8, 2015)

This week, astronomers announced they’ve found eight new planets that could be “just right” for supporting human life. These planets, found in the “Goldilocks” zone of their respective stars, are of so much interest because their orbits put them at a distance where liquid water, the basis of life on Earth, could naturally occur.

While eight planets might not seem like much, the discovery doubles the number of small planets considered to be potentially habitable. Among these eight, the team identified two that stand out as being more similar to Earth than any other exoplanet they’ve studied to date.

What sets the two planets – Kepler-438b and Kepler-442b – apart is the amount of sunlight they receive. Too much light and the water evaporates into steam. Too little and the water freezes.

Kepler-438b receives almost one and a half times the amount of light we get on Earth, and as a result, researchers estimate it has a 70 percent chance of being in the habitable zone. Venus, by comparison, the hottest planet in our solar system, receives only about twice the light that Earth does.

Most exciting is Kepler-442b, which gets two-thirds as much light as Earth; it has a 97 percent chance of being in the Goldilocks zone.
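Those "amount of light" figures are just the inverse-square law applied to each planet's orbit: relative flux scales with the host star's luminosity divided by the square of the orbital distance. The star and orbit values below are invented for illustration, not the measured parameters of Kepler-438b or Kepler-442b:

```python
# Inverse-square-law illustration of insolation relative to Earth.
# F_rel = (L_star / L_sun) / (a / 1 AU)^2
# The example star/orbit values are hypothetical, chosen only to illustrate.
def relative_flux(luminosity_solar, semimajor_axis_au):
    return luminosity_solar / semimajor_axis_au ** 2

examples = {
    "Earth (by definition)": (1.0, 1.0),
    "dim star, tight orbit": (0.04, 0.17),
    "Sun-like star, wide orbit": (1.0, 1.25),
}
for label, (lum, a) in examples.items():
    print(f"{label}: {relative_flux(lum, a):.2f}x Earth's insolation")
```

A planet landing near 1.0 on that scale, like the candidates described above, is the textbook picture of the Goldilocks zone.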

“We don’t know for sure whether any of the planets in our sample are truly habitable,” says study co-author David Kipping in a press release for the Harvard-Smithsonian Center for Astrophysics (CfA). “All we can say is that they’re promising candidates.”

The team looked at planetary candidates first identified by NASA’s Kepler mission. Normally astronomers would confirm the bodies were planets by measuring their mass, but because the candidates were so small the team validated them by using a computer program called BLENDER – the same method responsible for some of Kepler’s most noteworthy discoveries, including the two planets found to be the same size as Earth that also orbit a Sun-like star.

BLENDER was developed by CfA’s Guillermo Torres and Francois Fressin, and runs on the Pleiades supercomputer at NASA Ames, which is currently ranked as the eleventh-fastest system on November’s TOP500 list. Pleiades sports a LINPACK rating of 3.38 petaflops and a peak performance of 4.49 petaflops following the addition of 15 SGI ICE X racks this past October. The ultimate goal of the SGI-NASA partnership is to push Pleiades’ peak capacity to 10 petaflops.

As for the BLENDER analysis, after its completion the team spent another year gathering follow-up data from high-resolution spectroscopy, adaptive optics imaging, and speckle interferometry to better understand the systems that BLENDER identified, but because the newly discovered planets are so far away, additional observations would be a challenge. Kepler-438b is found 470 light-years from Earth while Kepler-442b is 1,100 light-years away.

“Each result from the planet-hunting Kepler mission’s treasure trove of data takes us another step closer to answering the question of whether we are alone in the Universe,” NASA associate administrator John Grunsfeld reported in a press release. “The Kepler team and its science community continue to produce impressive results with the data from this venerable explorer.”

NASA Debuts Stunning CO2 Visualization (November 25, 2014)

In keeping with the SC spirit of HPC matters, we wanted to share another amazing example of supercomputing in action. Last week, NASA officials released the first ever ultra-high-resolution computer model of global atmospheric carbon dioxide. The simulation, which can be seen below, depicts the puffs and swirls of carbon dioxide as it circumnavigates the globe. While we wrote about the hardware that enabled the project previously, the simulation and resultant visualization merit further attention.

Solid data for ground-level carbon dioxide measurements goes back decades, but it was only in July that NASA began tracking global space-based carbon levels, thanks to the Orbiting Carbon Observatory-2 (OCO-2) satellite, the first NASA satellite mission to provide a global view of carbon dioxide. The new computer model, called GEOS-5, was created by scientists at NASA Goddard’s Global Modeling and Assimilation Office. It runs at a resolution that is 64 times greater than that of typical global climate models. The resultant visualization, part of a simulation called “Nature Run,” brings this model to life in a way that is as breath-taking as it is shocking.

As explained by NASA, Nature Run is loaded with data on atmospheric conditions and global greenhouse gas emissions from both natural and man-made sources. The full Nature Run simulation covered two years, from May 2005 to June 2007, running on the NASA Center for Climate Simulation’s Discover supercomputer cluster at Goddard Space Flight Center. It produced nearly four petabytes of data and took 75 days to complete. The project is key to advancing scientific understanding of climate change and the behavior of carbon dioxide, which reached the critical 400 parts per million threshold this year.
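Those output figures imply a hefty sustained write rate. The quick check below is pure arithmetic on the numbers quoted above, not anything from NASA's logs:

```python
# Sustained output rate implied by the Nature Run figures quoted above.
petabytes = 4
days = 75

terabytes_per_day = petabytes * 1000 / days
gigabytes_per_second = petabytes * 1e6 / (days * 24 * 3600)
print(f"~{terabytes_per_day:.0f} TB written per day")   # ~53 TB/day
print(f"~{gigabytes_per_second:.2f} GB/s sustained")    # ~0.62 GB/s
```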

“The visualization compresses one year of data into a few minutes,” narrates Bill Putman, lead scientist on the project from NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “Carbon dioxide is the most important greenhouse gas affected by human activity. About half of the carbon dioxide emitted from fossil fuel combustion remains in the air, while the other half is absorbed by natural land and ocean reservoirs.

“In the Northern hemisphere, we see the highest concentrations are focused around major emissions sources over North America, Europe and Asia. Notice how the gas doesn’t stay in one place, it’s controlled by large scale weather patterns within the global circulation. During spring and summer in the northern hemisphere, plants absorb a substantial amount of carbon through photosynthesis, thus removing some of the gas from the atmosphere. We see this change in the model as the red and purple colors begin to fade.”

“OCO-2 observations and atmospheric models like GEOS-5 will work closely together to better understand both human emissions and natural fluxes of carbon dioxide,” continues Putman. “This will help guide climate models toward more reliable predictions of future conditions across the globe.”

Aside from these stunning visualizations, NASA’s Goddard scientists are also releasing a robust version of Nature Run to the scientific community. Both the model and the visualization were demoed at the SC14 supercomputing conference last week in New Orleans.

Why HPC Matters (November 19, 2014)

When the SC14 show floor opened in New Orleans Monday night, signage everywhere proclaimed this year’s theme: HPC Matters. The new program, first announced at last year’s show in Denver, is all about broader engagement. It’s about getting the word out to the public – to policymakers, educators and regular people – about the role HPC plays in helping humanity solve its hardest problems. But it’s also a reminder for this sometimes niche-oriented community to take stock of their collective accomplishments.

HPC may be daunting to the uninitiated, but it’s touching ordinary folks in ways they may not even realize. To drive home this point, the SC committee welcomed Dr. Eng Lim Goh, senior vice president and CTO of SGI, and Dr. Piyush Mehrotra, chief of the NASA Advanced Supercomputing (NAS) Division, to present at the very first HPC Matters plenary, in tandem with the event launch on Monday night. In keeping with the theme of community, the plenary was open and free to the general public.

In the words of HPC Matters Chair Wilfred Pinfold who spoke at the event, “the plenary session marks the start of a new communications campaign to highlight the extraordinary value that investments in computational simulation and modeling bring to every man, woman and child on the planet.”

The compelling presentation shed light on some of the most profound HPC use cases of our time, spanning nearly every aspect of society and ranging from advanced manufacturing to disaster warning systems to improving care for cancer patients. Dr. Goh makes the case that whether it’s meeting basic needs, reducing hardships, promoting industry or answering the profound questions of the universe, HPC is there.

Life on Earth

It is a basic fact that without potable water, people can’t survive. Despite over 70 percent of the earth’s surface being covered in water, 95 percent of it is off-limits for consumption due to its salt content. However, if you have limitless power, you can have a water desalinization plant on every coast, extracting salt from water and generating fresh water for all, says Dr. Goh.

Getting this unlimited power is the promise of fusion science, which despite decades of investment is not yet viable. ITER is one of the main projects working to create the energy of the sun on earth, and Goh believes they are getting close. Key to the technology is a magnetic confinement device called a tokamak. Currently, the reaction inside the tokamak cannot be sustained for long before it falls apart. To become practical, the reaction will need to be sustained for days. It’s a turbulence problem that supercomputing is getting closer to solving.

Moving on to the field of health care, Goh recounts examples of how HPC is being used to diagnose and treat cancer, and also to improve the safety of treatments. Researchers at Massachusetts General Hospital and Harvard Medical School, for example, have developed practical strategies for reducing the radiation dose associated with CT and PET scans.

“CT scans are useful but the radiation is high,” relates Goh. “The goal of MGH/Harvard Medical is to try and reduce the radiation dosage that you get from a CT scanner. 3 millisievert is what we get normally in one year on most places on earth. You just go for one CT scan and you will reach that limit very quickly.”

Positron Emission Tomography (PET) scans show which tissue is consuming the most energy, thereby highlighting the area of cancer, but to do that, you need to inject radiated glucose. A researcher at Harvard Medical is experimenting with using lower radiation levels. The images come back grainy, but by applying supercomputing, he is able to extract the low signal from the high noise.

Wider access to compute power is having a democratizing effect that is leading to some interesting use cases far from HPC’s roots. The United States Postal Service, for example, is using supercomputing and scanning devices to sort through half a billion pieces of mail a day to ensure that the postage is correct and authentic. Supercomputing also left its mark on the stock exchange. After the May 6, 2010, mega-glitch, aka the Flash Crash, caused the Dow Jones Industrial Average to plummet by about 10 percent, only to bounce right back, a researcher from the University of Illinois at Urbana-Champaign collected two years of data and put it on two supercomputers, one at Pittsburgh and one at a DOE facility. The work uncovered a source of market manipulation that prompted the SEC to enact more transparent reporting requirements.

At the same time as HPC grows into new markets and segments, the traditional application areas are as relevant and vital as ever. High-resolution global climate models, for example, continue to push the supercomputing envelope. Goh spotlights some of the main findings of these models and the work of the Intergovernmental Panel on Climate Change (IPCC). Even the most modest carbon emission scenario shows a temperature rise of 2 degrees, he says, which translates into a 2-3 foot sea-level rise. This may not sound like much to the layperson, but a rendering shows how severe the result would be.

Rounding out the session, Goh and his NASA colleague describe some of the remarkable space research that is enabled by HPC, including the search for exoplanets as part of the Kepler project. NASA, ESA, CERN and others are also doing awe-inspiring work peering into the deepest, darkest recesses of the cosmos, in an attempt to reveal a time before light. “How do you peer back even further?” asks Goh. “One way is to simulate that far back, and that’s being done by the Large Hadron Collider.”

In closing, Goh issues this challenge:

“For those who are HPC experts, feel united in this, for those who are new to HPC, leverage it, make a difference with it. And because of that, we the HPC experts must remember this: it is very important to convince the few to get your funding, we must also work very hard to convince and delight the many, through our simplified explanation of why HPC matters, so we can get more people using HPC together.”

Expect to see more of the HPC Matters program through 2015 and beyond and if you weren’t able to attend the inaugural event, look for the lecture on YouTube in the coming weeks.