Two years ago, a 9.0-magnitude earthquake off the east coast of Japan killed more than 15,000 people and caused over $200 billion worth of damage. A year earlier, more than 200,000 died from a quake in Haiti. In 2004, a massive temblor in Indonesia killed an estimated 230,000.

These disasters hit close to home for those of us who live in California, where the San Andreas Fault runs for 800 miles.

Cui's work is part of an effort coordinated by the Southern California Earthquake Center (SCEC), dubbed CyberShake 3.0, to create a new statewide seismic hazard map using 3D waveform modeling. The map will improve earthquake forecasts and help engineers design safer buildings and retrofit existing high-risk structures.

The team plans to reach their goal by running large scale simulations on GPU-accelerated supercomputers such as Oak Ridge National Laboratory's Titan and Georgia Tech's Keeneland.

The image shows a snapshot of ground motion of the 2008 magnitude 5.4 Chino Hills earthquake in an east-to-west direction; the red-yellow and green-blue colors depict the amplitude of shaking. The simulation indicates that small-scale heterogeneities (causing the highly irregular pattern of shaking in the image) may significantly affect ground motion in geologic basins. Simulation by Efecan Poyraz/UC San Diego and Kim Olsen/SDSU. Visualization by Efecan Poyraz; map image courtesy of Google.

Petascale Performance on Titan Supercomputer

To meet the needs of the CyberShake 3.0 project, Cui realized his team would need 750 million CPU hours on a traditional CPU-based supercomputer, costing over $800,000 in power alone to support the simulations. That's when they turned to GPUs for help.

AWP-ODC, the research team's primary seismic application, runs more than 5x faster on GPUs, allowing researchers to pursue insights that were previously out of reach. At the same time, the GPU runs would save over $600,000 in power costs for their simulations.
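A back-of-the-envelope check of those numbers: if the GPU runs finish the same work more than 5x faster at a comparable power draw, the power bill scales down roughly with runtime. This is a rough sketch using only the figures quoted above; the assumption of comparable power draw is ours, not the article's.

```python
# Power-cost comparison using the figures quoted in the article.
# Assumption (ours): GPU and CPU systems draw comparable power, so the
# power bill scales roughly with wall-clock runtime.

cpu_hours = 750e6          # CPU hours needed for CyberShake 3.0 on a CPU-only machine
cpu_power_cost = 800_000   # approximate power cost (USD) quoted for those hours
speedup = 5.0              # AWP-ODC runs more than 5x faster on GPUs

gpu_power_cost = cpu_power_cost / speedup   # runtime, and thus energy, drops ~5x
savings = cpu_power_cost - gpu_power_cost

print(f"Estimated GPU power cost: ${gpu_power_cost:,.0f}")
print(f"Estimated savings:        ${savings:,.0f}")
```

The estimated savings come out to roughly $640,000, consistent with the "over $600,000" figure above.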

Less than a month ago, Cui's team achieved over one petaflop of performance running on over 8,000 GPUs on the Titan supercomputer, shattering their previous record of 220 teraflops of sustained performance on Oak Ridge's Jaguar supercomputer. The video below shows the results of their previous work on Jaguar.

Faster Results Lead to Safer Buildings, Saving Lives

In the past, researchers were limited to simulations that required less computation at lower wave frequencies, which produce shaking that feels like a 'roller-coaster' motion. While lower-frequency simulations are useful for predicting how high-rise buildings will respond in a quake, more common low-rise structures suffer more damage from higher-frequency shaking, which feels like a series of sudden jolts.

But high-frequency simulations demand significantly more computation; so much more that they are only now becoming feasible with GPUs. New scenarios can run on supercomputers to show how a broader range of buildings will respond, particularly the low-rise structures that most building engineers care about.

The goal is to help engineers design safer buildings in California by producing U.S. Geological Survey-regulated seismic forecast data products. The hazard map will ultimately offer details on the impact of earthquakes at specific building sites, helping engineers design new buildings or retrofit existing structures in high-risk areas.

While we may not be able to prevent earthquakes, thanks to Cui, his SCEC collaborators, and GPUs, we now have more computing power than ever to provide better seismic hazard assessments, inform safer California building codes, and prepare for the 'big one.'

If you are using GPUs to do science, we'd love to hear from you in the comment box below.