We Still Can’t Predict Earthquakes

A collapsed portion of the San Francisco–Oakland Bay Bridge is shown following the 1989 earthquake. (Reuters)

In this Oct. 19, 1989, file photo, workers check the damage to Interstate 880 in Oakland, Calif., after it collapsed during the Loma Prieta earthquake two days earlier. (Paul Sakuma / AP)

Twenty-five years ago, millions of baseball fans around the country turned on their televisions expecting to watch a World Series game — and saw live footage of a deadly earthquake instead. The San Francisco Giants and the Oakland A’s, and the 62,000 fans watching them in Candlestick Park in San Francisco, felt the ground under them shake. The baseball commissioner thought it was a jet flying overhead. Oakland’s manager thought the crowd was stomping its feet. Then a section of the right-field stands split apart, opening a gap of a few inches. Players ran to gather their family members and get out of the ballpark.

The earthquake killed 63 people — and might have killed more had there been no game on TV keeping people off area roads — and the series didn’t resume for 10 days. (The earthquake is the subject of the “30 for 30” film “The Day The Series Stopped,” airing Tuesday night on ESPN.)

There have been deadlier earthquakes, and costlier ones, but few that surprised more people the moment they occurred. The millions of television viewers didn’t get what they expected because scientists couldn’t predict when an earthquake would strike.

The Loma Prieta earthquake helped fuel efforts to change that, but not much progress has been made. A decade after the quake, Robert J. Geller, professor of earth and planetary science at the University of Tokyo’s Graduate School of Science, wrote that earthquake prediction “seems to be the alchemy of our times.” Seismologists have mostly forsaken their quest for precise predictions, turning instead to more modest projects like telling the public when the probability of an earthquake has risen to 1 percent from 0.01 percent. They can’t predict whether or when they’ll be able to do any better.

Today, earthquake scientists in the United States and several other countries are working on producing “seismic weather reports” — a phrase Thomas H. Jordan, director of the Southern California Earthquake Center, uses to describe a continuously updated, local estimate of the probability of an earthquake. Just as you can look up the probability of rain in your area, the seismic forecast would let you look up the probability of an earthquake — but it wouldn’t be nearly as accurate.

These forecasts won’t tell people precisely where and when an earthquake will strike, or what magnitude it will be. Instead, the forecasts will show whether the baseline probability of an earthquake has risen.
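What a rising baseline means in numbers can be sketched with a standard rate-to-probability conversion. This is a generic illustration assuming earthquakes arrive as a Poisson process, not the forecasters' actual model, and the rates below are made up:

```python
import math

def quake_probability(rate_per_day, window_days):
    """Chance of at least one quake in the window, assuming quakes
    arrive as a Poisson process at the given average daily rate."""
    return 1.0 - math.exp(-rate_per_day * window_days)

# Made-up rates: a 100x jump in the daily rate (say, after nearby
# small quakes) lifts the one-week probability from roughly 0.007
# percent to roughly 0.7 percent -- still low, but far above baseline.
baseline = quake_probability(1e-5, 7)
elevated = quake_probability(1e-3, 7)
```

For small rates the probability is nearly proportional to the rate, which is why a forecast can honestly report "100 times likelier than usual" while the absolute chance stays around 1 percent.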

But even that relatively modest endeavor faces many challenges, including computing power, communicating risk to the public, and swaying skeptics within seismology. “Crying earthquake (wolf) is a potent way of blunting earthquake awareness and preparedness,” Kelin Wang and Garry Rogers wrote in the journal Seismological Research Letters earlier this year. Wang, a research scientist at the Geological Survey of Canada’s Pacific Geoscience Centre, thinks that earthquake forecasting is a promising area of research, but the trick is translating those forecasts into something that won’t be counterproductive when it reaches the public.

Jordan is less concerned about spooking the public. In a response to Wang, Rogers, and other critics in Seismological Research Letters last month, Jordan said that Americans have experience processing low probabilities for catastrophic events. We’ve grown accustomed to hearing about heightened awareness of terrorist attacks, and wildfire warnings have become a feature of Californian life.

Ned Field, a seismologist for the U.S. Geological Survey, thinks there’s an audience for short-term forecasts. Prospective buyers of earthquake insurance, homeowners considering whether to leave town, and owners weighing whether to build a basement could all benefit. (The USGS is already using an app to test how to spread the word in case of elevated earthquake risk in southern California.) He envisions marrying short-term forecasts with the agency’s model that estimates an earthquake’s economic costs and fatalities. That would give the public a sense not only of how likely an earthquake is, but also how much damage it could create if it occurs.

To work, the sorts of forecasts Jordan and the USGS have in mind must avoid being overly definite, but they can’t be too vague, either. As my colleague Nate Silver wrote in his book “The Signal and the Noise,” just before a deadly Italian earthquake in 2009, scientific technician Giampaolo Giuliani spoke as if earthquakes could be predicted. He said an earthquake was coming, and based his prediction on an unproven technique of measuring radon-gas emissions. Meanwhile, more reputable earthquake scientists spoke from the other extreme, taking the view that earthquakes were no more or less likely at any given time. Both were wrong: Giuliani’s forecast missed the time, place and magnitude of the tremors that killed 309 people in L’Aquila. But the scientists were wrong, too. They discounted the significance of small earthquakes in the area, which in retrospect were foreshocks of the bigger earthquake.

The middle ground is the one the USGS is seeking: to tell the public when seismologists know there’s an elevated risk, without overstating their confidence in the prediction.1

The agency has tried this before. In 2005, it introduced on its website a tool that allowed people to check the chances of an earthquake in their area. But the code powering the tool kept crashing, Field said, and the USGS removed it from its website in 2010.

The models used by Jordan and other scientists today analyze recent seismic activity to predict the probability of future earthquakes. Jordan and Field are focusing on a model derived from something called ETAS, or Epidemic-Type Aftershock Sequence Model, which projects the proliferation of tremors in the way a disease might spread.2
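The epidemic analogy can be made concrete with a toy branching simulation: background quakes arrive at random, and every quake, like an infected person, can "infect" the near future with aftershocks, which can trigger aftershocks of their own. This is an illustrative sketch of the ETAS idea, not the SCEC or USGS implementation, and every parameter value here is an invented placeholder:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method for Poisson samples (fine for modest rates)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_etas(mu=0.2, days=365.0, k=0.05, alpha=0.8,
                  p=1.2, c=0.05, b=1.0, m_min=3.0, seed=0):
    """Toy Epidemic-Type Aftershock Sequence simulation.

    Background quakes arrive as a Poisson process (`mu` per day); each
    quake of magnitude m then triggers on average k * 10**(alpha*(m - m_min))
    direct aftershocks, with delays following an Omori-style power-law
    decay. Returns (time_in_days, magnitude) pairs, sorted by time.
    """
    rng = random.Random(seed)

    def magnitude():
        # Gutenberg-Richter: each extra unit of magnitude is 10**b rarer
        return m_min - math.log10(1.0 - rng.random()) / b

    def omori_delay():
        # Inverse-CDF sample of the (t + c)**(-p) aftershock decay (p > 1)
        return c * ((1.0 - rng.random()) ** (-1.0 / (p - 1.0)) - 1.0)

    # Seed the "epidemic" with background events.
    events, t = [], 0.0
    while True:
        t += rng.expovariate(mu)
        if t > days:
            break
        events.append((t, magnitude()))

    # Each event, background or aftershock, spawns its own aftershocks.
    queue = list(events)
    while queue:
        t0, m = queue.pop()
        expected = k * 10.0 ** (alpha * (m - m_min))
        for _ in range(sample_poisson(expected, rng)):
            t_child = t0 + omori_delay()
            if t_child <= days:
                child = (t_child, magnitude())
                events.append(child)
                queue.append(child)
    return sorted(events)
```

The key forecasting consequence is visible in the structure: because bigger quakes trigger more offspring, a burst of recent seismicity raises the modeled rate of further quakes, which is exactly the short-term signal a "seismic weather report" would publish.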

There are many other models besides ETAS. Just how many depends on how you count. Jordan estimates there are 400 models worldwide vying for pre-eminence and being tested by the Collaboratory for the Study of Earthquake Predictability, which he directs. Field says the USGS alone is considering 5,000 models. One model can differ from another merely in how it represents the structure of the earth’s crust.

In theory, all these models should be competing in a test of which best forecasts seismic activity. In reality, there just aren’t that many high-magnitude earthquakes in the world that can serve as tests. That’s good news for anyone living on a fault line, but not for seismologists, who would like to know whether their models are correctly calibrated to pick up the greater risk just before a major earthquake. So instead the tests generally focus on whether the models correctly predict the frequency of lower-magnitude earthquakes. Globally, there is a reliable relationship between the rate at which these occur and the rate at which major earthquakes occur.3 But in any given spot, that relationship may not apply — and not all models can be tested globally because not all places have the level of measurement of crust structure and seismic activity that, say, California does.4 Also, a model tuned to pick up small quakes may not pick up bigger ones.
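The reliable relationship mentioned above is the Gutenberg-Richter law: the factor-of-ten drop in frequency per unit of magnitude described in the footnotes corresponds to a b-value near 1. A minimal sketch (the reference magnitude and b-value here are illustrative defaults):

```python
def relative_rate(m, m_ref=5.0, b=1.0):
    """Gutenberg-Richter scaling: how often magnitude-m quakes occur
    relative to magnitude-m_ref quakes. Globally b is close to 1, so
    each extra unit of magnitude means roughly 10 times fewer quakes."""
    return 10.0 ** (-b * (m - m_ref))

# A magnitude-7 quake is about 100 times rarer than a magnitude-5:
print(relative_rate(7.0))  # → 0.01
```

This is why frequent small quakes can serve as a calibration check for models of rare big ones — globally. The article's caveat stands: at any single spot, the local b-value can deviate from the global average.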

With so many competing models, there is another risk: The one that does best in tests might just be getting lucky.

Even the best model wouldn’t predict about 50 percent of the big earthquakes around the world. The quakes that go unpredicted would be the ones that seemingly come out of nowhere, without foreshocks. For instance, it’s unlikely the best model would have warned residents of Napa County of the greater risk of an earthquake before a big one struck this past August, killing one person and causing an estimated $1 billion in damage. And it’s hard to say if any model could have foreseen the Loma Prieta earthquake, according to Jordan, because not all the data the model uses to produce forecasts was available in 1989.

Jordan calls the hunt for a more precise earthquake prediction “a silver-bullet approach,” trying to find “some magic signal.” What he and his collaborators — there are more than 50 — are doing today “is very different … There’s nothing magic about it,” he said.

One prediction the forecasters are comfortable making is that we won’t get more definite predictions anytime soon — if ever. “I would not be at all surprised if earthquakes are just practically, inherently unpredictable,” Field said. “You never know; some silver bullet could come along and prove useful.”

Footnotes

1. Italy and New Zealand are also working to develop systems for continuously updating and publishing earthquake probabilities.

2. Their current work on short-term forecasting, which they’re readying for publication, is the third phase of a recent California research collaboration, called the Uniform California Earthquake Rupture Forecast (UCERF). The first phase was to establish the underlying frequency of earthquakes of various magnitudes. The second was to calculate the probability of earthquakes over the next 50 years. Both of these build on two earlier versions of UCERF. The latest version finds a greater chance of small and very big earthquakes in California over the next 50 years, but a smaller chance of moderately big ones (magnitude 6.7 and above, short of the very biggest).

3. Typically, for every increase of 1.0 on the magnitude scale, the rate of occurrence falls by a factor of 10.

4. “In California the fault network is very well-known, much better than in any other place,” said Warner Marzocchi, director of research for the seismology and tectonophysics branch of Italy’s National Institute of Geophysics and Volcanology in Rome.