Nuclear failure may show limits of computers

View inside one of the San Onofre nuclear power plant's steam generators with the steam dome cut off, looking at the top of the tube bundle. Tubes begin at the end opposite this viewpoint, carry hot reactor water up and around the U-bend shown in this photo, then return to the other end, where the water flows back to the reactor. Thus the tubes are heated. A second supply of water, which never goes to the reactor, comes into contact with the hot outside of these tubes and flashes to steam, which spins the turbine blades and the electricity generator.
Southern California Edison

I’ve been marveling at the ever-increasing power of computing most of my life, beginning when I wrote my first program for pay at 16, discovered Led Zeppelin and bought my first truly great car stereo.

But lately, as computer models have become the foundation for everything from drug design to engineering to predicting human behavior, I’m seeing signs that we’re relying too heavily on this particular technology. Some healthy perspective seems overdue.

Exhibit A is the San Onofre nuclear power plant in north San Diego County, which is being shuttered permanently after engineering errors crippled its steam generators.

Those mistakes will cost somebody $4.3 billion. Regulators will spend the next year or so deciding whether that somebody is you and me, or some combination of consumers, utility shareholders, manufacturers and a national insurance fund.

Engineers traced the cause to the failure of newly installed metal tubes that carried hot water from the nuclear reactors through the steam generators, transferring its heat to a second water supply. These tubes used a different alloy than the previous design.

But the new tubes vibrated too much. They rubbed against each other and metal supports, wearing out within months instead of 20 to 30 years.

A long report by Mitsubishi Heavy Industries, which manufactured the tube array, presented a comprehensive account of what went wrong. To vastly oversimplify, a computer model showed that vibration in one direction could be a problem, so the designers added some anti-vibration support bars.

But the extra supports didn’t really fix that problem, and vibration in a separate direction caused most of the failures. Designers missed this other kind of vibration completely, despite a thorough analysis of data from similar nuclear plants using similar tube arrays.

It all sounds very scientific, and very thorough. Except that it failed. There is some discussion in the report of whether a different computer model might have spotted the problem.

Here’s the part that amazes me: I could find no mention that anybody actually physically tested the new steam tube array. By testing, I mean good old-fashioned testing, in which engineers take their bright idea from the computer lab to the machine shop and see if it works, a process that usually involves trying to make it fail. If it doesn’t blow up, you’re good to go.

This kind of testing was once so standard that I literally didn’t believe the designers might have skipped this step. So, soon after the report was issued in March, I began asking officials at Mitsubishi whether it was possible they relied on computer analysis alone, without physical testing. Then I asked officials at Southern California Edison, the nuclear plant’s majority owner, which acted as lead designer on the generator retrofit project.

I’m still waiting for an answer. At first I was told that this information was “proprietary.” Now I’m hearing crickets.

Such hunkering down is understandable. Lawyers are gearing up for the mother of all blame games. The steam generator upgrade alone cost more than $600 million. Edison and San Diego Gas & Electric, which owns 20 percent of San Onofre, have a total of $4.3 billion in costs at risk.

And I also realize that computer modeling saves vast amounts of money. After all, you’d need a nuclear reactor to develop the heat and pressure required to test the complete San Onofre steam tube assembly. Even manufacturing 10 percent of it and shoving high-pressure steam through it to check for vibration was probably a daunting prospect.

To be clear, I’m not saying such testing wasn’t done, although it seems self-evident that proper testing would have found the vibration. Edison has argued that the new tubes were substantially similar to the old ones. And yet the tubes failed.

To make sure I wasn’t crazy, I called a nuclear engineer, a physicist and an aerospace engineer. They all said physical testing and experimentation were still considered standard and necessary for any final version of a new product. For example, software still struggles with the fluid dynamics at the surface of a wing.

But they all said they are seeing less actual testing as computers get more powerful and the software keeps getting better. Modeling reveals what won’t work, sparing engineers the need to build many prototypes. Yet final physical testing is still standard practice.