As with the disasters themselves, this calendric coincidence was created by the confluence of independent trends and conditions that conspired to set the stage for disaster. But in each case, these impersonal forces were merely the backdrop to the flawed human decisions that were the immediate causes.

It was at this stage -- the choices made or not made by human beings -- that each of these three disasters could have been averted. That the NASA space team failed to do so not once or even twice but three times is the true disaster. None of these people needed to die; their deaths taught NASA nothing that it shouldn't already have known. And that's the true tragedy of these three events.

Out of sight, out of mind
Spaceflight has its own inherent hazards, and if they are not respected, any of a number of factors can kill. Recognizing this, engineers install backup hardware and escape systems and build in allowances for uncertainties -- all in an attempt to keep such external hazards at bay.

But the internal hazards -- what investigation boards have called the "flawed safety culture" -- have proven much more insidious. This is the realm of convenient assumptions, of complacency, of willfulness, of statistical superstition, of a false familiarity with an unblinking foe. It is a culture made possible by an all-too-human aversion to facing unpleasantness.

It has become easy to look away from these horrible space disasters -- and I never call them "accidents," a term that relieves the people involved on the ground of ultimate responsibility.

NASA prefers to literally bury the wreckage in underground concrete crypts, to shove the investigation reports onto another bookshelf, and to allocate one day per year to honoring the dead while ignoring what killed them the other 364 days.

But spaceflight is not easy, and that particular "easy way" is a roadmap to doom. The ugly consequences of such choices are especially hard to contemplate when the chain of cause and effect leads right back into one's own heart and mind.

Grim memorial
A graphic example of this "aversion of eyes" can be found in the Challenger memorial services of the past, and the historical summaries now widely published. The heart of the matter is the clash between “73” and “207”.

On the first few anniversaries of the 1986 catastrophe, on the appropriate day and at the appropriate hour, NASA workers were invited to gather for a period of silence. In Houston, where I worked, it was at the center's main flagpole.

According to the official NASA description of the ceremony, this was to last 73 seconds, “the duration of Challenger’s flight”. That’s what press accounts said, too -- look it up on the Internet, where references almost always say something like “The space shuttle Challenger explodes 73 seconds after launch, killing all seven astronauts aboard”.

But we were engineers and operators, not managers and media flacks, and we knew better. Challenger had been in flight for 73 seconds when it broke apart, and the cabin -- with its crew still alive but presumably (and mercifully) soon unconscious from anoxia -- continued its upwards, then downwards arc for another 134 seconds. This was more than two whole minutes of additional flight before the cabin hit the water, killing the astronauts.

This is the reality that was all too easy for most people to turn away from. After the 73 seconds of silence, as other space workers shuffled back to their desks, their duties to the dead ostensibly done, I and a few friends would continue to stand in silence for the true flight duration, the true last seconds of the astronauts’ lives, 207 seconds in all. We had had enough of comfortable make-believe. And so these days, whenever some space official who ought to know (and say) better uses the phrase “73 seconds”, you have one more unintentionally self-confessed averter of eyes.

Early in the 1990s, some NASA managers in Houston began planning a visible memorial to the Challenger disaster, and to Apollo 1 as well. It was to be a set of displays -- including actual wreckage from both spacecraft, that people could run their fingers over -- on a wide landing in the stairwell leading up to the main Flight Control Room in the Mission Control Center. Nothing ever came of it, alas -- those involved proved out of step with the new Dan Goldin regime and were shuffled off into retirement. The debris from the disasters remained safely hidden away, comfortably out of sight and -- as experience would show -- tragically out of mind.

Learning wrong lessons
In space as on Earth, bitter experience teaches that a good "safety culture" decays from a variety of causes. There is the lulling of anxiety through repeated success, or the loss of respect (or fear) for past experience. And sometimes it comes from elevating other measures of goodness above safety.

The NASA of the mid-1990s was an agency where the political satisfaction of top management -- all the way up to the White House -- became the ultimate goal. This was evident in the approach to safety during the initial stages of the partnership with Russia aboard its Mir space station.

When asked about the hazards of dangerous fires aboard Mir, based on reports of a bad incident in late 1995 and a long series of earlier anecdotal incidents, NASA space station official James Nise replied in writing: “NASA is satisfied with the safety and reliability of Russian [on-board fire suppression] hardware.”

Little more than a year later, a fire nearly killed six men aboard the station (including one American), and the official in charge denied knowing about any of the earlier events: “Nobody ever told me about earlier fires on Mir,” astronaut Frank Culbertson, manager of the Shuttle-Mir Program, told a television news crew. Yet a subsequent internal NASA investigation found numerous documents in which engineers had expressed alarm over the fire hazard but had been rebuffed by their managers. “These issues are better raised before, not after a life-threatening event,” the report concluded ominously -- but nothing changed.

After the near-fatal fire, NASA again decided that it was diplomatically desirable to believe that Mir was safe. Prior to the next visit, its official conclusion was that “no new risks have been identified, and no problems are foreseen.” An official in Moscow told reporters, “It looks like we’ve gone through the darkest part and we’re headed toward the light,” and a headquarters official concurred: “We are very confident we are operating in a safe manner.” The man being sent to Mir, astronaut Michael Foale, believed it: “I’m not worried,” he told reporters. “The safety is perfectly assured.”

Then, when Foale and his spaceshipmates were very nearly killed by an air leak caused by a supply ship collision, the same officials agreed that the accident was a "good thing" because it taught them lessons about space safety. But what it seems to have taught them was that one could in fact screw up safety assessments again and again, and by dumb luck still not kill anybody. It seems to have taught NASA that they didn’t need to worry quite so much -- even when the worst happened, there was always some way out.

Real lessons
Looking back on the actual fatal disasters, clear patterns emerge. The common thread was a willingness to make comfortable assumptions in the known absence of hard data.

In the Apollo 1 disaster, the not-unreasonable use of pure oxygen in the cabin at low pressure had been compounded by simply overpressurizing with oxygen to sea-level pressure for the pre-launch test. Components and fire-fighting systems that would have been tolerable at the design pressures had never been tested at the higher pressures they were subjected to.

As for actual fires, the design had demanded that no sparks occur -- not that sparks that DID occur be containable. And a hatch that required ten minutes to open was based on the same convenient assumption: since nobody could think of ways a bad fire could start and spread, Mother Nature couldn't either.

For the Challenger disaster, numerous "scrubs" had led to schedule pressure and news media mockery. Two upcoming planetary missions had irrevocable launch dates and could not slip. Meanwhile, NASA's new administrator was on Capitol Hill meeting with congressmen that day.

When engineers said that the weather was colder than ever tested and trended "away from goodness," and that the brittle booster seals had never been tested under those conditions, their managers were told to "take off their engineering hats and put on management hats." The engineers were challenged to prove it was NOT safe to launch, and they had no data to do so.

Columbia pattern a familiar one
The sequence of events and decisions that doomed Columbia two years ago is a familiar litany. Foam shedding from the fuel tank during launch had become familiar, and had gouged the silica wing tiles, but this had never led to dangerous levels of damage during fiery reentries. As for the entirely different materials that lined the most severely heated regions, such as the wing leading edges and the nose, they had never been tested against foam impact -- it was just assumed they were even tougher than the silica tiles.

After Columbia blasted off and the tracking camera tapes showed the debris impact -- the largest ever, and one that seemed to hit an area that might well have included the wing leading edge -- all interest in making sure there hadn’t been damage was squelched. It was easier to ask for proof there HAD been damage, and lacking any, the easy assumption of goodness carried the day -- and denied the crew any chance of an emergency repair or rescue option.

Some technological endeavors seem to maintain an effective safety culture, even over decades, and NASA needs to become more like them. It must evolve beyond its "exceptionalism," the idea that it's the smartest team on the planet with nothing to learn from outsiders.

A good place to start is with the words of Admiral Hyman Rickover, father of the nuclear navy and founder of a safety culture with a remarkable record.

“Quality must be considered as embracing all factors which contribute to reliable and safe operation,” he wrote. “What is needed is an atmosphere, a subtle attitude, an uncompromising insistence on excellence, as well as a healthy pessimism in technical matters, a pessimism which offsets the normal human tendency to expect that everything will come out right and that no accident can be foreseen -- and forestalled -- before it happens.”