Learning from Columbia


The panel investigating the loss of the space shuttle Columbia is expected to release its final report in late August.

NASA expects the report to be so harsh that the agency’s Administrator, Sean O’Keefe, told his employees in June that it will be “really ugly … a nasty piece of writing. I can see that coming already.” Another NASA official warned his colleagues that the report would “sting and insult us all down to our very core.”

Parts of the report have been previewed in a series of preliminary recommendations. For instance, the board has called on NASA to find a way to inspect and repair shuttles in space, and has recommended that NASA obtain better images of shuttles during liftoff (perhaps using ships or aircraft) and in orbit (using the military’s spy satellites).

Although absolute certainty is impossible, the direct cause of the Columbia disaster was probably a piece of foam that hit the shuttle 81 seconds after liftoff. This theory was bolstered by a July test that showed how a chunk of foam could punch a gaping hole into the shuttle’s wing material — a dramatic demonstration reminiscent of the Challenger investigation, when Nobel laureate physicist Richard Feynman dunked O-ring rubber into ice-water.

But the Columbia accident teaches different lessons than those learned in the wake of Challenger. In 1986, engineers knew about the O-ring problem, but bad management decisions meant their concerns went unheard. In 2003, engineers didn’t know that foam could break through the shuttle’s thermal panels because those panels were never tested for that possibility. NASA believed falling foam was not a significant danger to the shuttle, because that’s what the computer models showed — although there was no real proof.

It has become apparent during the course of the board’s investigation that NASA makes major assumptions about the shuttle based largely on computer simulations. During one press briefing, a board member complained about some figures he received: “It’s not real data at all. This is a simulation. And in fact, these measurements have actually never been made by NASA in the twenty-some years of the [shuttle] program.” The board’s chairman lamented that this problem arose “over and over again.”

There is a very important lesson here. Computer analysis saves money, and it is therefore used routinely by government and industry. But it’s no substitute for real tests. If a computer model is designed badly or doesn’t have the right inputs, then the results it produces won’t be correct — and decisions based on wrong results can endanger missions and lives. This conclusion applies not only to NASA, but to anyone who bases life-and-death decisions on computer models, like the Departments of Defense and Energy, which for the last decade have relied on simulations instead of underground tests for the maintenance of our country’s aging nuclear stockpile. Let’s learn the lesson of Columbia: when the stakes are high, real tests are needed to reveal unexpected problems.

The Editors of The New Atlantis, “Learning from Columbia,” The New Atlantis, Number 2, Summer 2003, p. 118.
