In the 1950s, encountering a driverless car meant something very different from what it does today. The self-driving vehicles that engineers are now building promise to save hundreds of thousands of lives per decade in the United States. But last century, a car without a driver was a serious safety hazard.

Seventy years ago, a “driverless car” was really a runaway car. In 1943, for example, an out-of-control truck careened into a crowd of people in Stamford, Connecticut, then struck another car, sending it off on its own driverless path of destruction. Driverless cars had an ominous place in pop culture, too. In a 1960 episode of The Twilight Zone, a possessed Lagonda coupe revs to life and runs down its owner.

In the real world, driverless cars were a hazard because vehicles lacked robust safety features like reliable parking brakes and steering-wheel locks. It didn't take much for a parked car to be set into motion, whether by a collision or simply by gravity. Airbags hadn't yet been introduced and seat belts weren't standard; culturally, Americans didn't prize safety in their automobiles the way they would begin to in the 1980s and 1990s, which meant carmakers didn't prioritize offering such features.

Over time, as brakes improved, fewer cars entered dangerous states of driverlessness. And eventually, “driverless” took on a new meaning, one now associated with safety rather than danger. Google’s self-driving vehicles have logged more than 1 million miles of test driving, yet have never been responsible for an accident.

If self-driving cars eventually become ubiquitous, as so many technologists predict, the use of “driverless” will likely change again. Just as a “horseless carriage” became a “car,” a “driverless car” may eventually seem redundant.
Adrienne LaFrance is the editor of TheAtlantic.com. She was previously a senior editor and staff writer at The Atlantic.