Hey, let’s spend $400 billion researching “the singularity”

Does the singularity actually have a 1 percent chance of happening?

Should the singularity arrive after all, this Cylon model will be the future home of my consciousness. (Image: Battlestar Galactica)

Someday soon, say tech optimists, humans might be able to upload their consciousness to machines. There it could live forever, be backed up in the cloud, replicated across the planet, and downloaded into new hardware whenever needed. Boosters call such a moment "the singularity," since it would represent a point beyond which the human race would be forever and unpredictably altered. Critics, on the other hand, just roll their eyes.

But if, by some miracle, humanity does manage to turn itself into (and/or build) a host of Cylons, that would be a Pretty Big Change, and things that create Pretty Big Changes should be studied. Even if studying them costs $150 billion?

That's the argument of Max Tegmark, an MIT physicist, writing for "big questions" site Edge.org. He's not convinced the singularity will arrive, and he's not convinced its arrival would even be a good thing. But he is convinced the singularity would have absolutely stunning consequences for humanity.

On one hand, it could potentially solve most of our problems, even mortality. It could also open up space, the final frontier: unshackled by the limitations of our human bodies, such advanced life could rise up and eventually make much of our observable universe come alive. On the other hand, it could destroy life as we know it and everything we care about...

Objectively, whoever or whatever controls this technology would rapidly become the world's wealthiest and most powerful, outsmarting all financial markets, out-inventing and out-patenting all human researchers, and out-manipulating all human leaders. Even if we humans nominally merge with such machines, we might have no guarantees whatsoever about the ultimate outcome, making it feel less like a merger and more like a hostile corporate takeover.

Subjectively, these machines wouldn't feel like we do. Would they feel anything at all? I believe that consciousness is the way information feels when being processed. I therefore think it's likely that they too would feel self-aware, and should be viewed not as mere lifeless machines but as conscious beings like us—but with a consciousness that subjectively feels quite different from ours.

And, if there's even a tiny chance that the singularity could arrive, he says, we had better get a research program going to think about the best ways to deal with the coming immortality/cyborg apocalypse/colonization of the universe. That research program may be expensive, however. Tegmark has a modest proposal:

[The singularity] could be the best or worst thing ever to happen to life as we know it, so if there's even a one percent chance that there'll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least one percent of our GDP studying the issue and deciding what to do about it. Yet we largely ignore it, and are curiously complacent about life as we know it getting transformed. What we should be worried about is that we're not worried.

Let's assume that "our GDP" here refers solely to the United States. In 2011, US gross domestic product hit approximately $15 trillion; one percent of that money would come to a whopping $150 billion. If the EU did its own singularity research at one percent of its GDP, that would add another $170 billion to the pot. Should China and other states contribute at similar levels, this singularity research project could approach the $400 billion range.
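To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The GDP figures are the approximate 2011 numbers cited above, and the "China and others" entry is my own assumption, chosen only to illustrate how the total could approach $400 billion:

    # Rough sketch of Tegmark's "one percent of GDP" proposal using the
    # approximate 2011 GDP figures cited above (in trillions of USD).
    # The "China and others" entry is an assumption for illustration only.
    gdp_2011_trillions = {
        "United States": 15.0,
        "European Union": 17.0,
        "China and others": 8.0,  # assumed
    }

    SHARE = 0.01  # Tegmark's proposed one percent

    total_billions = 0.0
    for region, gdp in gdp_2011_trillions.items():
        contribution = gdp * SHARE * 1000  # convert trillions to billions
        total_billions += contribution
        print(f"{region}: ${contribution:.0f} billion")

    print(f"Total: ${total_billions:.0f} billion")  # ~$400 billion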

That's a lot of cash. As to the question of whether a singularity research project would be worth the money, it would seem to depend on the likelihood of the singularity becoming a reality. Say, for the sake of argument, that we accept Tegmark's "one percent chance" threshold as the proper one—does the singularity have a greater than one percent chance of happening this century?
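One way to read that threshold (my gloss, not Tegmark's) is as a crude expected-value rule: if the stakes are valued at roughly a full year of GDP, then an event with probability p justifies spending about p times GDP to prepare for it:

    # My own gloss on the implicit expected-value logic, not from the article:
    # value the stakes at roughly one year of GDP, so an event with
    # probability p justifies spending about p * GDP on preparation.
    def proportionate_spend(p, gdp_billions):
        return p * gdp_billions

    # Tegmark's numbers: a 1% chance against ~$15,000B US GDP -> $150B.
    print(proportionate_spend(0.01, 15_000))  # 150.0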

As a perennial skeptic of most ideas that involve "uploading our consciousness" or "superhuman artificial intelligence," I'm more than a little doubtful. Bruce Sterling, the sci-fi author who also wrote the nonfiction classic The Hacker Crackdown, is with me.

"It's just not happening," Sterling wrote in his own Edge.org commentary. "All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to 'self-aware' machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s 'minds on nonbiological substrates' that might allegedly have the 'computational power of a human brain.' A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there."

On the other hand, if the singularity does arrive despite my skepticism, I've already picked out the machine I'd like to house my future consciousness: the Number Six Cylon model.

How many such events with a 1% chance of happening are there? More than 100? This kind of thinking came up in the discussions after Katrina and other low-probability disasters. If we spend billions to anticipate every possible disaster, we'll have no money left.

The problem with the idea of the singularity is that our fear of death is a useful motivator. Many of the people who have made the greatest impact on society have done so because they wanted to make their mark before they were gone. If there's no perceived "end date" on our existence, then there is no strong motivator to "get things done before it's too late."

But that is a problem for after the singularity; before the singularity, the fear of death is quite a good motivator to help usher it in. The very definition of the singularity precludes us from making many assumptions about what problems or issues might arise afterwards. Not that I think it is happening anytime soon, but as long as we are not killed off somehow and continue to advance technologically, there do not seem to be any physical laws that would prevent us from creating minds at least somewhat smarter than our own.

The singularity will happen. It's not an if, it's a when. The only thing that could prevent it is our own destruction.

That said, the whole "it's really just a copy" issue leads to many interesting ethical, moral, and philosophical questions. "Is there a soul?" and "what happens to it if you copy your consciousness into a machine?" are the most obvious from a metaphysical standpoint. From there, you also have questions about the morality of only some people having access to the technology (as was mentioned in the article), and the practical matters of what happens if everyone does.