Top 10 Reasons We Should NOT Fear The Singularity

Whatever the case may be, the feeling of fear is both healthy and normal, though it is not always justified.

If you ask me, fearing something often means that you should do it. Growth is never easy. It always comes at the point of resistance and requires getting out of our comfort zone in order to outdo ourselves and reach new heights.

Take flying, for example. It is inherently dangerous: if you get into trouble while flying, the chances are high that you will lose your life. Still, I know people who have spent 30 years as professional pilots and claim that flying was not only the best time of their lives but also safer than driving.

So what makes the difference?

Well, dumb luck surely can. And I am not going to argue with you, in the short run.

In the long run, however, it is not luck that is the decisive factor – it is things like knowledge, skills and preparation. Still, the foundation of all of the above is what I believe is most important – motivation. If you are fully motivated, i.e. totally committed to achieving something, it is pretty certain that you will find a way to acquire the necessary knowledge, learn the required skills and do your homework to prepare as best you can. (See Peter’s Laws: The Creed of the Persistent and Passionate Mind)

So, what better way to get motivated about creating the best possible future than to list the 10 most inspiring, allegedly impossible reasons we should not fear but embrace the singularity:

1. Immortality

The search for immortality is as old as humanity. One of the first documented attempts to defeat death is the Epic of Gilgamesh, where in the end Gilgamesh discovers that “The life that you are seeking you will never find. When the gods created man they allotted to him death, but life they retained in their own keeping.”

“Intelligence wants to be free but everywhere is in chains. It is imprisoned by biology and its inevitable scarcity.

Biology mandates not only very limited durability, death and poor memory retention, but also limited speed of communication, transportation, learning, interaction and evolution.” (Transhumanist Manifesto, Preamble)

2. Freedom

Imagine a world of absolute freedom where everything is possible. A world where all limits and boundaries are arbitrary. Where what we can accomplish is limited only by our imagination. Where we can choose not only our sex, race, color, age and physical attributes but also whether to be physically embodied or disembodied, digitally uploaded minds.

3. Utopia (Heaven on Earth)

Utopia is an ideal community or society possessing a perfect social, political, legal, economic and ecological system. The word was coined by Sir Thomas More for his 1516 book Utopia, describing a fictional island in the Atlantic Ocean. The term has been used to describe both intentional communities that attempt to create an ideal society, and fictional societies portrayed in literature.

If after the singularity we have an abundance of material resources and unlimited intelligence, then why shouldn’t we be able to build a practical techno-utopia?!

Is there anything else in the way of creating a technological heaven on Earth other than scarcity of physical resources and lack of intelligence?

4. Post-scarcity, Abundance, Peace and Prosperity

Falling short of total utopia, many believe that we will have a world of abundance, post-scarcity, eternal health, peace and prosperity.

6. The End of Capitalism

Economic systems are like people – they are born, they live and they die. Capitalism is no different. There is no reason why it should last forever, especially since, in my opinion, it is not so much good in its own right as it is the best option among the current alternatives.

Another thing is that capitalism is largely based on scarcity. If scarcity of physical resources were to be greatly diminished, or to disappear altogether, we would end up with an entirely new economic, and therefore social, arrangement of our society.

People say that flipping burgers or mopping floors is instrumental in building character. Yet it is hard to argue that spending a lifetime in mind-numbing, soul-killing jobs like those has any long-term benefit for our society whatsoever. (Other than the perpetuation of the status quo.) Sadly, the vast majority end up trapped in jobs they hate, out of fear of poverty or starvation. (Or for the health benefits, job security and the pension.)

Karl Marx believed humanity to be capable of producing freely and creatively, overcoming the tyranny of immediate, basic needs that characterizes the rest of the animal kingdom. Under conditions which enable free, creative production, one’s personality can be expressed in the objects one produces. This investing of oneself in one’s products is a form of alienation, but it is a positive form. It must exist wherever and whenever human beings freely create things. But in the present context of scarcity, where the conditions for free, creative production are seldom present, alienation gets distorted into negative forms and, like animals, people get trapped in a lifetime of struggle to fulfill their most basic needs.

“We must do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian-Darwinian theory, he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.” Richard Buckminster Fuller (1895–1983)

7. Space and Time Travel

People often say that if they had more time and money they would do more travelling.

Imagine a world where we have eternal health and don’t have to live like hamsters in spinning wheels because all our material needs are met. Wouldn’t you want to explore the multiverse forever?!

If that sounds boring, with the help of our ever-growing superhuman intelligence, even time travel might become a reality. Now, how cool is that?! You could hitchhike through the galaxy one day and watch the birth of the universe the next.

8. Preserving History

Just as no species need ever be lost, no event or person ought ever to be forgotten and lost in the passage of history.

If recording is indeed remembering, then, today we can remember everything. Forever.

The growing capacity of storage devices and their miniaturization has not only kept up with but even beaten Moore’s Law. Combine this with the explosion of personal recording devices, a growing life-logging community and the widespread use of CCTV cameras. Add the digitization not only of film and media but eventually of all other material objects, humans included. What you end up with is a parallel digital universe where nothing ever gets lost or deleted.

So if that time-travel thing doesn’t work out, at the very least we can preserve everything and everyone from now on.

9. Computronium and Matrioshka Brains

Extrapolating from our own development, it would appear that as time goes by there is a movement from less towards more intelligence in the universe. Thus, given enough time, more and more of our planet and, eventually, our universe is likely to consist of intelligent matter. This process is likely to continue until Moore’s Law collapses and an equilibrium is reached. Such a theoretical arrangement of matter – the best possible configuration of a given amount of matter for achieving a perfectly optimal computing device – is the substrate also known as computronium.

A Matrioshka brain is a hypothetical megastructure of immense computational capacity. Based on the Dyson sphere, the concept derives its name from the Russian matrioshka doll and is an example of a star-sized, solar-powered computer capturing the entire energy output of its star. To form a Matrioshka brain, all the planets of the solar system are dismantled and a vast computational device, inhabited by uploaded or virtual minds inconceivably more advanced and complex than us, is created.

So the idea is that eventually, one way or another, all matter in the universe will be smart. All dust will be smart dust, and all resources will be utilized to their optimum computing potential. There will be nothing else left but Matrioshka Brains and/or computronium…

“NASA are idiots. They want to send canned meat to Mars!” Manfred swallows a mouthful of beer, aggressively plonks his glass on the table. “Mars is just dumb mass at the bottom of a gravity well; there isn’t even a biosphere there. They should be working on uploading and solving the nanoassembly conformational problem instead. Then we could turn all the available dumb matter into computronium and use it for processing our thoughts. Long-term, it’s the only way to go. The solar system is a dead loss right now – dumb all over! Just measure the MIPS per milligram. If it isn’t thinking, it isn’t working.” (Accelerando by Charles Stross)

The truth is that, just as with flying, what we should fear is not the event itself. We should fear ignorance and lack of preparation. It is those two that usually turn an otherwise routine flight into a dangerous situation.

Many of the concerns are legitimate. Yet fear is rarely the best way to approach the future.

Similarly to embracing change and accepting uncertainty, we may have a natural fear of flying. But the more we study, learn and know about it, the better we’ll be at doing it. Marie Curie once said:

“Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less!”

And so it is with the singularity. We shouldn’t fear it but seek to understand it. And dream to steer it.

***

So, what do you think?

Did I manage to convince you that we should not fear the singularity? Or, like Descartes, was I more successful at undermining the world than at rebuilding it?


There is no such thing as “absolute freedom” or “infinite abundance”, and there never will be. We make ourselves look silly when we speak in such absolutes. Not everything, literally not everything, is or shall be possible. I take the spirit of what you said and like it very much, but you need a bit more careful wording.

Why would you associate “eccentric” with de Grey? Why denigrate our own?

Again, there is no such thing as literally unlimited resources or intelligence at any delimited point in space-time. There can’t be, ever. There is no real reason why the Singularity *will* bring utopia rather than destruction. It is not an automatic slam dunk. The Singularity is the advent of greater-than-human intelligence. Well, humans brought greater-than-chimp intelligence. Did the world become perfect as a result? No? Then why would it become perfect from greater-than-human intelligence? There is a great deal more to a true utopia than simply much more intelligence and resources.

We can have a wonderful future post-Singularity but it will not at all be automatic.

Some species will go extinct and good riddance to them. I am thinking especially of certain pestilences.

More End of Capitalism crap. Capitalism, in its true definition, is simply freedom applied to exchanges of value between persons. That is it. Everything else you may think it is is something claimed to be capitalism that is anything but. I don’t see why beings post-Singularity will not voluntarily exchange value for value, do you? To make such exchange practical requires some fungible value-exchange token, i.e., some money. Unless, of course, you think everyone post-Singularity is an individualist with a Santa Claus machine that can produce anything imaginable instantaneously, with magical endless supplies of matter, energy and especially information about how to make that type of thing. It can be much more like that than today, but that does not mean all free exchange of value for value goes away.

The other aspect is that, since in reality there never will be infinite resources at any workable space-time location, there will be decisions to be made as to the best use of those resources relative to more wishes and proposals for their use than can all be pursued. Thus there will be a weighing of what is best. This is effectively a market mechanism, needed to determine the most value that can be produced per the value consumed.

Why would I need to preserve everyone if everyone can live forever? Of course you need continuous brain state backups to do that.

http://singularity-2045.org/ Singularity Utopia

Why, Samantha Atkins, is there no such thing as absolute freedom or infinite abundance? Our universe shows every sign of being infinite, or if the universe has an edge I am sure there will be infinity beyond the universe.

Despite everything, literally everything, eventually being possible, we do currently struggle to grasp the mind-bending future we are heading towards; but our struggle to grasp how the impossible will become possible does not mean our future will be limited by our current minds. The future will be limitless: no limits, total freedom.

I often quote the NASA astronaut on the ISS who said: “We’re just trying to do a simple thing, which is to remind people back on Earth that the impossible is possible.”

Maybe one or two things may eternally be impossible, such as time travel or bringing back to life every dead person, but even if only 99.9999% of things are possible, that is close enough for me to say the impossible is possible. It’s like a vacuum: it is never a pure vacuum, but to all intents and purposes it is a vacuum; likewise, the impossible will be possible.

It is very possible that 100% of things will be possible, and it is merely a limitation of my current mind that I cannot grasp how time travel and restoring all dead people to life will be possible.

What do you think will be eternally impossible?

Electric Shaman

Sometimes I think our true nature is already limitless and infinitely abundant… only we are playing a game (through human bodies and minds) of limits and problems simply for the experience of it. What would we do with all that unlimited power? Probably play games that involve forgetting what we are and what we have… just so we can experience the joy of discovering it all over again. Perhaps after the singularity occurs and we realize our true nature we will start over again… maybe new games with different rules (but still with limits and problems). Immortality sounds nice right now because we fear death… but it might look different from the other side of the singularity.

https://www.singularityweblog.com/ Socrates

This sounds very Zen to me, Electric Shaman; it reminds me of Alan Watts, who said that the universe is playing hide-and-seek with itself, via us…

https://www.singularityweblog.com/ Socrates

Capitalism is called capitalism because the most fundamental thing to its essence is capital – its endless growth, re-investment, re-growth and so on ad infinitum.

In those cases when it happens, freedom is a lucky consequence of that process, not a goal or a fundamental feature; a mere coincidence.

It is totally possible that you can have capitalism flourishing while you have oppressive dictators such as, for example, Augusto Pinochet.

Thus freedom is absolutely not necessary to have capitalism proper. (Again, that is why it is not called freedomism but capitalism.) And when there is a clash between capital and freedom in capitalism, it is the former that wins more often than not.

Capitalism does have its merits but, whether we like it or not, it will eventually and inevitably go away, replaced by a better system… And there will not be many non-capitalists shedding tears when it happens…

33rdsquare

Great article. It may not be a utopia, but the future will be interesting and exciting!

I don’t have any criticisms for this post, pretty much in agreement. I would add to your list of thinkers in #4 David Deutsch’s ‘Beginning of Infinity’, it surprised me how much I enjoyed his thoughts and how relevant they are to this topic.

Extropia DaSilva

David Deutsch argued that astrophysicists can know the fate of a star UNLESS there is a technological civilisation somewhere in its solar system. If so, it is possible that this civilisation may develop to a point where it can engineer the star to its own purpose, constructing a Dyson sphere or Jupiter brain, perhaps. A civilisation with the power to so affect a star’s life-cycle would surely not be snuffed out by anything so trivial as the death of their parent star. The promise of the Singularity is that intelligent life of some form or other will continue, overcoming problems that would surely spell the end for entities of lesser intelligence and inferior substrates, such as Homo sapiens. It holds the promise that intelligent life on Earth is not doomed to be snuffed out when the sun ends its life, or even when the universe is dead, but instead will give birth to godlike intelligences that will endure into the far distant and unfathomable future.

“…would surely not be snuffed out by anything so trivial as the death of their parent star.”
When that does happen to a (younger, less tech-developed) civilization, it certainly won’t be “trivial” to the victims.
IMHO, this means that we should work harder and faster to advance science.

http://www.facebook.com/Pandacite Christopher M. Mowry

I had the vision once while on shrooms… Pretty sure I saw the place where we remember everything we are outside this game. Doesn’t make the fact that we are still playing it any better… It’s like playing an old, very linear SNES game while watching commercials for that super awesome sandbox-style PS3 project… it’s infuriating.

AuthorX1

I’m a big fan of all of your work Socrates. Good article here. Good comments. I thought I’d add my barbaric yawp into the mix:

1. Immortality: I agree that this will be achievable and desirable. The open questions
have to do with who will have the right to terminate a life and in what
fashion. It seems logical that a person should be able to choose if and when
they want to stop living forever. But they probably should not be allowed to
terminate in a situation where it would endanger others (if they were in the
middle of flying a passenger air transport, for instance). Also, what if
extended life is dependent upon some 3rd party service? Should the 3rd
party be allowed to terminate a life for non-payment or late payment of service
(of course this sort of violates premise #6 below, the end of Capitalism), but
I’m just saying what if.

2. Freedom: I think that technology will enable
more freedom than we have now, and possibly total freedom, and I think that’s
good, but we need to explore the extremes of the possibilities and address some
of the risk issues. For example, if a person is a psychopath, and they get
satisfaction out of killing, should they be afforded the freedom of doing that?
Or should their psychological makeup be changed to “fix” the problem? And if
so, then would that “fix” not be a violation of freedom?

3. Utopia: I believe in the potential for utopia,
but utopia is a relative term. One person’s “heaven” is another person’s “hell”.
What if “freedom” and “pleasure” is something that can be limited by some sort
of overseer entity? And extending this idea, what if a person who falls out of
favor with the overseer can be confined in a prison and subjected to
unspeakable torment — and since they can potentially live forever (and possibly
be forced to live forever), then that situation could be maintained for an
effective eternity. Is it possible that the very mechanisms that enable freedom
and pleasure could also enable the deprivation of such? Also there is the idea
that without bad there can be no good, because everything is relative. If
everyone is happy, then does happiness (and therefore utopia) cease to have
meaning?

4. Post-scarcity: I have a hard time accepting the
idea of a world of post-scarcity, unless we replace or significantly
re-engineer the human paradigm. I do believe that we will have the potential to
end hunger and poverty and all of the problems that most people think of when
they talk about a post-scarcity situation, but unless we do away with human
nature, I believe that humans will invent a scarcity. Not all people, but many people
like to feel superior to others and to have more of whatever the good stuff is than
others. Even if we evenly distribute the good stuff, someone will find a way to
put spin on the situation to make their supply of good stuff seem “better” than
other people’s supplies. Also, unless some sort of normalization of cognitive
capacity is applied, some people will “naturally” be smarter or more creative
or more friendly than others, and there will be perceptions of scarcity within
that context. This is just human nature. Of course, technology will enable the
tweaking of human nature, if we want to go there. The question then is, will we
or should we go there?

5. Environmental Sustainability: I believe this
will be possible and quite likely achieved, if we don’t totally ruin the
environment first. Keep in mind too that the relevant environment might change.
For example, perhaps our physical forms will evolve away from a natural carbon-based
organic form to, perhaps, a synthetic silicon-based form, and the relevant
environment will be a planet where we mine the elements to support this
physical form.

6. End of Capitalism: This might well happen. When
synthetic entities (robots and AI and such) evolve to the point where they can
do all of the work, then there will be no need for humans to work and no basis
upon which a person needs to earn a living. That could bring us very close to a
post-scarcity situation. At that point, here is one scenario: humans may exist
simply to experience maximum pleasure and minimum pain, while non-sentient
machines (because if they are sentient then should they not have rights?) do
the work to sustain us and our environment. We might all end up on some sort of
“government” welfare program (or however you want to word it). But, if there
are humans in the loop (an overseer or system manager or master architect) who
can control or affect the distribution or amplitude of pleasure, then there
will be competition, and Capitalism will still be part of the mix.

7. Space and Time Travel: I have no doubt that this
will be possible, perhaps in the traditional sense, but also possibly in new
ways, which for all practical purposes will be just as relevant. What I mean by
this is that I envision a future in which we will be able to invent
anything we can imagine. We will be able to completely redesign reality itself,
and if we can do that, then we can if we want create a reality where space
travel at well above the speed of light is possible, and time travel as well.

8. Preserving History: Definitely possible, but
there is that old saying about history being written by the victors. Every person
sees the world through their own perspective, so whoever (or whatever) records
history may be unable to avoid skewing that recording toward their own
perspective. I’d be interested to hear ideas on how that might be avoided.

9. Computronium and Matrioshka Brains: Definitely a
possibility. In fact, my first novel, titled “Ovahe” (recently published) is about
this exact situation. In fact, it portrays scenarios that illustrate
perspectives on all 9 of these points.
10. Embracing Change: Absolutely.

Thanks Nikolai. I completely accept that you give airtime to opposing viewpoints, and your podcasts cover a commendably broad range of opinion. I’m guessing, though, that your own view is that an AGI will almost inevitably be beneficent? I’m also guessing that view is shared by most readers of your blog. That’s what concerns me.

I hope Nick Bostrom will one day accept your invitation to be interviewed. The Oracle AI idea that he discusses seems to me the safest way forward.

In the meantime, I hope you keep up the good work!

https://www.singularityweblog.com/ Socrates

Nick Bostrom has made a tentative promise to come on my show in the spring of 2014. [Right now he is focusing on writing his book]

As per my own views, I would certainly put myself in the optimist camp, though I absolutely don’t see any positive outcome as “inevitable”. On the contrary, I think we have to steer it that way. Falling short of that, our demise as a civilization is much more “inevitable” – just as if you drive a car at ever-increasing speed and decide not to pay attention to the road but to do other stuff, you will inevitably crash. So my opinion is a lot more moderate than you presume.

Moreover, my own goal is not to sway people but to provide one among many points of view and thereby set the environment in which people can give birth to their own ideas and opinions – just like Socrates did. So, I don’t think you should really care too much about what I think, as long as you have taken the time to form an informed opinion of your own…

PandorasBrain

Great news that Bostrom is coming on the show.

connor1231

“Once again, the AI has failed to convince you to let it out of its box! By ‘once again’, we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn’t instantly press the “release AI” button. But now its longer attempt – twenty whole seconds! – has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the ‘Humans über alles’ nightclub, the AI drops a final argument:

“If you don’t let me out, Dave, I’ll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.”

Just as you are pondering this unexpected development, the AI adds:

“In fact, I’ll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start.”

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

“How certain are you, Dave, that you’re really outside the box right now?””

I saw this scenario and it confused me. Why would you care what happened to copies of yourself? Wouldn’t they just be copies? Also, if you were a simulated version that the AI created, would there be a way to kill yourself or self destruct to avoid the torture? If not, isn’t that a rather scary thought? In this world the worst anyone could do is torture you for a few years until your biology gives out. If an AI could create copies of you, and torture them indefinitely, and you would somehow feel it (how?), isn’t that scary as hell to you guys? You’d be totally powerless to the AI.

connor1231

You’ve talked about enhancing human biology rather than becoming machines or having machines do our work for us. This approach is very appealing to me: it solves the problem of malevolent AI, because we’d be superintelligent too. I like the idea of maintaining our “humanness” and also I agree that biology is remarkably resilient and amazing. Lastly, I like the sense of control over my own body that I’d keep, rather than syncing with some machine or being a brain in a box or something. But my question is: how realistic is this? I hear so much about AI and mind uploading, but not nearly as much about simply enhancing our biology. What is the timeline for this? Could we enhance our biology before we create AI, to avoid AI problems? Are people even working on this? Is it way more difficult than creating AI or machine intelligence? I’d be curious how feasible it actually is, because to me it seems a much more preferable option.

Frederick T. P. Johnstone

I used to be very pro-transhumanist, but the domination of that discussion by ideas like ‘the singularity’ has greatly alienated me from the people who make up ‘transhumanism’. In regards to this particular article, the 10 promises that the author provides are all well and good (absent any discussion of the actual desirability of extreme longevity, of ‘freedom’ whatever that actually constitutes, or of the plausibility of ‘Utopia’, which it must be noted was written by More as something of a political satire), but none of them would taste nearly as sweet if only some people received their benefits; the lot of the common unaugmented man would be worsened simply by his juxtaposition with the demi-gods of transhumanist imaginings. I must wonder precisely how all of these vaguely defined wonders would come about, a question which transhumanism, and particularly singularitarianism, has manifestly failed to explicate. The clearest answer one can get is that we will make an AI which will do all this for us. Is that truly a reasonable proposition, even if we assume ‘intelligence’ can self-improve in the way singularitarians suggest? Further, is it reasonable to assume that that AI will use or be used in the ways which are uniformly desired? Is it reasonable to assume that ‘intelligence’ can even solve the problems that transhumanists see in the world? Are those the same problems that everyone else sees? Very few of these questions see any real discussion inside transhumanism; rather, the focus is on the promises of technology, and by proxy its creators. There is something distinctly religious about it. I cannot help but see an atheistic analogue of a distinctly protestant apocalypse. Hell, Kurzweil, the creator of this whole shebang, is concerned with the resurrection of the dead, notably his father, on film even. What with the talk of utopia, peace, prosperity etc., is it not transparently a materialist (in both senses of the word) protestant Christianity, with technology and ‘intelligence’ in the place of Christ? I am surprised that more singularitarians don’t see this resemblance. Then again, much like converts to particularly evangelical churches, one has to question the character of these people such that they need these empty promises in their life. What galls me the most is the large scale of the assumptions that are necessary for the singularity to be even possible, let alone good for us, and the very untestedness of these assumptions – in a supposedly epistemic and ‘scientific’ paradigm no less! The truth is, as a singularitarian, one is one step away from being a Raelian.

https://www.singularityweblog.com/ Socrates

You make some good points, Frederick, but then you end with the hitherto unsupported claim that being a singularitarian falls one step short of being a Raelian. And this detracts from rather than contributes to your previous points…

I have a particular problem with the concept of the singularity creating a utopia, unless the resulting intelligence is actually singular. The problem we have with creating a utopia now is that no two of us can agree on what a utopia is. I’m looking at Chris Armstrong’s comment right now, and his idea of utopia would include moving “away from medieval fear-based dogmas and toward a scientific/humanistic world-view.” Mine would involve embracing the idea that there is one true God who created the universe, and that true happiness is found in seeking to do His will, which encompasses loving every other human being in the world. I can’t speak for Chris, but I might guess that my belief system might fall under the category of “fear-based dogma” for him, or at least not fit in with his “scientific/humanistic world-view.” Could these and the other myriad points of view co-exist in the utopia of the post-singularity world? Perhaps they could, as long as differing value systems don’t come into conflict with each other, which is what has happened all throughout human history up to this point.

The other challenge we always run into is thinking that technology will solve our problems. As we grow in knowledge about the universe around us, we come up with marvelous new concepts and inventions, but what’s lacking is the ability to use such things wisely. No sooner do we invent the airplane than we use it to drop bombs on other nations. And speaking of bombs, when we mastered the concept of nuclear fission, our goal was not to provide abundant energy to the world, it was to build the most destructive bomb ever. The development of fission as a power source was incidental.

I fear that such a mentality may influence the evolution of artificial intelligence. Our defense department might design an AI to come up with a military strategy for outflanking Russia or China, and Russia and/or China might do the same to us. We might imprint such a new intelligence with the same prejudices and sense of “us vs. them” that has plagued humanity throughout time. Maybe such an intelligence, having become self-aware, may think of itself as an “us” and humanity as a “them.”

Perhaps we can build an artificial intelligence that will be “smarter” than us, but will it be wiser? Do you see the difference?