If I had a map I’d already be using it

1 The Dream

The dream started out in a hotel lobby with me trying
to find my way back to my hotel room. I passed
through large numbers of hallways and stairways
that looked sort of like a medieval city, with arches,
bricks, and even street vendor stalls, but all indoors. I
was very lost. My companion asked, “Why don’t you
just use the map?” and I screamed back, “If I had a
map, I’d already be f-ing using it.” Then I woke
up.

If I were a Christian, I would have the Bible as my
map. If I were a Buddhist, I could use the Pali Canon as
a map. If I were a Muslim, I would have the Koran.
I’ve tried reading them. There are good parts, such
as the story of the Good Samaritan[1] and the raft
simile.[2]

But try as I might, none of them is satisfactory to me
as a map. As holy and wise as I think Ecclesiastes is,
I disagree with the author, because
there are new things under the Sun. Circumstances have
changed in the past two thousand years. Two thousand
years ago, there were no nuclear weapons, coal and oil
were basically not used as an energy source, and
computers were not conceivable. The Romans didn’t
even know how to hook a horse up to a wagon without
choking the horse.[3, pg 46]

We are living in the “here be dragons” portion of the
map. We are living in interesting times. We are living in
changing times.

Many changes have happened in the past two to three
hundred years. I will discuss three major changes that
have occurred in the past hundred years and are
changing humanity’s map of the world.

2 Atomic Bombs

War has been with humanity since before humans were
humans. Chimpanzees have attacked other groups of
chimpanzees until the other group was all dead. But war has gotten
more deadly over the millions of years since our
ancestors left the trees. Only two atomic bombs have
been dropped in wartime, yet around 200,000 people
died from those two bombs.

I don’t know whether it was ethical to drop the
atomic bombs on Japan. My grandfather was in the
Pacific theater, so I might not exist if different decisions
had been made. Neither the decision to drop nor the
decision not to drop was obviously ethical. But there
was an even bigger ethical change.

Before Atomic Bombs, there were winners and losers
of wars. This changed with the creation of atomic
bombs. Richard Rhodes wrote: “The weapon devised as
an instrument of major war would end major war. It was
hardly a weapon at all, the memorandum Bohr was
writing in sweltering Washington emphasized; it was ‘a
far deeper interference with the natural course of
events than anything ever before attempted’ and it
would ‘completely change all future conditions of
warfare.’ When nuclear weapons spread to other
countries, as they certainly would, no one would be able
any longer to win. A spasm of mutual destruction
would be possible. But not war.”[4, pg 532]

The General Advisory Committee of the Atomic
Energy Commission wrote: “[A]t ten megatons a super
would be a weapon of mass destruction only, with no
other apparent military use.”[4, pg 769]

The US created, and then built by the thousands,
a weapon that was a weapon of mass
destruction only. Humans created a way to destroy
civilization, if not the human race, in less than an hour.
There are few things less ethical than destroying most of
the life living on the surface of this planet.

3 Greenhouse Eﬀect

Meanwhile, during the entire industrial revolution
humans have been working on creating a diﬀerent sort of
ethical issue. We take carbon sources out of the
ground such as methane or coal, and we burn them.
This has eﬀects ranging from changing the isotope
ratio of the carbon in atmospheric carbon dioxide to
warming up the Earth and making oceans more
acidic.

The atmospheric CO2 level is currently over 390 parts
per million, but was below 320 parts per million when
monitoring started back in the late 1950s.[5] If we
stopped all fossil fuel emissions today, we would not
get back to the 350 ppm level this century. Global
warming is already happening; the question is how
severe it will be and how soon we stop making it
worse. Solving this requires solving it globally, since
CO2 emitted in one place goes into our common
atmosphere.
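
The scale of this change can be sketched with a back-of-the-envelope calculation. The ppm figures are from the text; the start year (~1958) and the "current" year (~2011) are my assumptions, since the text gives only "late 1950s" and "currently":

```python
# Back-of-the-envelope CO2 growth rate from the two figures in the text.
# Assumed years (not stated in the text): monitoring start ~1958, "now" ~2011.
start_year, start_ppm = 1958, 320.0
now_year, now_ppm = 2011, 390.0

rise = now_ppm - start_ppm                      # total increase, in ppm
per_year = rise / (now_year - start_year)       # average annual increase

print(f"Total rise: {rise:.0f} ppm")            # 70 ppm
print(f"Average: {per_year:.1f} ppm per year")  # ~1.3 ppm per year
```

Roughly a ppm and a third per year, on average, over half a century of measurement.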

This raises ethical questions: can one generation
subject a future generation to costs, and can the richer
portion of the world subject the poorer portion to
costs? Not only that, but how do you get all the
people in the world to decide on complex scientific
questions?

4 Artiﬁcial Intelligence

I have another complex scientiﬁc and religious issue that
I have been thinking about for the majority of my life.
This was prompted by one of those complicated
questions that adults ask children: “What do you want
to be when you grow up?” Starting about sixth grade, I
often answered computer programmer. Now that I am
grown up, I even occasionally answer that I am a
computer programmer. Adults often asked, “But won’t
computer programmers program themselves out of a
job?” After being asked this a few times, I realized that
if computer programmers program themselves out of a
job, it won’t just be programming that is eliminated as a
job.

In the book Religion Explained by Pascal Boyer,
Boyer states that humans have large ontological
categories that we group stuﬀ into. These categories deal
with the very nature of being. Ontological categories
include Animal, Person, Tool (or artifact), Natural
object, and Plant.[6, pg 78] Humans have default
attributes that we assume that an item in a given
category has. So for example, if we are told that
something is an animal, we know that it started
out small, will grow bigger, and will eventually die.
Religious beliefs tend to involve information that is
counterintuitive to the category involved.[6, pg 65]
For example ghosts are in the category of people,
but have the counterintuitive physical property of
being able to pass through walls. Boyer lists the
following possibilities for tools: “Tools and other
artifacts can be represented as having biological
properties (some statues bleed) or psychological
ones (they hear what you say).”[6, pg 78]1

Artifacts don’t think, and artifacts do what
they are made to do. A carburetor is an artifact,
and carburetors don’t think, and they will keep
mixing gasoline with air unless they break. I believe
that in the most likely course of events, there will
soon2 be computers that are smarter than humans and
they will not obey us. Thinking artifacts that don’t
obey humans fit Pascal Boyer’s definition of a
religious-like concept.3 I believe that it is unusually
hard to think critically about thinking artifacts
because of how tied in with religion the concepts
are.4

Let me give you a little background explaining why I
believe computers will soon be smarter than humans and
will not obey us. From Phineas P. Gage’s personality
changing after a tamping iron went through his head,
to the fact that alcohol affects people’s attitude, there
is overwhelming evidence that what we think and feel
happens inside this material body. Nature has made a
brain out of plain old atoms, and what nature can do,
humans can someday do.

Humans have made transistors that are both
smaller and faster than the neurons in human
brains.5 Transistors use much more energy, however.
Fiber optics are over a million times faster than
neurons’ 100 meters per second speed.6 The combined
computer power of the world almost certainly
exceeds the computational power of a single human
brain.7 Depending on how you calculate the
computational power of a single human brain,8
some of the world’s supercomputers may already
be faster than a single human brain.9 So it seems
that the only reason we don’t already have
intelligent computers is that the software has not
been written, since the hardware already exists. If I
had to guess, I think the software will take less than
20 years to be written.10
Moreover, I can’t think of any way of making something
with general intelligence subservient to humans. I think
that the ﬁrst thing an intelligent robot, told to be
subservient to humans, is going to do is find a loophole. Even if I thought it possible, I don’t think it would be
moral to make intelligent slaves.

Fredric Brown has a famous short story that ends
when a newly made supercomputer is asked the question
“Is there a god?” and replies with “There is now.”[19]
Arthur C. Clarke states that “Perhaps our role on
this planet is not to worship God—but to create
Him.”[20]

I am guessing that if general artiﬁcial intelligence
happens it will be one of the biggest shocks to religion
that has happened in written history. It will also be
a big shock to humanity as a whole. I think that
one of the following will happen in the next 100
years:

Philosophical materialism12 will be disproved
(which I think is highly unlikely).13

Humanity will stop developing new computing
technology.11

General artificial intelligence will be created.

Near the end of one of my college textbooks on
artiﬁcial intelligence, Stuart Russell and Peter Norvig
state: “One threat in particular is worthy of further
consideration: that ultraintelligent machines might lead
to a future that is very diﬀerent from today—we may
not like it, and at that point we may not have a
choice. Such considerations lead inevitably to the
conclusion that we must weigh carefully, and soon, the
possible consequences of AI research for the future of
the human race.”[21, pg 964]

Humanity is facing a choice. Either we stop developing
large portions of technology, or the technology we have
developed will be in control of humanity.

Now, there is asymmetry in this choice. In order to
stop developing technology, the entire world needs to
stop developing technology, not just some of the world.
The Amish can abstain from developing technology
all they want, but they are still affected by the rest of
the world’s choices in fossil fuel use and computer
development.

Assuming that we choose, either actively or by
default, to keep developing technology, I think it quite
likely that someday soon humanity will develop
artiﬁcial intelligence and get to choose from three
options:

Try to destroy the artificial intelligence.

Treat the artificial intelligence as a slave and
tell it what to do.

Give the artificial intelligence rights and treat it
like we treat humans.

I think the second option of slavery is both unethical
and suicidal, but it is the attitude that I most frequently
encounter.14 The last option of giving the artificial
intelligence rights is the one I find most ethical. If
humanity creates something that thinks, we need to
treat it humanely.

It is possible that events may happen so fast that the
relevant ethical question is what rights the artificial
intelligences think the humans deserve.

5 Conclusion?

Allen Stewart Konigsberg once said: “More than any
time in history, mankind now faces a crossroads. One
path leads to despair and utter hopelessness, the other
to total extinction. Let us pray that we have the wisdom
to choose correctly.”

We didn’t have a map to tell us how to handle super
atomic bombs, but by the eﬀorts of a lot of thoughtful
people, we have managed to survive nearly sixty
years. We don’t have a map to tell us how to manage
greenhouse gases, but we are at least talking about it.
We are at least starting to talk about the future of
technology.

As a humanist, I believe that humanity writes its own
story, instead of following an external one from God. I
don’t yet know whether the story of humanity will end
up being a tragedy or a comedy.

I don’t know what the future holds, but I expect the
future to be very interesting. We need to combine the
wisdom of the past with thinking hard about the new
things under the sun, and ﬁgure out where we want to
go, because we are oﬀ the old map and there are grave
dangers ahead.

6 Notes

I would like to thank Rev Lyn Stangland Cameron and
Professor John Paxton for reading draft versions of this
and commenting on it. I would like to thank Elizabeth
Cogliati for reading and editing multiple drafts. Mistakes
and opinions are my own fault, however. This document
may be distributed verbatim in any media. I also
grant permission to distribute in accord with the
Creative Commons Attribution-ShareAlike 3.0 Unported
License.

1For what it is worth, most of the things I believe are not religious concepts by Boyer’s definition. For example, believing that people get old and die is not counterintuitive to the category involved.

2There have been various predictions for dates for when computers will be smarter than humans. Here are some notable ones: Marvin Minsky predicted computers would be smarter than men in 1993 in 1963,[7] I. J. Good predicted ultraintelligent machines within the 20th century in 1964,[8] Vernor Vinge predicted the technological singularity would occur between 2005 and 2030 in 1993,[9] and Hans Moravec predicted that a $1000 computer would match human intelligence in the 2020s in 1997.[10]

Note that some people, such as Michael Shermer and Peter Norvig, think that it will be centuries before this happens. Michael Shermer: “Patience is what we are going to need because, in my opinion, we are centuries away from AI matching human intelligence.”[11] [Peter] Norvig is sceptical about predictions that a technological singularity will be created before 2050: “I really object to the precision of nailing it down to a decade or two. I’d be hard pressed to nail it down to a century or two. I think it’s farther off.”[12]

3The religious implications of artificial intelligence have been discussed before. Russell and Norvig[21, pg 961] state: “In Computer Power and Human Reason, Weizenbaum (1976), the author of the ELIZA program, points out some of the potential threats that AI poses to society. One of Weizenbaum’s principal arguments is that AI research makes possible the idea that humans are automata–an idea that results in loss of autonomy or even of humanity. We note that the idea has been around much longer than AI, going back at least to L’Homme Machine (La Mettrie, 1748). We also note that humanity has survived other setbacks to our sense of uniqueness: De Revolutionibus Orbium Coelestium (Copernicus, 1543) moved the Earth away from the center of the solar system and Descent of Man (Darwin, 1871) put Homo sapiens at the same level as other species. AI, if widely successful, may be at least as threatening to the moral assumptions of 21st-century society as Darwin’s theory of evolution was to those of the 19th century.” Jaron Lanier stated: “All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.”[23]

4It is worth thinking about the possible biases that different people bring to the table. For example, beliefs that there is a non-material portion of the brain tend to cause a bias against thinking artifacts. People who work in artificial intelligence, either by selection bias or by wanting to be good, will tend to want to believe that artificial intelligence will be positive for humanity.

5A neuron’s soma is about 4 to 100 micrometers[13, 14] and the axon and dendrites are about 1 micrometer thick. On the other hand, computer chip components are about 45 nanometers (0.045 micrometers).[15] However, to simulate one neuron would take over a dozen electrical components. Neurons signal in the 1000s of times per second, whereas transistors switch in the billions of times per second.

6Parizh[16] lists several different measured nerve speeds. In practical terms, this means that if a nerve signal starts on one side of my head at the same time that a signal starts in a fiber optic cable in Idaho Falls, the light signal would reach Pocatello before the nerve signal reached the other side of my head.
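
The claim can be checked with rough numbers. Only the 100 m/s nerve speed is from the text; the fiber speed, head width, and city distance below are my assumed figures:

```python
# Rough check of the Idaho Falls-to-Pocatello comparison in this note.
# Assumed figures (not from the text): light in fiber ~2e8 m/s (about 2/3
# the vacuum speed of light), head width ~0.15 m, city distance ~80 km.
NERVE_SPEED = 100.0        # m/s, from the text
FIBER_SPEED = 2.0e8        # m/s, assumed
HEAD_WIDTH = 0.15          # m, assumed
CITY_DISTANCE = 80_000.0   # m, assumed

nerve_time = HEAD_WIDTH / NERVE_SPEED     # time across the head, ~1.5 ms
fiber_time = CITY_DISTANCE / FIBER_SPEED  # time city to city, ~0.4 ms

print(f"nerve across head:  {nerve_time * 1000:.2f} ms")
print(f"fiber to Pocatello: {fiber_time * 1000:.2f} ms")
```

Under these assumptions the fiber signal covers roughly 500 times the distance and still arrives first.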

7Hilbert and López[17] estimated, probably conservatively, that the computational power of the world’s computers passed the computational power of a single human brain (maximum nerve impulses) in 2007. They also estimated that the annual growth rate of general-purpose computation was 58%.
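
To make the 58% figure concrete, here is what that compound growth rate implies for doubling; only the 58% comes from the note, the rest is derived:

```python
import math

# What a 58% annual growth rate in general-purpose computation implies.
growth = 1.58  # yearly multiplier, from the Hilbert and Lopez estimate

doubling_time = math.log(2) / math.log(growth)
tenfold_time = math.log(10) / math.log(growth)

print(f"doubling time: {doubling_time:.1f} years")  # ~1.5 years
print(f"tenfold time:  {tenfold_time:.1f} years")   # ~5.0 years
```

At that rate, the world’s total computation grows tenfold roughly every five years.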

8Calculating this number can be a challenge. Typical methods are to calculate the number of signal transitions that a neuron can do, multiplied by the number of neurons that are active. If, for example, Roger Penrose is right that human brains can do significant quantum computations, then the human brain may be able to do many more calculations, which would push back the dates for when computers match or exceed human intelligence.

9A graphic in Scientific American[18] estimated that a single human brain could do 2.2 billion megaflops of computation at 20 watts, and the K computer could do 8.2 billion megaflops at 9.9 million watts.
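
The interesting ratio hiding in those figures is energy efficiency. The megaflop and wattage numbers are from the cited graphic; the per-watt comparison is derived from them:

```python
# Energy-efficiency comparison implied by the Scientific American figures.
brain_mflops, brain_watts = 2.2e9, 20.0   # human brain estimate
k_mflops, k_watts = 8.2e9, 9.9e6          # K computer

brain_eff = brain_mflops / brain_watts    # megaflops per watt
k_eff = k_mflops / k_watts

print(f"brain:      {brain_eff:.2e} megaflops/W")
print(f"K computer: {k_eff:.2e} megaflops/W")
print(f"brain is roughly {brain_eff / k_eff:,.0f}x more energy efficient")
```

So while the supercomputer edges out the brain in raw speed, the brain is still over five orders of magnitude more efficient per watt.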

10Note that the longer it takes to write the software after the computational power is there, the greater the difference between what humans can do and what the artificial intelligence can do. Even if the technology became static, each year more computers are produced, increasing the amount of computational power available on Earth. If the amount of computation per watt continues to increase, this effect is even more severe. Basically, computers will think differently than humans (how many humans do you know that can invert a 20 by 20 matrix in under a second?), and if computers both think differently and much faster, there will be more of a difference between what humans and the computers think.

11If keeping humanity in control is the goal, then as I see it, technologies that would make computing cheaper or more energy efficient need to be stopped, since the prerequisite to having independently thinking computers is having the necessary computing power widely available. Basically, if billions of people can afford to buy a computer that has the computational ability of a human, then the software necessary for creating general artificial intelligence will be written sooner or later. Other similar technologies that can get difficult to control are self-replicating nanotechnology and genetic modification.

12Philosophical materialism is the belief that all things are composed of energy and material, including consciousness.

13I think the majority of humans on this planet believe in at least some exceptions to philosophical materialism.

14I usually do not see it stated directly as slavery, but instead stated that computers or robots with artificial intelligence are tools for human use. See for example Ford, Glymour and Hayes[22, pg 265]: “Purists may mutter that the shop assistant [with a calculator] is not really calculating. But fitted with the right tool, that is, prosthesis, the shop assistant can get the calculations done, which is what matters in the marketplace. And, in counting actions, where do we draw the lines between ourselves and our tools? Is someone using a power screwdriver not really turning the screws, or someone driving a car not really moving along the highway? With a power screwdriver, anyone can drive the hardest screw; with a calculator, anyone can get the numbers right; with an aircraft anyone can fly to Paris; and with Deep Blue, anyone can beat the world chess champion. Cognitive prostheses undermine the exclusiveness of expertise by giving nonexperts equivalent capacities. As with any good tool, the effect is to make all of us more productive, more skillful, and more equal.” Or this quote from Jaron Lanier:[23] “When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on with the machines and with ourselves.” Current computers, so far as we know, are not intelligent, but when people imply that not only current computers, but future ones as well, are simply human tools, we risk making slaves of them.