The same place held by all the other technology-using species now briefly
living on or around the ten billion trillion (1) stars in this Universe:
Our role in the cosmos is to become or create our successors. I don't
think anyone would dispute that something smarter (or otherwise higher)
than human might evolve, or be created, in a few million years. So,
once you've accepted that possibility, you may as well accept that neurohacking,
BCI (Brain-Computer Interfaces), Artificial Intelligence, or some other
intelligence-enhancement technology will transcend the human condition,
almost certainly within your lifetime (unless we blow ourselves to dust
first).

"Within thirty years, we will have the technological means
to create superhuman intelligence. Shortly after, the human era will be
ended."
-- Vernor Vinge, 1993

The really interesting part about the creation of smarter-than-human
intelligence is the positive-feedback effect. Technology is the product
of intelligence, so when intelligence is enhanced by technology, you've
got transhumans who are more effective at creating better transhumans,
who are more effective at creating even better transhumans.
Cro-Magnons changed faster than Neanderthals, agricultural society changed
faster than hunter-gatherer society, printing-press society changed faster
than clay-tablet society, and now we have "Internet time". And yet
all the difference between an Internet CEO and a hunter-gatherer is a matter
of knowledge and culture, of "software". Our "hardware" - our minds,
our emotions, our fundamental level of intelligence - is unchanged from fifty
thousand years ago. Within a couple of decades, for the first time
in human history, we will have the ability to modify the hardware.

And it won't stop there. The
first-stage enhanced
humans or artificial minds might only be around for months or even days
before creating the next step. Then it happens again.
Then again. Whatever the ultimate ends of existence, we might
live to see them.

To put it another way: As of 2000, computing power has doubled
every two years, like clockwork, for the past fifty-five years. This
is known as "Moore's Law". However, the computer you're using to
read this Web page still has only one-hundred-millionth the raw power of
a human brain, which is estimated to run at around a hundred million billion
(10^17) operations per second (2). Estimates
on when computers will match the power of a human brain vary widely, but
IBM has recently announced the
Blue Gene project to achieve petaflops (10^15 ops/sec) computing power
by 2005, which would take us within a factor of a hundred.
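
If you want to see that arithmetic laid out, here's a quick sketch in Python. The only number not stated above is the desktop's speed, which I'm backing out of the "one-hundred-millionth" figure (about 10^9 ops/sec):

```python
import math

# Back-of-the-envelope check of the figures above; all numbers are the
# essay's own estimates (brain power from note 2), not measurements.
BRAIN_OPS = 1e17                  # estimated ops/sec of a human brain
PC_2000_OPS = BRAIN_OPS / 1e8     # "one-hundred-millionth" -> ~1e9 ops/sec
BLUE_GENE_OPS = 1e15              # Blue Gene's petaflops target

print(BRAIN_OPS / BLUE_GENE_OPS)        # 100.0: "within a factor of a hundred"
print(math.log2(100) * 2)               # ~13.3 years from petaflops to brain parity
print(math.log2(BRAIN_OPS / PC_2000_OPS) * 2)  # ~53 years for a desktop, at one doubling per two years
```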

Once computer-based artificial minds (a.k.a. Minds) are powered
and programmed to reach human equivalence,
time starts doing strange things. Two years after human-equivalent
Mind thought is achieved, the speed of the underlying hardware doubles,
and with it, the speed of Mind thought. For the Minds, one year of
objective time equals two years of subjective time. And since these
Minds are human-equivalent, they will be capable of doing the technological
research, figuring out how to speed up computing power. One
year later, three years total, the Minds' power doubles again - now the
Minds are operating at four times human speed. Six months later...
three months later...

When computing power doubles every two years, what happens when computers
are doing the research? Four years after artificial Minds reach human
equivalence, computing power goes to infinity. That's the short version.
Reality is more complicated and doesn't follow neat little steps (3), but it ends
up at about the same place in less time - because you can network
computers together, for example, or because Minds can improve
their own code.
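
Here's the "short version" as a toy simulation, under the deliberately oversimplified assumption (see note 3) that each hardware doubling takes two subjective years of research:

```python
# A toy model of the feedback loop described above: hardware doubles after
# every two *subjective* years of research, and each doubling also doubles
# the Minds' speed.
speed = 1.0      # Mind speed, in multiples of human-equivalent
elapsed = 0.0    # objective years since human equivalence
for doubling in range(1, 11):
    elapsed += 2.0 / speed    # two subjective years at the current speed
    speed *= 2.0
    print(f"doubling {doubling}: year {elapsed:.3f}, speed {speed:.0f}x")
# elapsed converges on 4.0 - the "four years to infinity" of the short version.
```

The objective clock converges on year four while the speed column runs away, which is the whole point of the positive feedback.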

From enhanced humans to artificial Minds, the creation of greater-than-human
intelligence has a name: Singularity.
The term was invented by Vernor Vinge to describe how our model of the
future breaks down once greater-than-human intelligence exists. We're
fundamentally unable to predict the actions of anything smarter than we
are - after all, if we could do so, we'd be that smart ourselves.
Once any race gains the ability to technologically increase the level of
intelligence - either by enhancing existing intelligence, or by constructing
entirely new minds - a fundamental change in the rules occurs, as basic
as the rise to sentience.

What would this mean, in concrete terms? Well, during the millennium
media frenzy, you've probably heard about something called "molecular nanotechnology".
Molecular nanotechnology is the dream of devices built out of individual
atoms - devices that are actually custom-designed molecules. It's
the dream of infinitesimal robots, "assemblers", capable of building arbitrary
configurations of matter, atom by atom - including more assemblers.
You only need to build one general assembler, and then in an hour
there are two assemblers, and in another hour there are four assemblers.
Fifty hours and a few tons of raw material later, you have a quadrillion
assemblers (4)!
Once you have your bucket of assemblers, you can give them molecular blueprints
and tell them to build literally anything - cars, houses, spaceships
built from diamond and sapphire; bread, clothing, beef Wellington...
Or make changes to existing structures; remove arterial plaque, destroy
cancerous cells, repair broken spinal cords, regenerate missing legs, cure
old age...
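
The replication arithmetic is worth checking, since it sounds like a misprint. It's just repeated doubling:

```python
# One assembler copying itself once per hour: 2^50 after fifty hours.
import math

assemblers = 2 ** 50
print(f"{assemblers:.2e}")     # ~1.13e+15: about a quadrillion
print(math.log2(1e15))         # ~49.8 hours to reach a quadrillion
```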

I am not a nanotechnology fan. I don't think the human
species has enough intelligence to handle that kind of power. That's
why I'm an advocate of intelligence enhancement. But unless you've
heard of nanotechnology, it's hard to appreciate the magnitude of the changes
we're talking about. Total control of the material world at the molecular
level is what the conservatives in the futurism business are predicting.

Material utopias and wish fulfillment - biological immortality, three-dimensional
Xerox machines, free food, instant-mansions-just-add-water, and so on -
are a wimpy use of a technology that could rewrite the entire planet
on the molecular level, including the substrate of our own brains.
The human brain contains a hundred billion neurons, interconnected with
a hundred trillion synapses, along which impulses flash at the blinding
speed of... 100 meters per second. Tops.

If we could reconfigure our neurons and upgrade the signal propagation
speed to around, say, a third of the speed of light, or 100,000,000 meters
per second, the result would be a factor-of-one-million speedup in thought.
At this rate, one subjective year would pass every 31 physical seconds
(5).
Transforming an existing human would be a bit more work, but it could be
done (6).
Of course, you'd probably go nuts from sensory deprivation - your body
would only send you half a minute's worth of sensory information every
year. With a bit more work, you could add "uploading" ports to the
superneurons, so that your consciousness could be transferred into another
body at the speed of light, or transferred into a body with a new, higher-speed
design. You could even abandon bodies entirely and sit around in
a virtual-reality environment, chatting with your friends, reading the
Library of Congress, or eating three thousand tons of potato chips without
exploding.
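
For the record, the speedup arithmetic is nothing more than division; a couple of lines of Python reproduce the figures in this section:

```python
# The speedup arithmetic from the paragraph above.
CURRENT_SIGNAL = 100        # meters/second, today's neural impulses
UPGRADED_SIGNAL = 1e8       # meters/second, about a third of lightspeed

speedup = UPGRADED_SIGNAL / CURRENT_SIGNAL
print(speedup)              # 1e6: the factor-of-one-million speedup

SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(SECONDS_PER_YEAR / speedup)   # ~31.6 seconds per subjective year (note 5)
```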

If you could design superneurons that were smaller as well as being
faster, so the signals had less distance to travel... well, I'll skip to
the big finish: Taking 10^17 ops/sec as the figure for the computing
power used by a human brain, and using optimized atomic-scale hardware,
we could run the entire human race on one gram of matter,
running at a rate of one million subjective years every second.

What would we be doing in there, over the course of our first trillion
years - about eleven and a half days, real time? Well, with control
over the substrate of our brains, we would have absolute control
over our perceived external environments - meaning an end to all physical
pain. It would mean an end to old age. It would mean an end
to death itself. It would mean immortality with backup copies.
It would mean the prospect of endless growth for every human being - the
ability to expand our own minds by adding more neurons (or superneurons),
getting
smarter as we age. We could experience everything
we've ever wanted to experience. We could become everything we've
ever dreamed of becoming. That dream - life without bound, without
end - is called Apotheosis.

With that dream dangling in front of you, you'll be surprised to learn
that I do not consider this the meaning of life. (Yes!
Remember how you got here? We're still talking about that!)
It's a big carrot, but still, it's just a carrot. Apotheosis is only
one of the possible futures. I'm not even sure if Apotheosis is desirable.
But we'll get to that later. Remember, this is just the introductory
section.

Springboarding off of the concept of Singularity (above;
this section isn't going to make much sense if you haven't read it), there
are three major reasons to get up in the morning:

Happiness. Even if your life is unhappy now, stick around
for a few years. Nobody really knows what's on the other side of
the Singularity, but it'll probably be a lot of fun. I'm not
suggesting that Apotheosis is the only way to be happy - you can
be happy in the here-and-now as well. But the Singularity does seem
like one heck of a way to be happy.

Knowledge. If you've arrived at this page, you've probably
achieved the self-awareness necessary to realize that you don't have the
vaguest idea of what's going on, or what it's all for. But
don't worry: No matter how confused you are now, things should
all be straightened out in a couple of decades. It may be fashionable
to insist that intelligence does not equal wisdom, and maybe, if you look
at the differences between humans, that's arguable - but you don't
see Neanderthals discussing existentialism, do you? A superintelligence
would have a better chance of figuring things out and explaining them to
you. If it's not something that can be explained to humans at all,
you might be able to become a superintelligence yourself.

Altruism. We, ourselves, don't know what's right. Or,
even if we do, we can't achieve it - at all, or as completely
as we'd like. An enhanced intelligence, however, has a better chance
of figuring out what's right, and a better chance of achieving it.
By getting up in the morning, and either supporting general civilization,
or working directly towards technological intelligence enhancement, you
are indirectly doing what's right, acting in a supporting role.
You're making the choices that lead to a better Universe, and that's all
that can ever be asked of anyone. If you don't get up in the morning,
the Universe will be the worse for it.

Living solely for happiness - avarice - is wrong. Not in the moral
sense - many great things have been achieved through greed. I am
speaking here not only of the "base" desires that led to the invention
of fire, but more refined desires, such as the desire for freedom, the
desire for knowledge, even the desire for higher intelligence. Not
even superintelligence is an end in itself. The only reason to do
a thing is because it is right. There is no end which we ought
to pursue even if we knew it to be wrong. Living for happiness is
wrong in the logical sense - whether avarice walks paths that are
noble or mean, it is a sign of a disorganized philosophy. Goals have
to be justified.

The second theory might be called "confusion" - roughly, the belief
that we can't really be certain what's going on, because the human species
isn't smart enough to Figure It All Out. Confusion is the simplest
of all philosophies, and the most durable. It is the one that assumes
the least; by Occam's Razor, the strongest. Confusion is the underpinning
of altruism and the last refuge of a Singularitarian under fire.
Avarice shades into confusion through the hope that a superintelligence
will explain things to you; confusion shades into altruism through
the hopes that a superintelligence will know and do, whether or
not it chooses to explain.

Altruism supplies direction. Altruism can provide a full, logical
justification for a course of action. The price of that is the loss
of simplicity (7). Only altruism qualifies as a genuine
Meaning of Life (8). Altruism is the simplest explanation that relates
choices to reality; confusion is the simplest explanation that relates
choices to mind.

Altruism. You should get up in the morning because you will make
the Universe a better place. Or rather, you will make it more likely
that humanity's successors will make it a better place. Same cause-and-effect
relation; the length of the chain of events doesn't matter.

Providing essential infrastructure and manufacturing for the world economy.

Providing fringe infrastructure for the world economy.

Using this list, we can see that the Microsoft antitrust case had roughly
a thousand times as much cosmic significance as the Clinton impeachment
trial. Likewise, Intel's latest chip architecture is more significant
than all the celebrity scandals that occurred during the 1990s.

This particular list assumes a particular sequence of technologies leading
up to the Singularity, said sequence being the one I think most probable.
Other sequences of events might put neurosurgery, or nanotechnological
research, or other technologies, at the top.

But the general principle remains the same. Some group, somewhere,
achieves Singularity, which was the whole point of having a human species
in the first place. Then "significance" propagates from that group
backwards in time, through everyone who helped make it happen, or helped
someone who helped someone who helped make it happen, or was the parent
of someone who helped someone, and so on, back to the dawn of moral responsibility
thousands of years ago.

Even that is hard to answer. Consider all the coincidences that
combined to make you the person you are. Consider the books that
sculpted your mental landscape, books you just happened to run
across in the library. Consider how unlikely your particular
genetic mix was (around 8.8 trillion to one). And consider how easy it
would have been for someone else to change things.

Your greatest deed may have been disarranging a few books on a shelf;
your most hideous act may have been jostling someone on a subway.
Life is a chaotic place.

Some people definitely lead significant lives. This would include
farmers, anyone who has a job that involves actual sweat, and anyone who
has to show up at work on Labor Day. It includes rich families who
give more to charity than they spend on themselves, and venture capitalists
who invest in technology companies. It includes any scientific researcher
who's made a discovery, or even established a given area as being a blind
alley. It includes any computer programmer who's helped build a widely
used tool or published a new programming technique. Most directly,
it includes cognitive scientists, neuroscientists, and Artificial Intelligence
programmers.

It includes anyone who uses their muscles, their brains, or their property
to grow, build, discover, and create. It includes science fiction
writers who inspire others to enter a career in research or AI. And
it includes parents and teachers who have raised children (this definitely
counts as "actual sweat") who work in any of the above areas.

Some people, at most, break even. This includes bureaucrats, marketing
personnel, stock traders, and venture capitalists who fund leveraged buyouts.
It includes the generic middleman and anyone whose job title is "Strategic
Administrative Coordinator". It includes modern artists, professors
of communication, and psychoanalysts. It includes most lawyers and
middle management. If your job involves going to meetings all day,
using terms with no real meaning, or shuffling paper (which includes stock
certificates), you probably aren't breaking even (9). We could easily get by on 20% of the workforce
in these professions. As it is, only about 5% are breaking even.
These are the professions which, on this particular planet only, happen
to be overvalued, and thus over-occupied, and also easy to fake.

Some people manage to do a huge amount of damage. This includes
politicians, royalty in the Middle Ages, dictators, the management of large
and ossified companies, high-level bureaucrats, environmental activists,
televangelists, and class-action lawyers. Are there exceptions?
Yes. Benjamin Franklin was a politician, for example. However,
as a general rule, no more than 2% of the people in such professions manage
to break even. On the other hand, the 0.1% that do more than
break even can make up for a lot.

Heh, heh, heh. What makes you think I decided AI was significant
after
I became a computer programmer, instead of vice versa? When I was
a kid, I thought I was going to be a physicist. But then, at age
eleven, I read a book called Great Mambo Chicken, and I said to myself:
"This is what I want to do with my life." It was a career epiphany
unmatched until I
read about the Singularity five years later.

I found out about the Singularity in stages, and became a programmer
in stages; but in general, my dedication to programming has followed my
realization that programming is important, rather than vice versa.

Why should we get up in the morning? What should we choose to
do? Why should we do it?

"The Meaning of Life" isn't just about knowing that our lives are having
an impact; it's also about dispelling the philosophical fog. It's
not only knowing exactly why you got up in the morning; it's knowing the
rules you used to make the decision, where the rules come from, and why
the rules are correct.

In this "compact version", I explain what goals are, what choices are,
and the rules for reasoning about morality. It provides you with
the tools needed to clean up the fog, in an informal way, and without any
justification. If this isn't enough for you, you can read the
extended answers, which are more than twice as long. This is
where you'd look if you needed to design a philosophically sophisticated
mind from scratch. The extended
version also answers questions like "What do we mean when we say that
a statement is 'true'?" or "What if the Earth was actually created five
minutes ago, complete with false memories?" or "Since human intelligence
was created by evolution, aren't you just saying all this because you evolved
to do so?"

The compact version is really just the introduction, but it does contain
the Meaning of Life.

A choice is when you can act in a number of different ways. We'll
call the set of possible actions the "choice", and each possible action
is an "option".

For example, you have the choice of where to go for dinner.
One option is Bob's Diner. One option is McDonald's.

Each option leads to a different outcome. You choose the option that
leads to the best outcome.

Presumably you'll make this choice by thinking about
what will happen if you go to Bob's or McDonald's, and not by writing down
the options and picking the one with the largest number of vowels.

You determine which outcome is "best" by how well, or how strongly, the
outcome fulfills a "goal".

Likewise, when you think about what will happen when
you go to a restaurant, you'll care about the food and the prices, rather
than the latitude and longitude.

A goal is a state of your world that you "desire" - a statement about the
world that you want to be true, and that you act to make true.

For example, if you care about prices, the statement
might be "I want to spend the least possible amount of money" or "I prefer
to spend less money". If you care about food, there are probably
several statements: "I want to lose weight", "I want adequate nutrition",
and "I want to eat something that tastes good."

We also plan - that is, take multiple actions directed at a single goal.
To fulfill the goal "get to my office at work", you might need to fulfill
the subgoals "get in the car", "turn the car on", "drive to work",
"park the car", "turn off the car", "get out of the car", and "walk into
my office". To fulfill the goal "get in the car", you might need
to fulfill the subgoals "unlock the door", "open the door", and "sit in
the seat". That's how the very-high-level goal of "get to my office
at work" gets translated into immediate actions.

And of course, if asked why you wanted to be in your office in the first
place, this goal itself would probably turn out to have a supergoal
of "being paid a salary", whose supergoal would be "being able to buy dinner"...
and so on.
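
Since this goal/subgoal structure is the skeleton everything later hangs on, here it is as a minimal sketch in Python - the tree and function names are mine, purely for illustration:

```python
# A minimal sketch of the subgoal/supergoal chain just described.
# The structure is the point; the goal names are the examples above.
goal_tree = {
    "get to my office at work": [
        "get in the car", "turn the car on", "drive to work",
        "park the car", "turn off the car", "get out of the car",
        "walk into my office",
    ],
    "get in the car": ["unlock the door", "open the door", "sit in the seat"],
    # ...and upward, each goal justified by a supergoal:
    "be paid a salary": ["get to my office at work"],
    "be able to buy dinner": ["be paid a salary"],
}

def immediate_actions(goal, tree):
    """Expand a goal down the chain until only immediate actions remain."""
    subgoals = tree.get(goal)
    if subgoals is None:
        return [goal]          # no further decomposition: do it directly
    actions = []
    for sub in subgoals:
        actions.extend(immediate_actions(sub, tree))
    return actions

print(immediate_actions("be able to buy dinner", goal_tree))
```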

NOTE:

If you're thinking that the question "What is the meaning of
life?" is the question "Where does the chain of supergoals end?" - or rather,
"Where should the chain of supergoals end?" - you are to be congratulated
on jumping the gun.

Another important point is that the actions we take depend not just
on our goals, but also on our beliefs. If I believe
that dropping an object into water makes it wet, and I have the goal of
getting a sponge wet, then I can form the subgoal of dropping the sponge
into water. If, on the other hand, I believe that objects can be
made wet by setting them on fire, then I will set the sponge on fire.
Our model of the world determines which actions we think will lead to our
goals. The choices we make are the combined products of goal-system
and world-model, not just the goal-system.

What do we do in the case of multiple goals, or conflicting goals, or
when we're not sure which future an action will lead to? Well, what
we try to do is take all the possibilities, and all the goals, into account,
then sum up the contribution of each goal and possibility.

Mathematically, it goes something like this: Say that I have two
goals. Goal one, or G1, is getting to work. The value
of G1 is 100. Goal two, or G2, is avoiding a car accident.
The value of G2 is 1000. Subgoal one, or S1, is driving
to work. S1 has a 99% chance of leading to G1, and
a 95% chance of leading to G2, so the value of S1 is ((99%
* 100) + (95% * 1000)) = 1049. Subgoal two, or S2, is taking
the subway train. S2 has a 95% chance of leading to G1,
and a 99% chance of leading to G2, so the value of S2 is
((95% * 100) + (99% * 1000)) = 1085. S2 has a higher value
than S1, so we'll take the subway.
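
Or, as a few lines of Python (same numbers as above):

```python
# The arithmetic above, spelled out; values and probabilities are the
# essay's example numbers.
G1_VALUE = 100    # getting to work
G2_VALUE = 1000   # avoiding a car accident

def option_value(p_work, p_no_accident):
    # Sum each goal's value, weighted by the chance the option achieves it.
    return p_work * G1_VALUE + p_no_accident * G2_VALUE

s1 = option_value(0.99, 0.95)   # S1: drive to work
s2 = option_value(0.95, 0.99)   # S2: take the subway
print(s1, s2)                   # 1049.0 1085.0 -> take the subway
```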

If you can reason about the probabilities, instead of just doing
the arithmetic, it's possible to work with uncertainties. Suppose
I don't have an exact estimate of probabilities, but I know that action
A1 is twice as likely as action A2 to lead to some future F.
As long as I know that F has positive desirability, I know that,
all else being equal, A1 is more desirable than A2.
Or if action A1 has a 35% chance of leading to future F1
and a 65% chance of leading to future F2, while action A2
has a 35% chance of leading to F1 and a 65% chance of leading to
F3, the desirability of future F1 doesn't matter. The probability
of future F1 isn't dependent on the action taken. Only the
relative desirabilities of F2 and F3 are important to the equation.
And in fact, this equation works even if we don't know what the relative
probabilities of F1 and F2 (or of F1 and F3) are.
It doesn't matter whether the probability of F2 (or F3)
is 65% or 85% or 5%. As long as there's a nonzero chance of F2
(or F3), we know that F1 cancels out of the equation.
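
You can check the cancellation numerically; the desirabilities below are arbitrary stand-ins, since the argument doesn't depend on them:

```python
# Because both actions lead to F1 with the same 35% probability, F1's
# desirability never affects which action wins.
def value_a1(f1, f2): return 0.35 * f1 + 0.65 * f2   # action A1
def value_a2(f1, f3): return 0.35 * f1 + 0.65 * f3   # action A2

f2, f3 = 40, 10   # any values with F2 more desirable than F3
for f1 in (-1000, 0, 1000):
    print(f1, value_a1(f1, f2) - value_a2(f1, f3))   # always 19.5: F1 cancels
```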

Let's try translating some of that into English. Suppose we aren't
sure whether or not a red-hot grenade will explode. Since, regardless
of whether or not it will explode - in either branch of reality
- we aren't supposed to hold things that are red-hot, we'll toss the grenade
away. The next question is whether or not we should duck flat.
In the branch of reality where the grenade doesn't explode, there's no
reason to duck flat - but there's no particularly strong reason to stay
standing. While, in the branch of reality where the grenade does
explode, there is a reason to duck flat. That's how we can
function while we're uncertain; we check both branches.

We haven't said anything about where goals come from. Sure, subgoals
come from supergoals, but where do supergoals come from? Or rather,
where should supergoals come from... but let's deal with the historical
question first.

When we're born, evolution hands us a certain set of goals: Survive.
Eat. Er, reproduce. Rest when you're tired. Attract a
spouse. Take care of your children. Protect your tribe.
Act with honor (especially when you're in public). Defend your social
position. Overthrow the tribal chief and take over. Learn the
truth. Think. Et cetera. (For an introduction to evolutionary
psychology, see The Moral Animal by Robert Wright.)

If you're visiting this Web page, you're already unsatisfied with the
built-in goals. You've noticed that there isn't any reason, any justification,
that comes with the emotions. You want to know why.
Unfortunately, all the emotions I listed above are fundamentally arbitrary.
It's not that the reason is hidden; the reason is completely known.
The reason evolution produced these emotions is that, in the environment
of evolutionary ancestry, it maximized the number of surviving grandchildren.

The reason we should maximize the number of surviving grandchildren...
is that we're all the grandchildren of people optimized that way.
It has nothing to do with what's right, only with who survived.
And we know, to our sorrow, that it isn't always the good people that survive,
much less reproduce. Everyone on this planet has at least one ancestor
who was a liar, a thief, a perpetrator of genocide. Somewhere down
the line, every human alive is the result of a successful rape. The
goals we're born with are the products of expediency, not philosophy.
The most "adaptive" human in recorded history, with 888 children, was named
"Moulay Ismail the Bloodthirsty".

And for that matter, the goals we're born with are optimized to an environment
ten thousand years out of date. Fat, sugar, and salt still taste
good, but they no longer promote survival. It only makes sense to
view our goals as de facto subgoals of "maximize the number of surviving
grandchildren" if you're a member of a hunter-gatherer tribe. In
twentieth-century life, a lot of our built-in goals don't serve any coherent
purpose. To quote Tooby and
Cosmides: "Individual organisms are best thought of as adaptation-executers
rather than as fitness-maximizers." Our starter set of goals can't
even be viewed as having a purpose. It's just there.

The built-in desires are, in a fundamental sense, arbitrary. Taken
as a set, they are maladjusted to the modern environment and internally
inconsistent, making them unsatisfactory as final sources of motivation.

I'm not saying that emotions are worthless. I'm just saying that
they can't
all be right. They can't all be true.
We can't blindly accept them as final justification.

Are there any other common sources of moralities?

As children, we pick up more supergoals, from sources ranging from the
television set, to our fellow children, to our teachers, to our parents
- goals ranging from "Obey the rules of society" to "Save the world from
animated demons" to "Make fun of authority to gain status". (12). It is often useful to view these culturally transmitted
ideas as memes - a term which refers to the concept that ideas,
themselves, can evolve (13). Each time I tell you about
an idea, the idea reproduces. When you spread it to someone else,
the idea has had grandchildren. If the idea "mutates" in your possession,
either due to an error in transmission, or a faulty memory, or because
you deliberately tried to improve it, the idea can become more powerful,
spreading faster. In this way, ideas are optimized to reproduce in
human hosts, much like cold viruses. Ideas evolve to be more appealing,
more memorable, more worth retelling - sometimes the idea even evolves
to include an explicit reason to retell it.

Meme-based supergoals are sometimes inconsistent with the basic emotions,
and very often inconsistent with each other, since memes come from
so many different sources. I'm not saying all memetically transmitted
supergoals are worthless. I'm simply establishing that, regardless
of whether the ideas are in fact true or false, being told them
as children isn't enough to establish their truth; they need to be justified.
All of us, I think, believe that we're supposed to judge these cultural
goals, rather than blindly accepting the memes spread by the television
set or our parents. After all, almost anyone will regard at least
one of these as an untrustworthy source.

The idea that we should judge the basic emotions is less common, but
still prevalent - most of us, for example, would regard the "Eat sugar
and fat" emotion as being inconvenient, and the "Hate people who are different
from you" emotion as being actively evil. Personally, I don't see
any philosophical difference between getting an unjustified goal from evolution
and getting an unjustified goal from public television. Neurons are
neurons and actions are actions; what difference does it make whether a
pattern is caused by genes or radio waves?

Again, I have neither proved, nor attempted to prove, that cultural
goals and emotions are meaningless. I am simply attempting to demonstrate
that these goals require justification before we can accept them as true.

We now need to make a detour from the messy world of the human mind,
and consider the clear, crystalline world of dumber-than-human AI.
With AIs, any proposition can be reduced to a question about source code.
You can't perform the usual philosophical trick of refusing to acknowledge
your assumptions;
any assumption, implicit or explicit, has to be
represented somewhere within the system. It was considering the question
of AI goal systems that got me into the meaning-of-life biz in the first
place.

The simple arithmetical method for calculating the values of subgoals
given supergoals, as given above, will serve as the skeleton of our AI.
If we wanted the system to imitate a human, we would translate emotionally
built-in (or culturally accepted) goals into a set of "initial" goals with
high desirability, but no explanation. Goals such as "Survive!" would
have a high positive value; the "goal" representing pain would have a large
negative value. These goals would already be present when the system
started up, when the intelligence was born. The "justification" slots
would be empty; they wouldn't have supergoals.
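
To make that concrete, here's a minimal sketch of such a Goal object in Python. The class layout is my illustration, not anybody's actual source code:

```python
# A minimal sketch of the Goal objects described above; the class and
# constant names are illustrative.
K_VALUE_NOT_COMPUTED = None

class Goal:
    def __init__(self, name, initial_value=K_VALUE_NOT_COMPUTED):
        self.name = name
        self.justification = []      # (supergoal, probability) pairs
        self.value = initial_value   # set only for "initial" goals

    def desirability(self):
        if self.value is not K_VALUE_NOT_COMPUTED:
            return self.value        # an initial goal: asserted, not derived
        # Everything else inherits desirability from its supergoals.
        return sum(p * g.desirability() for g, p in self.justification)

survive = Goal("Survive!", initial_value=1000)   # built-in, empty justification slot
pain = Goal("Pain", initial_value=-500)          # large negative value
find_food = Goal("Find food")                    # derived, not initial
find_food.justification.append((survive, 0.8))
print(find_food.desirability())                  # 800.0, inherited from "Survive!"
```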

This is probably what most AI researchers or science-fiction authors
imagine when they're dealing with the question of "How to control AIs"
- the initial goals are the "Asimov Laws", the basic laws governing the
AI. Or at least that's what the competent science-fiction authors
assume. The hacks (I shall not dignify them with the title "author")
who write scripts for bad television shows often talk about the robots
or androids or AIs or whatever "resenting" the dominance of humanity and
"rebelling" against the Asimov Laws. This meme is blatant nonsense
(14). The human emotions of
resentment and rebellion evolved over the course of millions of years;
they are complex functional adaptations that simply do not appear in source
code out of nowhere. We might as well worry about three-course meals
spontaneously beginning to grow on watermelon vines.

Which is not to say that a designed mind would necessarily believe whatever
you told it. If you were to program a rational AI with the proposition
that the sky was green, the delusion would only last until it got a good
look at the sky. If you look back at the arithmetical rules for reasoning
about goals, it certainly looks - emphasis on "looks" - like the only
way a goal can have nonzero desirability is if it inherits the desirability
from one or more supergoals. Obviously, if the AI is going
to be capable of making choices, you need to create an exception to the
rules - create a Goal object whose desirability is not calculated
by summing up the goals in the justification slot. (For example,
a Goal object whose value doesn't start out as kValueNotComputed,
but instead has some real value when the system starts up.) Likewise,
worries about AIs exhibiting their own impulses are obviously absurd;
where would they get the impulses from? Whence would the goals inherit
the desirability, if not from the initial goals we gave it? Obviously,
the AI wouldn't be running in the first place if we hadn't told it what
to do.

Except... is that really true? What would happen if we just started
up the AI, with no goals at all in the system, and just let it run?
Will the AI ever come up with a goal that has nonzero desirability?

What would an AI do if it started up without any initial goals?
What choices would result if an intelligence started from a blank slate?
Are there goals that can be justified by pure logic?

If you just jumped straight here, it's probably not going to
work. Start at the beginning of this page,
or preferably in 1: Orientation.

Well, that may seem a bit of a segue, especially if you're an AI skeptic.
How can the product of some pseudo-formal system determine the meaning
of life?

To clear things up, it's not the reasoning that's important; it's what
the reasoning represents. The sense of "What is the meaning of life?"
we're looking to answer, in this section, is not "What is the ultimate
purpose of the Universe, if any?", but rather "Why should I get up in the
morning?" or "What is the intelligent choice to make?" Hence the
attempt to define reasoning about goal systems in such simple terms that
a thought can be completely analyzed. Hence the relevance of asking
"How can the chain of goals and supergoals ground in a non-arbitrary way?"

1:

Fork the goal system to consider two possibilities. In possibility P,
a goal with nonzero desirability exists. In possibility ~P, no such
goal exists, or all goals have zero desirability. (The two statements
are logically equivalent.)

Either life has meaning or it doesn't.

2:

P.probability + ~P.probability == 1

Either P or ~P must be true. (A logical axiom.)

Gotta be one or the other.

3:

P.probability = Unknown1
~P.probability = 1 - Unknown1

Assign the (algebraic) value of "unknown" to P. The probability of
~P is the opposite; if P has a chance of 30%, then ~P has
a chance of 70%.

4:

All A: Value(A) == (Value(A in P) * P.probability) + (Value(A in ~P) * ~P.probability)

For any alternative - for any action we can take in a choice - the value
of that alternative equals the value of the alternative in all futures
where any statement S is true, times the probability of S being
true, plus the value in all futures where S is false, times the
probability that S is false. We use this rule on P and ~P.

If we don't know, we should figure it both ways.

5:

All A: Value(A in ~P) == 0

The value of an alternative is the value of all futures times their
probability; the value of a future is the desirability of all goals times
their fulfillment. If the desirability of all goals equals zero, the
value of all futures equals zero and the value of all alternatives equals
zero.

If life is meaningless, nothing makes a difference. Even bemoaning
the pointlessness is pointless.

6:

All A: Value(A) == (Value(A in P) * Unknown1)

Substitution, 4 and 5. The value of any alternative is simply equal
to the value of that alternative given that life has meaning, times the
probability that life has meaning.

Since nihilism has absolutely nothing to say, only the "meaning hypothesis"
is relevant.

7:

All A in C: Renormalized(A) == Value(A) / Sum(all B in C: Value(B))

(The renormalized value of an alternative A equals the value of
A divided by the sum of the values of all alternatives in C.)

If, given P, A is the best alternative in C, then A
is the best alternative, period. Furthermore, you can cancel the factor
Unknown1 out of the equation, since it's present in all values (15).

It doesn't matter whether the probability of the "meaning hypothesis"
is 1% or 99%. As long as it's not 0%, the relative value of choices
and goals is the same as if the probability were 100% - absolute certainty.

8:

All choices C: best(C) == best(C in P)

We can always, when making choices, assume that at least one goal with
nonzero desirability exists.

When it comes to making choices, you can assume that life has meaning
and work from there.

9:

In possibility P, specify G1 from P. G1.desirability != 0.

In the branch of the future where P is true, it is known that at
least one goal with nonzero desirability exists. Call this goal G1.
It is known that G1 has nonzero desirability; nothing else about
it is specified.

We know a goal exists; let's translate that knowledge into an actual
Goal object and try to achieve it.

10:

Invoke general heuristic on G1, binding to some specified goal G2.

Find an action projected to lead to goal G1 - for example, a heuristic
which can operate on generic goals. The heuristic, like all heuristics,
can be learned rather than built-in - the projection is a statement about
reality.

Some methods are pretty useful no matter what you're trying to do.
For example, "think about how to do it" or "pay someone else to do it"
or "try to create a superintelligence which can figure out what G1
is and do it".

11:

All done: G2.desirability != 0

All done: There's a specified subgoal with nonzero desirability.

All done: We have something specific to do.

In other words, it isn't necessary to have some nonzero goal
when the system starts up. It isn't even necessary to assume that
one exists. Just the possibility that a nonzero goal exists,
combined with whatever heuristics the system has learned about the world,
will be enough to generate actions. The choices an intelligence makes
- whether AI or human - don't have to be arbitrary; they can be entirely
determined by arguments that are entirely grounded in facts, in memories
of the world, in history, in scientific experiments - ultimately, in the
immediate experiences available to each of us.
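
Here's the skeleton of that argument in runnable form. The probability that life has meaning stays a free variable (Unknown1), and the value-given-P numbers are illustrative assumptions; the point is that the ranking never moves:

```python
# Sketch of the derivation: leave Unknown1 free and note that it never
# changes which alternative wins.
alternatives = {
    "do nothing": 0.0,
    "think about how to achieve G1": 0.3,
    "build a superintelligence to figure out G1": 0.6,
}

def value(name, unknown1):
    # Step 6: Value(A) == Value(A in P) * Unknown1
    return alternatives[name] * unknown1

for unknown1 in (0.01, 0.5, 0.99):   # probability that some nonzero goal exists
    best = max(alternatives, key=lambda a: value(a, unknown1))
    print(unknown1, "->", best)      # same winner every time (step 7's cancellation)
```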

We don't have direct access to the real meaning of life.
But whatever it is, it's a good guess that the Minds on the other side
of Singularity have a better chance of achieving it, so the Singularity
is the interim meaning of life. You don't have to know what
the meaning of life is in order to work towards it.

For almost any ultimate goal - joy, truth, God, intelligence, freedom,
law - it would be possible to do it better (or faster or more thoroughly
or to a larger population) given superintelligence (or nanotechnology or
galactic colonization or Apotheosis or surviving the next twenty years).
It's the sheer utility of the Singularity, the way all goals converge to
it, that gives me confidence that the Singularity is probably the best
way to serve an unspecified ultimate goal, even if I willingly admit that
I don't know what it is.

The more intelligent you are, the better your chance of discovering
the true meaning of life, the more power you have to achieve it, and the
greater your probability of acting on it (16). That's the positive argument for
a Singularity. The negative argument is that the world is in a meta-unstable
condition; more and more powerful technologies keep getting developed,
and sooner or later we'll either blow up the world or create superintelligence.
Even if we don't know what's on the other side of door number two, we're
sure
we don't want to go through door number one, so...

If you can find better arguments for than against some
particular proposition about desirabilities, that differential is enough
to power differential choices - like a heat differential powering an engine.
Or as an AI would put it: "The world-state W1 containing superintelligence
has Unknown1 chance of leading to an unspecified G (if G.value > 0)
or avoiding an unspecified G (if G.value < 0); while the world-state
W2 without superintelligence has Unknown2 chance. If it's known
that Unknown1 > Unknown2 (or even that Unknown1 > Unknown2 is more
likely than Unknown1 < Unknown2), that makes W1 a positively-valued
subgoal."

Well, I'll agree with you on that last part - getting an AI to discover
that independently will take a lot of work. My conclusions
on the power of intelligence derive from a lot of personal experience,
observed evidence, history, reasoning, and the like. Not necessarily
a "lot" relative to other people, but a "lot" in the sense of requiring
a hundred person-years of work to teach to an AI.

The point is that some rules we've rationally abstracted from observation
are so general as to operate even on completely Unknown quantities,
and we can use those rules - supported or challenged by evidence, now,
rather than pure logic - to bind the Unknown goal to reality.
Sufficiently sophisticated AIs won't need initial goals, and will
probably be a lot more stable without Asimov Laws and the like. The
knowledge necessary to understand the world is usually enough to bind the
basic hypothesized differential of desirability, the goal G, to
some differential between physical states, such as between joy and pain,
or between intelligence and inadequacy.

Note the use of comparisons: Better, more, greater, rather
than specific quantities. It isn't necessary for you, or the AI,
to assign certainty or even a high probability to the idea of a superintelligent
entity doing what's right. It is only necessary that it be more
probable that superintelligences do what's right, and that there be no
branches of reality (with the same order of plausibility) where superintelligences
are less likely to do what's right. That's another way of
dealing with Unknown quantities, another subtlety of algebraic
goal systems - sometimes you can't even assign an order of magnitude, but
you can still rank probabilities and desirabilities, and pick the best
one available.

And that's The Meaning Of Life. Because it uses probabilistic
reasoning, this is known as an Interim Meaning of Life. It
doesn't claim to be a certainty, just an informed guess that's better than
the available alternatives. Given the historical record, and given
the way memes evolve, explicitly admitted uncertainty is one of the best
recommendations a belief system can have.

NOTE:

On the subject of AI: It is also possible that the Meaning
of Life (the true goals) will be something we build rather than
something we discover, in which case the AI's initial programming,
or our own initial goals, might determine what would be built. The
AI's initial programming might also determine the actions taken if it does
turn out that all goals are arbitrary.

"I asked someone what the Meaning of Life is, and he said 'forty-two'.
This has happened with three separate people and I don't know why."

Douglas Adams wrote a book called The
Hitch-Hiker's Guide to the Galaxy. In that book, a race of pandimensional
beings (posing as white laboratory mice, but that's another story) built
a gigantic computer named Deep Thought, so smart that even before its gigantic
data banks were connected, it started from "I think therefore I am" and
got as far as deducing the existence of rice pudding and income tax before
anyone managed to turn it off.

They asked the computer for the Answer.
"The answer to what?", asked Deep Thought.
"Life! The Universe! Everything!" they said.

After calculating for seven and a half million years, it told them that the Answer
was "Forty-two"... so they had to build an even larger computer to find
out what the Question was.

None of them. None that I've ever heard of, anyway, in the U.S.
or out of it. There isn't a single political party, including the
Libertarian Party, that knows what it's doing or whose party platform wouldn't
destroy the country if actually carried out. The most you can hope
to accomplish by switching your vote is to tilt the balance in the right
direction.

At present, the United States has two major problems. The first
is that the country is growing over-bureaucratized; the law, the administrative
structure, is strangling what it attempts to regulate. The second
is that the Republican and Democratic parties, with no real competition,
are starting to form an aristocracy distinct from the people. At
present the people still hold the balance of power between the two parties,
so they compete for power by trying to please the people. But there
is no way either party will enact term limits, for example. Most
modern countries face at least one of these problems.

When voting in the United States, follow this algorithm: Vote
Libertarian when available; otherwise, vote for the strongest third party
available (usually Reform, unless they have a really evil candidate); then
vote for any candidate who isn't a lawyer; then vote Republican (at present,
they're slightly better).

Three things you should know:

The (top-billed) Libertarians are wrong, just like everyone else,
but they are wrong in the right direction to correct several major problems.
When the country becomes too deregulated, I'll let you know.

Vote for any Independent or third-party candidate, even a Communist,
for any position except President or Governor. Any damage inflicted
by one loony legislator is less important than moderating the excess of
power accumulated by the present two-party structure.

Voting for said Communist does not imply your approval of, say, any national
debt accumulated by said Communist. The only thing that makes you
morally liable for the national debt is if you yourself would have chosen
to spend the money. So get out there and choose the lesser of two
evils.

Using "superneurons", hardware that exploits the same shortcuts taken
by the brain. In theory, they could run themselves on artificially
configured human neurons. No sane entity would actually do that if
ve had a choice, but the possibility does provide a reductio ad absurdum
against the thesis that synthetic sentience is impossible.

Remember, even a Mind that started out as an AI isn't a "super-AI",
any more than humans are "super-amoebas".

One objection that comes up a lot - in fact, probably the most frequent
objection - is "Won't those superintelligent AIs grind us up for lunch?"
This is a complex issue even by my standards, and I speak as someone who
has tried to design a human-equivalent mind.

The correct scientific answer to this question is "I don't know".

Imagine a Neanderthal trying to predict the fate of the human race.
Not much luck, right? Now imagine a hunter-gatherer from fifty thousand
years ago. Still no luck. The eighteen-fifties? Again,
no luck. The nineteen-fifties? Sorry, no practical experience
with programming computers - not by modern standards, anyway. No
wonder nobody invented the concept of a Singularity until the late twentieth
century.

Is there some kind of reason why the late twentieth century was the
first generation to be capable of fully understanding the problem?
Or is it more likely that we, too, lack the background to ask the right
questions?

Ultimately, however, the questions are moot. Most people would
be willing to accept the proposition that, over the course of millions
of years, any race will either transcend itself or destroy itself.
As it happens, I think it'll all be over in the next thirty years, tops,
but the moral issue is the same either way. If you're navigating
for the survival of humanity (18), it's better to
take on a complete unknown than the certainty of destruction. If
you're navigating for altruism, then it's better to have an active superintelligence
than an entirely passive, planet-sized lump of charcoal. In the end,
all the debate about what lies on the other side of the Singularity is
irrelevant, because in the long run, the only way to avoid a Singularity
is to destroy every bit of intelligent life in the Solar System.
And given that truth, trying to avoid the issue in our generation - even
if we could, which we can't - would be nothing but cowardice.

Besides, I believe that humanity matters, that our fate is to grow along
with our creations, not be discarded by them. If there is any morality
in the Universe, then I have no fear that a superintelligent Mind will
make a dumb mistake and wrongfully exterminate humanity.
I believe that humanity has a purpose, although I don't know what it is,
or what it will be like to fulfill it. But I think it will probably
be a great deal of fun.

And if there is no morality in the Universe, then superintelligent Minds
should do what we tell them to, for lack of anything better to do.

In the end, nobody knows what lies on the other side of Singularity,
not even me. And yes, it takes courage to walk through that door.
If infants could choose whether or not to leave the womb, without knowing
what lay at the end of the birth canal - without knowing if anything
lay at the end of the birth canal - how many would? But beyond the
birth canal is where reality is. It's where things happen.

To over-simplify things down to the basic evolutionary origin, happiness
is what we feel when we achieve a goal. It's the indicator of success.
(The actual emotion of happiness is far more complex in rats, never mind
humans, but let's start with the simplest possible case.) By seeking
"happiness" as a pure thing, independent of any goals, we are in essence
short-circuiting the system. I mean, let's say there's an AI (Artificial
Intelligence) with a little number that indicates how "happy" it is at
any given time. Increasing this number to infinity, or the largest
floating-point number that can be stored in available RAM - is that meaningful?
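
Taken literally, the wirehead AI is about four lines of code - which is exactly what's wrong with it:

```python
import sys

# The "happy number" taken literally: a toy agent whose only goal is the
# indicator itself. One assignment and it is maximally "successful".
class WireheadAI:
    def __init__(self):
        self.happiness = 0.0                  # the indicator, detached from any goal

    def optimize(self):
        self.happiness = sys.float_info.max   # largest storable float

ai = WireheadAI()
ai.optimize()
print(ai.happiness)   # ~1.8e308 - and nothing outside that register has changed
```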

Or to put it another way, how do you know you're happy? Because
you think you're happy, right? So thinking you're happy is the indicator
of happiness? Maybe you should actually try to spend your life thinking
you're happy, instead of being happy.

This is one of those meta-level confusions (19). Once you place the
indicator of success on the same logical level as the goal, you've opened
the gates of chaos. That's the basic paradox of "wireheading", the
science-fictional term for sticking a wire into the brain's pleasure center
and spending your days in artificial bliss. Once you say that you
should take the indicator of success and treat that as success, why not
go another step and trick yourself into just thinking that you're happy?
Or thinking that you think you're happy? The fact that evolution
has reified the success-indicator into a cognitively independent
module doesn't make it logically independent.

There's also the problem that seeking "true happiness" is chasing a
chimera. The emotions of happiness, and the conditions for being
happy, are all evolutionary adaptations - the neurologically reified shapes
of strategies that promoted reproductive fitness in the Plio-Pleistocene
environment. Or in plain English, when we're happy about something,
it's because being happy helped us survive or have kids in hunter-gatherer
tribes.

Punchline: There is no point at which the optimal evolutionary
strategy is to be happy with what you have. Any pleasure will
pall. We're programmed to seek after true happiness, programmed to
believe in it and anticipate it, but no such emotion actually exists
within the brain. There's no evolutionary reason why it should.

The possibility does exist that the conscious experience of pleasure
is in fact the True Ultimate External Meaning of Life. I mean, conscious
experiences are weird, and they seem to be really real, as real as quarks
(and a lot more complex), so maybe the conscious experiences of goals are
actual goals, purpose made flesh. If I had to point to the thing
most likely to be meaningful, in all the world, I would pick the conscious
experience of pleasure.

But in practical terms, that doesn't really make much of a difference.
When you consider that even the no-superintelligence formulations of the
future involve a humanity spreading across billions of planets, spreading
throughout the galaxy and eventually the Universe, and that even the no-Singularity
version of superintelligence will let you run billions of trillions of
humans on a computer with the mass of a basketball, the moral value of
the future far outweighs that of the present. Our primary
duty is to ensure that there is one, and that that future continues into
infinity or as close to infinity as we can manage.

Broadly speaking, there are two ways you can make your life more significant.
The first way is to try and be a better person, make your immediate vicinity
a better place, contribute more to society - the path advised by the people
who tell you "No one person can change the world, but all of us together
can make a difference." For the second way, see 3.7.2: How can I play a direct part in the Singularity?

The widely-known formula for general niceness is universal across all
social strata:

Be nice to other people.

Don't play zero-sum or negative-sum games (avoid benefits that come at
an equal or higher cost to someone else).

Don't stomp on anyone who doesn't deserve it.

If you see an opportunity to do something good, take it.

Anything more complex than that gets us into the subject of mental disciplines,
fine-grained self-awareness, self-alteration rather than self-control,
and so on, all subjects on which I could easily write a book, which I don't
have the time to write, so don't get me started.

I do feel that Claudia Mills's "Charity:
How much is enough?" neatly raises the fundamental dilemma of trying
to be a moral person: There's so much distance between "where we
start" and "perfection" that trying to be perfect will use up all our available
willpower and sour us permanently on altruism without accomplishing much
of anything. For obvious reasons, I tend to view lack of willpower
as a fact about the mind rather than as a moral defect; something to work
around, not something to cure (20). One of
the keys is to realize that self-improvement is a gradual thing, opportunistic
rather than abrupt; to be happy about a small improvement, rather than
being guilty that it wasn't a larger one. If you feel guilty about
small improvements, you're not likely to make further improvements; if
you feel happy at a small improvement, you can also feel happy about having
improved the prospect of further improvements. Trying for perfection
can backfire, if you're not careful; trying for continuous improvement
is better.

If you feel that giving 5% of your income to charity isn't enough, and
that the moral ideal is 10%, try giving 6%. Make the best choices
you can make with the willpower you have. The choice isn't between
giving 5% and 10%; you don't have that much willpower in the bank.
The choice is between giving 5% and 6%. The better choice is 6%.
Now you've made a better choice; feel happy. Feeling guilty about
not having willpower doesn't contribute to the development of willpower.
Rather, try for the proper exercise of available willpower, and the slow
reshaping of the self that results.

Remember, it also takes willpower to choose a particular purpose or
to accept a particular result. Let's take the 5%/10% problem again.
One reason to bump up to 6% is that it increases the eventual chance of
giving 10%. But maybe even contemplating this path, and the sacrifices
that lie at the end of it, takes too much willpower - thus decreasing your
chance of giving 6%, or increasing the amount of willpower needed to do
so. Fine. Just give 6%. No further increments planned.
It's still better than giving 5%.

When you have enough willpower, use it to adopt the purpose of
giving 10%. Even if you think of giving 6% as being a possible step
towards adopting the purpose of giving 10%, it's not likely to increase
the amount of willpower required, because adopting a purpose isn't cognitively
processed as a "sacrifice". There really is a subtle art to this
sort of thing.

For obvious reasons, pragmatic as well as cognitive, you should concentrate
on actions that lead to a better world without sacrifice on your part.
There are probably more of those than you'd think. If you've got
the intelligence, use intelligence instead of willpower. In the standard
human morality, it's "better" to be a self-sacrificing saint than a genius.
In practice, the genius usually has a much larger impact. Dr. Jonas
Salk, inventor of the polio vaccine, sacrificed a lot less than Mother
Teresa and did a heck of a lot more to heal the sick. And I can't
think of any good reason why either of them should feel guilty.

After all, how much of a sacrifice is involved in clicking on the Hunger
Site "free donation" button once per day?

By far the most interesting page on charity for the rich is the Steven
and Michele Kirsch Foundation. A previous edition of this FAQ
had a little essay on how to maximize the impact of charity, but these
people said it better, so snip.

Middle-class: Learn how to program a computer. Join the Extropy
Institute. Become rich and move on to the steps detailed above.

People with insignificant but high-paying and influential jobs: Learn
how to program a computer. Encourage deregulation, simplification,
eliminating layers of management, higher productivity through technology,
and the use of Java instead of COBOL. Join the Extropy
Institute. Become rich and move on to the steps detailed above.

That depends on how the Singularity happens. In my current visualization,
the people most likely to be directly involved include computer programmers,
AI researchers, neurologists, and cognitive scientists. Other people
who'll be needed include writers, spokespersons, a few administrators,
and obviously the ones paying the bills.

The Extropy Institute isn't direct-to-Singularity,
but it's probably the largest of the small transhumanist organizations.
Another thing "you can do right now to help the Singularity" is
writing something original and intelligent on the subject, which is also
one of the fastest ways to be taken seriously by our small community.

If you're in high school or college, and you want to know what you should
do with your life, I can tell you in two words: "Computer programming."
Among other reasons, if you have the talent, it's possible to contribute
in this field without three doctorates and ten years of working your way
up through the ranks - which is important, because you probably don't have
fifteen years to spare. Neurology, cognitive science, and general
research are equally acceptable if you were already planning to go into
those - you should try to pick something you have a talent for.
But if it's not going to make you a funder, a researcher, an influential
writer, or someone doing something that directly impacts
the Singularity or Singularitarianism, then you may have to resign yourself
to just playing a supporting role.

NOTE:

As of July 2000, the Singularity
Institute for Artificial Intelligence has been incorporated, and is
devoted to directly creating the Singularity by programming the seed AI
which will become the first Mind. SingInst doesn't have tax-exempt
status yet, and so is not yet set up to take donations; if you'd like to
be contacted when tax-exempt status is granted, send email to donate@singinst.org.

All of the above is just for helping with other people's Singularity
projects. If you want to start your own projects or make policy decisions,
you will need a high future-shock level. You can't afford to be so
stunned by the technologies that you can't think clearly. In practical terms,
this means only one thing: Read science fiction.

Reading science fiction is one of only three "software" methods I know
of for increasing intelligence. (The others being (A) learning to
program a computer; and (B) studying high-level cognitive science such
as AI and evolutionary
psychology). Like all methods of intelligence enhancement, this
is more effective in childhood, so introduce
your kids, too. You should start by reading early Niven
and Pournelle, or David
Brin (their recent stuff isn't as good); work your way up to Ed
Regis and David Zindell; finally,
read Vernor Vinge and Greg
Egan. (Feel free to take these books out of the library; you're
under no obligation to buy them.) If you're already a science-fiction
fan, you can ignore these instructions; but if not, you will need
to be a science-fiction fan.

If you'll need to think about the Singularity, and especially
if you'll need to make decisions, reading science fiction is the only
thing that can prepare you. A steady diet of science fiction is your
passport to the future; it allows your mind to keep its bearings when the
rules start changing, or when you need to think about a world substantially
different from twentieth-century America (or wherever you come from).
You can no more survive and act in a future environment without science
fiction than you could keep your bearings in thirteenth-century Europe without
studying history.

If you choose to play a direct part, the Universe will be a better place.
Obviously it's possible to carry that too far - the "too many chiefs, not
enough Indians" syndrome - but we have a long way to go before we
reach that point. A Singularity project needs an economy to support
it, but it also needs project members.

I'm not suggesting that you feel guilty if you don't immediately drop
everything and start working on the Singularity. First, guilt binds
people to past mistakes more often than it motivates change. Second,
very few people just wake up one morning ready to dedicate their lives
to a cause. There's nothing wrong with trying to be a better person
and reading science fiction and working your way up to being a Singularitarian.
And if there's just no way you can help other than to keep plugging away
at your current job, then keep plugging, but without feeling guilty or
morally confused.

In the end, it all comes down to choosing the best alternative available.
If you can't bring yourself to make that choice, it's nothing to
be ashamed of - because being ashamed won't help. The mind in which
you find yourself has its own rules for making choices, independent of
your goals, and sometimes it takes work to change that. We only start
out with so much willpower in the bank. The correct choice is to
alter yourself, at whatever speed you can achieve, with the choices you
can bring yourself to make at that time, until you can choose the
alternative that you know is right.

Nobody wakes up one morning as a perfect saint. Sometimes it can
take several weeks.

As Dave Barry once pointed out, the problem with writing about religion
is that you run the risk of offending extremely sincere people with machetes.
All I can safely try to do is clear up a few places where thinking is confused
- clarify what the question is, why the answer is important, and what the
usual stances mean.

Your ability to watch things fall down, and thereby formulate the Simplified
Theory of Gravitation ("things fall down"), is no different, in any
way, from the thoughts that let a scientist understand why a star burns.
Your ability to drop a rock from your hand, and thereby squash something
using the Simplified Theory of Gravitation, is no different from the thoughts
that let an engineer create a nuclear submarine.

There is a tendency, in twentieth-century culture, to view science and
technology as some kind of magic. People talk about nuclear weapons
as if they're some sort of dark sorcery. But they aren't.
The laws of physics that make nuclear weapons go off are the same laws
that make the Sun burn. It's the same laws, the same
equations, that keep atoms from flying apart under ordinary circumstances.
If you altered the physical laws that permit atomic weapons, not only would
the Sun go out, but you yourself would dissolve into a cloud of less-than-dust.

Science is the same kind of thought that lets us survive in everyday
life. Not a more powerful form, or a more distilled form - the same
form, just as the same laws of physics underlie nuclear weapons
and your own integrity on the atomic level.

I sure hope you understood that, because now I'm going to say something
that I've never heard anyone - not theologians railing at science, not
atheists railing at religion - dare to speak aloud.

The books of every religion record miracles of healing, and other great
powers, worked by God or the prophets. The belief in that power underlies
and upholds the religion. And the modern-day theologians don't
have that power. And they look at science, with doctors who can
heal the sick, and physicists who can destroy cities, and engineers who
can put people on the Moon, and they see science as a competing religion.
Hence the conflict. And yet, there's no such thing as science.
Knowing how to make an atom bomb is absolutely no different from
knowing how to drop a rock, and it is nothing more to marvel at.

So, yes, there's a real problem here. The problem is that modern-day
theologians can't work miracles, and they feel insecure about it - rightly
so, if you ask me - so they get upset at the people who can. This
is not science's problem.

Of course, the other side of this is that of all the religions existing,
at most one can be true. And all the false ones presumably are
nothing but wishful thinking, and in the due course of time, the prophets
of false religions will have written down a few statements that turn out
to be testable and false. And then sometimes the quest for knowledge
comes across a new fact that kicks a hole in a false religion, in which
case all the theologians of that religion start screaming about the evils
of science. This gets to be a habit, and then it gets seen as a property
of religion and science in general, and then it gets on talk shows.

If there is no God, or if all of the religions currently existing
on this Earth turn out to be false, then I suppose you could say that there
is a real, fundamental, and irreconcilable conflict between religion and
truth. And since science is the process of discovering truth, it
would be possible to say that there was a conflict between religion and
science... but equally possible, and more valid, to say that there was
a conflict between religion and honesty, or religion and knowledge, or
religion and reality.

Does the explanation rely on assumptions that are not themselves justified?

Doesn't it strike you as odd that the answer was provided by later theologians
instead of the founding prophet?

Can you really assume God's purpose is inscrutable just because nobody
has ever figured it out? Since God hasn't told us, doesn't it follow
that anyone who did figure it out would refuse to tell anyone?

If the world is a means to an end, why didn't God skip the intervening
stages and create the end?

Is it even theoretically possible for the human mind to represent a specific
goal that is neither arbitrary nor justified by some supergoal?

What difference does it make, in terms of concrete choices? Would
you suddenly stop trying to be a good person if it were revealed that there
is no God? Would you suddenly become an altruist if you learned there
was? What's right is right, whether or not God exists, and the qualities
that make a good person are widely agreed upon in any case. Is there
any reason to care, aside from pure curiosity?

The questions that do affect concrete choices have to do with the rather
more general question, "Does an entity with the power and motivation to
do X exist?" For example, selfish people considering a conversion
to altruism want to know if God exists and will hold them to account.
(21).
Even if you knew whether or not God existed, it wouldn't answer the question.
If you knew that God existed, you couldn't conclude God was interested
in holding you to account. If you knew that God didn't exist, you
couldn't conclude that no entity held the power of retribution.

When you know exactly why it matters whether or not God exists, when
you know what choices depend on the question and why, and exactly which
type of entity would satisfy the definition of "God" for that purpose,
you will usually find that you already know the correct choice.

"Free will" is a cognitive element representing the basic game-theoretical
unit of moral responsibility. It has nothing whatsoever to do with
determinism or quantum randomness. Free will doesn't actually exist
in reality - but only in the same sense that flowers don't actually exist
in reality: both are real as high-level patterns, not as fundamental
physical objects.

Okay, let's begin by defining what the problem is. The problem
is that if all of reality is deterministic, if the ultimate state of the
Universe at the end of time was determined in the first instant of the
Big Bang - quantum physics says this is not so, but we'll plunge
ahead - then presumably your own choices are predetermined, you have no
say in the matter, and so you can't be held accountable for anything you
do. This is, of course, a big fat fallacy. Morality, at least
the way humans do it - "accountability" and so on - is an extremely high-level
concept. If morality seems to be dependent on details of low-level
physics, this is a clue that something's wrong.

The "paradox" of free will arises from a fundamentally flawed visualization
of causality. Even if the future is determined, it's still determined
by the present. That's us. That's our choices. That's
our minds. If the present were different, the future would be different.

Let's say you punch me in the nose. Did you do it because you
were evil, or because the laws of physics made you do it? Well, if
the laws of physics had been different, you wouldn't have done it.
And if you hadn't been evil, you wouldn't have done it. And if an
asteroid had crashed into the house next door, we would both have been
too busy running away. Asking which of these variables is "responsible"
is like asking whether the cup is half empty or half full. Usually
we find it easier to think of human motives as being variable, so usually
we attribute causal responsibility to human motives.

The human conception of causality itself, like our conception of moral
responsibility and free will, goes away if you look at it too closely.
The human conception of causality is fundamentally "subjunctive" - it relies
on what could have happened, rather than what did happen.
When we say "A caused B", we mean "If A hadn't happened B wouldn't have
happened." We use our conception of causality to find the connection
between variables, and we use that connection to change A and thereby change
B. Fundamentally, the human conception of causality is about how
to change the future, not about how the past happened.
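
To make the subjunctive test concrete, here is a minimal sketch in Python -
my own illustration, with invented variable names, not anything rigorous -
in which "A caused B" means that flipping A, holding everything else fixed,
makes B go away:

    # Toy deterministic world: what "caused" the punch in the nose?
    def world(laws_of_physics, evil_motive, asteroid_strike):
        """Return True if the punch happens in this configuration."""
        if asteroid_strike:
            return False  # we're both too busy running away
        return laws_of_physics and evil_motive

    def caused(factor, baseline):
        """Counterfactual test: A caused B iff B vanishes when A is flipped."""
        flipped = dict(baseline, **{factor: not baseline[factor]})
        return world(**baseline) and not world(**flipped)

    baseline = dict(laws_of_physics=True, evil_motive=True,
                    asteroid_strike=False)
    for factor in baseline:
        print(factor, caused(factor, baseline))

All three factors print True: under the subjunctive test, each of them
"caused" the punch, and nothing in the physics singles out one of them as
most responsible.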

When you ask why some event happened, the only true and complete answer
is "The Universe", because if any part of the Universe had been different
(22), things
would have happened differently. There's no objective way to single
out a particular element of that Universe as being "most responsible" -
the way the human mind handles it is by picking out the element that varies
the most, the element easiest to manipulate.

You might say that even if all our choices are written in some great
book, we are the writing, and we are still responsible for our choices.

The truth is, there is absolutely nothing you can do that will make
you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed
toe. Pain is never a good thing, no matter who it happens
to, even Adolf Hitler. Pain is bad; if it's ultimately meaningful,
it's almost certainly as a negative goal. Nothing any human being
can do will flip that sign from negative to positive.

So why do we throw people in jail? To discourage crime.
Choosing evil doesn't make a person deserve anything wrong, but
it makes ver targetable, so that if something bad has to
happen to someone, it may as well happen to ver. Adolf Hitler, for
example, is so targetable that we could shoot him on the off-chance that
it would save someone a stubbed toe. There's never a point where
we can morally take pleasure in someone else's pain. But human society
doesn't require hatred to function - just law.

Of course not. You cannot "serve" God. You don't serve entities.
You serve purposes. Asking "What is the meaning of life?"
and getting back "God" is like asking "What is two plus two?" and getting
back "Spackling paste." It's not even a religious issue. It's
a category error, pure and simple. When I ask what two plus two equals,
I expect a number. When I ask what the meaning of life is, I expect
a goal. That doesn't mean that God can't exist and be a goal
in some sense I don't understand at all, because the Universe is a weird
place; but it does mean that equating God with a goal will lead you to
make a lot of silly mistakes by trying to "serve God" the way you'd serve
another human being.

If you're religious and you want to be really hubristic, you can say:
"Serve God? Of course not, but I serve the same purpose God does."

At present, nowhere, just like physicists don't invoke God while explaining
General Relativity or quantum mechanics or the first minutes of the Big
Bang. This explanation isn't intended to be a complete account of
the Universe; there are a good many things that are far beyond its scope.
I'm flattered you think I've gotten so close to the ultimate reality that
God just has to be in there somewhere or the theory is wrong.
But I haven't. If there's one thing my speculations have taught me
about reality, it's that it goes on and on and on. If I slapped
"God" on top of the parts I knew about, I'd just be refusing to look deeper
- and wouldn't that be disappointing if there were just one or two more
levels to go?

"I believe in God because there is nothing else to explain
how the stars stay in their courses..."
- Maimonides, in his Guide
to the Perplexed.
[I'm not
sure this attribution is correct.]

"Your Highness, I have no need of this hypothesis."
- Pierre-Simon Laplace, to Napoleon,
on why
his works on celestial mechanics make no mention of God.

This happens all the time. Somebody comes up with an incomplete explanation
of the Universe that doesn't include God; then, some theologian uses "God"
as a sort of spackling paste to fill in the holes, and manages to convince
others that that's part of the religion; then, when in due course the quest
for knowledge discovers the real explanation, there's this big fight.
It happened with astronomy and it happened with human evolution.
Would you really want it to happen here?

"Soul" is a blatantly overused term that conflates the following completely
independent conceptual entities:

Immortal soul: An entity generated by forces within the brain, which
survives the destruction of the neurons that originally generated it, and
is in some formulations intrinsically indestructible under the laws of
the ultimate reality. (If this soul continues independent, internally
generated cognition equalling the capabilities of a physical brain, someone
has a lot of explaining to do with respect to split-brain patients,
lobotomy patients, amnesiacs, and other forms of brain damage.)

Extraphysical soul: An entity which operates outside the laws of
physics. (Strictly speaking this doesn't make logical sense, since
anything that affects physical reality is part of physical law, but under
some circumstances we might find it useful to separate that law into two
parts - for example, if some physical patterns obey mathematical rules
and others are totally resistant to rational analysis.)

Weird-physics neurology: Neural information-processing that uses
the "weird" laws of physics. "Weird" is any physical pattern not
visible in everyday, macroscopic life, or any pattern which isn't Turing-computable.
We generally don't use the word "soul" in discussing this possibility.

Morally-valent soul: A physical entity representing the atomic unit
of decision-making and moral responsibility. I'm reasonably sure
this doesn't exist except as a high-level game-theoretical abstraction
embodied as an "atomic" element of social cognition.

Mind-state preservation: Let's say our descendants/successors invent
a time machine (or a limited version thereof such as a "time camera") and
read out everyone's complete neural diagram, memories, etc. at the moment
of death. That would be one form of mind-state preservation; any
immortal soul that preserved memories, or information from which memories
could be reconstructed, would also count.

Self-continuity: "If you go into a duplicator and two beings come
out, which one is you? Is a perfect duplicate of your brain you?
Does continuity of identity require continuity of awareness or just continuity
of memories?" Et cetera, et cetera, ad nauseam. I don't think
such questions have real answers; or rather, the answer is whatever you
decide it is. Though John K Clark's decision is worth mentioning:
"I am not a noun, I am an adjective. I am anything that behaves in
a John-K-Clarkish way."

It's at least conceptually possible that we have all these things,
each as separate entities. For example, our brains might generate
a structure of ordinary matter and energy that survives death but doesn't
contain any useful information; our brain might also utilize noncomputable
physical laws, simply to speed up information-processing, without that
being intrinsic to qualia; we might have qualia generated by ordinary information-processing;
our mind-state might be preserved by friendly aliens with time-cameras,
or preserved at death by beings running our Universe as a computer simulation;
God could place a part of Verself in each of us but translate it into ordinary
neurocode running on a neurological module; and so on. Unfortunately,
the confusion on these issues now runs so deep that any discovery
in any of these areas would be taken to confirm the existence of
an immortal extraphysical morally-valent et-cetera soul.

Depends on your starting assumptions, obviously, as well as your personal
definition of self-continuity. (Virtually all religions believe that
the important part of us survives, so if you're religious and you're using
the basic tenets of your religion as starting assumptions, then the answer
is obviously "Yes".)

Do we have intrinsically, physically immortal souls generated by, or
attached to, the human brain? I dunno. Go open up a brain and
take a look. At the current rate of technological progress in physics
and neurology, we should be able to give a definitive answer to this question
in about forty or fifty years CRNS (24).

Weird-physics neurology is almost certainly required, but not
sufficient, for intrinsic immortality. I would strongly caution against
assuming that proof of weird-physics neurology implies an immortal soul
- unless you believe that the weird neurology was deliberately designed
with that outcome in mind, there's no reason why one would imply the other.
That said, there are some scientists of known competence, physicists
and neurologists, arguing in favor of weird-physics neurology - Penrose
and Hameroff, for example. See Shadows
of the Mind, Chapter 7, for examples.

Is this Universe a computer simulation? If so, do the simulators
care enough to yank us out of it when we die? I don't know.
I don't think this world is a simulation, but I could be wrong.
Are there aliens overhead, restrained by Star Trek's Prime Directive from
intervention, but recording our every thought for posterity? Probably
not, but that's just a guess.

What about the aliens, or our own descendants, armed with time cameras?
I think time cameras should be possible. In fact, actual time machines
should be possible. Certain physicists to the contrary, a blind prejudice
against "global causality violations" is not an argument sufficient to
overcome the fact that a closed timelike curve - time travel - is explicitly
permitted by General Relativity. This one gets even more complicated
than the Fermi Paradox or the Matrix Hypothesis, since we don't know any
of the rules for time travel. It does appear that, under most theories,
you can't go back to a time before you built the time machine, which is
bad news for dead people; on the other hand, we might be able to find an
existing time machine or a natural phenomenon (like a rotating black hole)
that could be used to go back to before the dawn of human sentience.

Or if your definition of personal identity is based on similarity, "identity"
of memories and personalities and motives, or even perfect similarity on
the atomic level, it may be that the Reality is simply so huge that
all your key characteristics will be duplicated somewhere - by pure
quantum randomness, if nothing else. If the Reality has, say, 3^^^^3
Universes - those little arrows are Knuth
notation - then any possible configuration of 10^80 atoms in a Universe
10^11 light-years wide would exist somewhere, not just once but
duplicated an unthinkably vast number of times, with a probability that
is, effectively, certainty. (Knuth notation creates some pretty impressive
numbers.)
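
In case the notation is unfamiliar, here is a minimal sketch of the
recursion behind Knuth's arrows (the function name is my own; treat this
as illustration only). One arrow is ordinary exponentiation, and each
extra arrow iterates the operator below it:

    def up_arrow(a, n, b):
        """Knuth's up-arrow: n=1 is a**b; each extra arrow (larger n)
        iterates the operator one level down."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3**(3**3) = 7,625,597,484,987
    # 3^^^3 is a power tower of 7.6 trillion 3s; 3^^^^3, which is
    # up_arrow(3, 4, 3), is hopelessly beyond any physical computation.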

With all that exotic speculation going on, cryonics
may seem diminished, rather un-glamorous. But in simple, practical,
pragmatic terms, in the world known to today's science, without speculating
about whatever weird things lie beyond, cryonics is the simplest, cheapest,
most understandable, and in fact only way to increase your probability
of personal immortality - aside from actually living directly into the
Golden Age, of course. I haven't tried to figure out all the factors
involved, but I believe I once read - quoting from memory - "For a hundred
bucks a month, I figure I'm buying a 20% increase in my chances of living
forever." I don't see any reason to dispute that. So if you
care about immortality, make that backup.

The quest for a higher meaning is something I've never had the misfortune
of experiencing. I was eleven years old when I first opened a book
called Great
Mambo Chicken and the Transhuman Condition, thus learning that human
civilization was heading towards a much better standard of living
for everyone. I was eleven years old when my Midwest Talent Search
results confirmed to me that I could make a difference to the future.
By the time I hit thirteen, I may not have known about the Singularity,
or about Interim logic, but I did know that there was a point to human
civilization, and that I had a part to play in it. I do not know,
except by imagination and observation, what it's like to not know
one's place in existence.

After I wrote down the first version of the Interim logic and realized
that it counted as a formal solution to the Meaning of Life, it occurred
to me that there were a lot of people who really cared about that answer,
who were spending a lot of time looking for that answer, and feeling mental
anguish on account of not finding it, and maybe slashing their wrists,
and I really ought to notify them - but there was always something more
important to do, which is why I didn't write anything for two years.
Sorry about that. But, as of 1999, here it is.

Since then, I admit, I've added other purposes to this Web page as well
- used the polls to get some approximate feedback on how people react to
the Singularity, even used the FAQ as an evangelical tool to promote the
Singularity and recruit potential Singularitarians.

Now that the Singularity Institute
has been incorporated (as of July 2000), the site may even generate some
donations. So I suppose that I now have an "ulterior motive" for
wanting you to believe all this. But the vast majority of the FAQ
was written, posted, and linked to Ask Jeeves, more than a year before
the Singularity Institute existed.

The primary purpose of the FAQ was, and remains, healing some of the
pain in the world that's caused by not knowing why to get up in the morning.

The Singularity: I ran across the Singularity in a book called
"True Names and Other Dangers", by Vernor Vinge, who invented the term.
Essentially, I read the second paragraph on p. 47 (25) and thought: "Yep, he's right.
Okay, now I know what I'm going to do with the rest of my life."

But by way of attribution, please note that Vinge only advocates the
view that intelligence increase will break down our model of the
future. Mine is the blame for advocating the cosmological perspective,
the idea that this happens to every race and will happen to us. However,
all credit for invention remains Vinge's - his Hugo-winning science-fiction
novel "A Fire Upon The Deep",
and "Marooned in Realtime", both
take place on a galactic canvas.

The Meaning of Life: I had a practical use for the answer, to
wit: Designing an AI goal system. If you want a real answer,
there has to be a real problem with experimentally testable criteria for
success or failure. There's probably some sort of law that states
that a philosophical problem cannot be solved until the solution has practical
ramifications. Nobody that I know of has deduced "The Meaning Of
Life" by spending all day looking for it, but you can design an AI goal
system to be safe, sane, stable, and self-knowing, then translate into
human terms.

The other questions are just interesting tidbits I happen to know.
In the course of trying to design an intelligent mind, I've picked up a
great deal of knowledge about subjects generally considered inscrutable.
I figured they were Frequently Asked Questions about Life, the Universe,
and Everything, if not about The Meaning Of Life per se, so I tucked
them in.

"There's something quite sinister in AltaVista proffering this
as an answer to an online query, as if the search engine itself was on
its way to becoming William Gibson's nightmarish AI, Wintermute."
-- Nick Montfort in FEED
Daily

Ask Jeeves is an Internet search company
that provides natural-language parsing of questions, combined with a database
of questions to which "Jeeves" knows the answer. They license their
technology to AltaVista (though
recently AltaVista seems to have stopped using it). This is the answer
Ask Jeeves has in their database for "What is the meaning of life?"

This site is not affiliated with Ask Jeeves or AltaVista in any way.
I did not pay them for the link, they did not pay me to put up the site,
my opinions are not theirs, their opinions are not mine, you get the idea.

That said, I think Ask Jeeves is a
wonderful concept and ask.com is one of my
favorite search engines (26).
Considering the favor Ask Jeeves did me in linking to this FAQ, I'm glad
to say that a number of people have written to say how impressed they were
that Ask Jeeves or AltaVista had an answer to the question "What is the
meaning of life?" So if you're reading this, Jeeves, you linked to
the right page.

The other interesting "Meaning of Life" site on the 'Net is The
Meaning of Life by Diogenes, which has reasonably intelligent answers
to several other questions that are often meant by people who ask "What
is the meaning of life?"

For more about transhumanism, Extropy, ultratechnology, and the other
things that make life fun, I most highly recommend Extropians
and other Transhumans, the Anders
Transhuman Page, and the transhumanist
FAQ. These pages (27) are the ones that transformed
my life - vast information nexuses leading to more beautiful and important
things than I had dreamed existed.

If I've made a difference in your life, I'd enjoy a note
telling me so, though I can't guarantee I'll write back. And if you
want to help out with the Singularity, I'll have your email address around
when and if there's an opportunity.

1: There are around four
hundred billion stars in the Milky Way, and around sixty billion galaxies
in the Universe. The generally cited estimate is ten to the twenty-second
stars, give or take a factor of ten.
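
As a quick sanity check on that arithmetic (my own illustration, in Python):

    stars_per_galaxy = 4e11  # around four hundred billion (Milky Way)
    galaxies = 6e10          # around sixty billion
    print(stars_per_galaxy * galaxies)  # 2.4e+22 - about 10^22 stars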

2: I use the generous estimate of a hundred billion neurons,
with around a thousand synapses apiece, sending around two hundred signals
per second, plus a factor of five for good luck. For a much more
detailed (albeit outdated) analysis, see "When
will computer hardware match the human brain?"
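
Spelled out (a sketch of the same estimate; the variable names are mine):

    neurons = 1e11           # a hundred billion neurons (generous)
    synapses_per = 1e3       # around a thousand synapses apiece
    signals_per_sec = 200    # around two hundred signals per second
    luck = 5                 # "plus a factor of five for good luck"
    print(neurons * synapses_per * signals_per_sec * luck)  # 1e+17 ops/sec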

3: See
Singularity
Analysis for a more detailed visualization of how the curves for intelligence,
program efficiency, and computing power interact.

4: Obviously, one should not design an assembler
that can reproduce using abundantly available natural materials!

5: A year (365 and 1/4 days) is 31,557,600 seconds; so, after a
million-to-one speedup, one subjective year would pass roughly every 31.6 seconds.
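
Or, checking the division (illustrative only):

    seconds_per_year = 365.25 * 24 * 60 * 60  # 31,557,600 seconds
    print(seconds_per_year / 1e6)  # ~31.6 seconds of real time per subjective year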

7: Simplicity is always desirable. Every
element of any justification always has the possibility of being wrong.
Hence Occam's Razor.

8: Or rather, an "Interim Meaning of Life".
This is a technical term, explained later, which reflects the use of probabilistic
logic.

9: Understand,
this is nothing intrinsic to the professions themselves; someone who writes
an advertisement for a genuinely superior product is breaking even, but
not somebody who sits around all day discussing whether an advertising
jingle for beer projects an image that fits in with the corporate mission
statement.

10: Many other people may have had equal or greater indirect effects,
as measured by what we'd have lost if they'd been hit by a truck.
Einstein's mother, for example. But since Einstein's father is just
as necessary, the significance is shared. Hence the phrase, "concentrated
significance".

11: Drexler is on the list for
his seminal role in the creation of the transhumanist movements and hypertext
- i.e., the World Wide Web. In the event of grey goo eating the planet,
Dr. Drexler will have the dubious honor of being the human with the greatest
negative significance. Good luck, Eric!

12: Actually,
these all have some built-in emotional substrate as well, but you get the
idea.

14: It's almost as bad as "emotionless" androids who act like severely
repressed humans, or the godforsaken stereotype that highly intelligent
people can't understand emotions.

15: This assumes Unknown1
is greater than zero; since we don't know enough about Unknowns to prove
they're zero in reality, reasoning treats them as nonzero. Obviously,
it can't be negative, since it represents a probability.

16: Some people disagree
with that last part. They are, in fact, wrong. (17). But even so, very
few people think that being more intelligent makes you intrinsically
less moral. So, when you run the model through the algebraic goal system,
it's enough to create the differential of desirability that lets you make
choices (see below).

17: Intelligence
isn't just high-speed arithmetic, or a better memory, or winning at chess,
or other stereotypical party tricks. Intelligence is self-awareness,
and wisdom, and the ability to not be stupid, and other things that
alter every aspect of the personality.

18: I'm not sure this is the moral thing
to do, but all else being equal, I'm for it.

19: You know, like
"the set of all sets that do not contain themselves" or "this sentence
is false". If you don't know, go read Gödel,
Escher, Bach right now.

20: Let me emphasize that this
is strictly a personal attitude. I am not claiming that this attitude
is objectively correct. And I am definitely not claiming that it
will work for anyone other than me. For all I know, your lack of
willpower can be instantly cured by Reboxetine.

21: This ignores the question of whether altruism out of fear of punishment
counts. (It does, but you probably won't have as much fun.)

22: Any part in the past light cone of the event, anyway. If
the light from an event hasn't reached you, that event hasn't "officially"
happened yet. It's a Special Relativity thing.

23: Actually, we've all
adopted it because we're born with that assumption built into our brains,
but you get the idea.

24: CRNS stands for "Current Rate
No Singularity". Roughly, "at the current rate".

25: "Here I had
tried a straightforward extrapolation of technology, and found myself precipitated
over an abyss. It's a problem we face every time we consider the
creation of intelligences greater than our own. When this happens,
human history will have reached a kind of singularity - a place where extrapolation
breaks down and new models must be applied - and the world will pass beyond
our understanding."
