The problem's roots lie in algorithmic information theory and formal epistemology, but finding answers will require us to wade into debates on everything from theoretical physics to anthropic reasoning and self-reference. This post will lay the groundwork for a sequence of posts (titled 'Artificial Naturalism') introducing different aspects of this OPFAI.

AI perception and belief: A toy model

A more concrete problem: Construct an algorithm that, given a sequence of the colors cyan, magenta, and yellow, predicts the next colored field.

Colors: CYYM CYYY CYCM CYYY ????

This is an instance of the general problem 'From an incomplete data series, how can a reasoner best make predictions about future data?'. In practice, any agent that acquires information from its environment and makes predictions about what's coming next will need to have two map-like subprocesses:

1. Something that generates the agent's predictions, its expectations. By analogy with human scientists, we can call this prediction-generator the agent's hypotheses or beliefs.

2. Something that transmits new information to the agent's prediction-generator so that its hypotheses can be updated. Employing another anthropomorphic analogy, we can call this process the agent's data or perceptions.
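The two subprocesses can be sketched in code. Here is a minimal toy predictor, assuming a Laplace-smoothed first-order Markov model over the color sequence; the class and method names are illustrative, not from the post:

```python
from collections import Counter, defaultdict

COLORS = "CMY"  # cyan, magenta, yellow

class MarkovPredictor:
    """A toy agent with the two map-like subprocesses described above:
    `predict` generates expectations (beliefs), and `observe` feeds
    new data (perceptions) back into the belief state."""

    def __init__(self):
        # counts[prev][nxt] = how often color `nxt` followed color `prev`
        self.counts = defaultdict(Counter)
        self.prev = None

    def observe(self, color):
        # Perception: update the hypothesis state with one datum.
        if self.prev is not None:
            self.counts[self.prev][color] += 1
        self.prev = color

    def predict(self):
        # Belief: a Laplace-smoothed distribution over the next color.
        seen = self.counts[self.prev]
        total = sum(seen.values()) + len(COLORS)
        return {c: (seen[c] + 1) / total for c in COLORS}

agent = MarkovPredictor()
for c in "CYYMCYYYCYCMCYYY":
    agent.observe(c)
print(agent.predict())  # Y comes out most probable after a trailing Y
```

On the sequence above, the agent ends on a Y and assigns the highest probability to another Y. A Solomonoff-style reasoner would weigh all computable hypotheses rather than a single Markov model, but the belief/perception split is the same.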

On the face of it, there is a tension in adhering both to the idea that there are facts about what it's rational for people to do and to the idea that natural or scientific facts are all the facts there are. The aim of this post is just to try to make clear why this should be so, and hopefully to get feedback on what people think of the tension.

In short

To a first approximation, a belief is rational just in case you ought to hold it; an action rational just in case you ought to take it. A person is rational to the extent that she believes and does what she ought to. Being rational, it is fair to say, is a normative or prescriptive property, as opposed to a merely descriptive one. Natural science, on the other hand, is concerned merely with descriptive properties of things: what they weigh, how they are composed, how they move, and so on. On the face of it, being rational is not the sort of property about which we can theorize scientifically (that is, in the vocabulary of the natural sciences). To put the point another way, rationality concerns what a thing (agent) ought to do; natural science concerns only what it is and will do, and one cannot deduce 'ought' from 'is'.

At greater length

There are at least two is/ought problems, or maybe two ways of thinking about the is/ought problem. The first problem (or way of thinking about the one problem) is posed from a subjective point of view. I am aware that things are a certain way, and that I am disposed to take some course of action, but neither of these things implies that I ought to take any course of action; neither, that is, implies that taking a given course of action would in any sense be right. How do I justify the thought that any given action is the one I ought to take? Or, taking the thought one step further, how, attending only to my own thoughts, do I differentiate merely being inclined to do something from being bound, by some kind of rule or principle or norm, to do something?

This is an interesting question, one which gets to the very core of the concept of being justified, and hence of being rational (rational beliefs being justified beliefs). But it isn't the problem of interest here.

The second problem, the problem of interest, is evident from a purely objective, scientific point of view. Consider a lowly rock. By empirical investigation, we can learn its mass, its density, its mineralogical composition, and any number of other properties. Now, left to their own devices, rocks don't do much of anything, comparatively speaking, so it isn't surprising that we don't expect there to be anything they ought to do. In any case, natural science does not imply there is anything a rock ought to do, I think most will agree.

Consider then a virus particle, a complex of RNA and ancillary molecules. Natural science can tell us how it will behave in various circumstances (whether and how it will replicate itself, and so on), but once again surely there is nothing in biochemistry, genetics or other science which implies there is anything our virus particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

How about a bacterium? It's orders of magnitude more complicated, but I don't see that matters are any different as regards what it ought to do. Science has nothing to tell us about what, if anything, is important to a bacterium, as distinct from what it will tend to do.

Moving up the evolutionary ladder, does the introduction of nervous systems make any difference? What do we think about, say, nematodes or even horseshoe crabs? The feedback mechanisms underlying the self-regulatory processes in such animals may be leaps and bounds more sophisticated than in their non-neural forebears, but it's far from clear how such increasing complexity could introduce goals.

To cut to the chase, how can matters be any different with the members of Homo sapiens? Looked at from a properly scientific point of view, is there any scope for the attribution of purposes or goals, or for the appraisal of our behaviour in any sense as right or wrong? I submit that a mere increase in complexity (even if by many orders of magnitude) does not turn the trick. To be clear, I'm not claiming there are no such facts (far from it), just that these facts cannot be articulated in the language of purely natural science.

TL;DR Summary: Mathematical truths can be cashed out as combined claims about 1) the common conception of the rules of how numbers work, and 2) whether the rules imply a particular truth. This cashing-out keeps them purely about the physical world and eliminates the need to appeal to an immaterial realm, as some mathematicians feel a need to.
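The second claim, that the rules imply a particular truth, can be made concrete in a proof assistant. A minimal sketch in Lean, where the statement follows by computation from the definitions of the numerals and addition:

```lean
-- `2 + 3 = 5` follows from the rules alone: both sides reduce to the
-- same numeral under the definition of addition, so `rfl` closes it.
example : 2 + 3 = 5 := rfl
```

Nothing immaterial is consulted here: the checker just verifies that the agreed-upon rules rewrite both sides to the same term.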

Background: "I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true." -- Eliezer Yudkowsky

This is the problem I will address here: how should a rationalist regard the status of mathematical truths? In doing so, I will present a unifying approach that, I contend, elegantly solves the following related problems:

An epistemic difficulty

Like many readers of this blog, I am a materialist. Like many still, I was not always. Long ago, the now-rhetorical ponderings in the preceding post in fact delivered the fatal blow to my nagging suspicion that somehow, materialism just isn't enough.

By materialism, I mean the belief that the world and people are composed entirely of something called matter (a.k.a. energy), which physics currently best understands as consisting of particles (a.k.a. waves). If physics reformulates these notions, materialism can adjust with it, leading some to prefer the term "physicalism".

Now, I encounter people all the time who, because of education or disillusionment, have abandoned most aspects of religion, yet still believe in more than one kind of reality. It's often called "being spiritual". People often think it feels better than the alternative (see Joy in the merely real), but it also persists for what people experience as an epistemic concern:

The inability to reconcile the "experiencing self" concept with one's notion of physical reality.

Okay, so you don't exactly believe in the God of the Abrahamic scriptures verbatim who punishes and sets things on fire and lives in the sky. But still, there just has to be something more than just matter and energy, doesn't there? You just feel it. If you don't, try to remember when you did, or at least empathize with someone you know who does. After all, you have a mind, you think, you feel — you feel for crying out loud — and you must realize that can't be made entirely of things like carbon and hydrogen atoms, which are basically just dots with other dots swirling around them. Okay, maybe they're waves, but at least sometimes they act like dots. Start with a few swirling dots… now add more… keep going, until it equals love. It just doesn't seem to capture it.

In fact, now that you think about it, you know your mind exists. It's right there: it's you. Your "experiencing self". Maybe you call it a spirit or soul; I don't want to fix too rigid a description in case it wouldn't quite match your own. But cogito-ergo-sum, it's definitely there! By contrast, this particle business is just a mathematical concept — a very smart one, of course — thought of by scientists to explain and predict a bunch of carefully designed and important measurements. Yes, it does that extremely well, and you're not downplaying that. But that doesn't explain how you see blue, or taste strawberry — something you have direct access to. Particles might not even exist, if that means anything to say. It might just be that observation itself follows a mathematical pattern that we can understand better by visualizing dots and waves. They might not be real.

So actually, your mind or spirit — that thing you feel, that you — is a much more certain existent than scientific "matter". That must be something very important to understand! Certainly you can tell your mind has different parts to it: hearing, seeing, reasoning, moving, remembering, empathizing, picturing, yearning… When you think of all the things you can remember alone — or could remember — the complexity of all that data is mind-bogglingly vast. Imagine the task of actually having to take it all apart and describe it completely… it could take aeons…

There is a subproblem of Friendly AI which is so scary that I usually don't talk about it, because very few would-be AI designers would react to it appropriately—that is, by saying, "Wow, that does sound like an interesting problem", instead of finding one of many subtle ways to scream and run away.

This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves. Not necessarily the same person, but people nonetheless.

If you look up at the night sky, and see the tiny dots of light that move over days and weeks—planētoi, the Greeks called them, "wanderers"—and you try to predict the movements of those planet-dots as best you can...

Historically, humans went through a journey as long and as wandering as the planets themselves, to find an accurate model. In the beginning, the models were things of cycles and epicycles, not much resembling the true Solar System.

But eventually we found laws of gravity, and finally built models—even if they were just on paper—that were accurate enough that Neptune could be deduced by looking at the unexplained perturbation of Uranus from its expected orbit. This required moment-by-moment modeling of where a simplified version of Uranus would be, along with the other known planets. Simulation, not just abstraction. Prediction through simplified-yet-still-detailed pointwise similarity.
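The "simulation, not just abstraction" point can be illustrated with a toy moment-by-moment model: a single planet stepped forward under a sun's gravity. The units, step count, and Euler integrator are illustrative choices, not a real ephemeris:

```python
import math

# Toy moment-by-moment simulation: one planet around a sun, in units
# where GM_sun = 4*pi^2 (distances in AU, time in years). A real
# ephemeris would include every planet and a better integrator.
GM = 4 * math.pi ** 2

def step(pos, vel, dt):
    """Advance position and velocity by one explicit-Euler step."""
    x, y = pos
    r3 = math.hypot(x, y) ** 3
    ax, ay = -GM * x / r3, -GM * y / r3
    return ((x + vel[0] * dt, y + vel[1] * dt),
            (vel[0] + ax * dt, vel[1] + ay * dt))

# Start on a circular orbit at 1 AU (circular speed = 2*pi AU/yr)
# and simulate one year, pointwise.
pos, vel = (1.0, 0.0), (0.0, 2 * math.pi)
for _ in range(10_000):
    pos, vel = step(pos, vel, 1.0 / 10_000)
```

Deducing Neptune amounted to running this kind of pointwise model for every known body and asking what extra mass would account for the leftover wobble in Uranus's residuals.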

Suppose you have an AI that is around human beings. And like any Bayesian trying to explain its environment, the AI goes in quest of highly accurate models that predict what it sees of humans.

Models that predict/explain why people do the things they do, say the things they say, want the things they want, think the things they think, and even why people talk about "the mystery of subjective experience".

The model that most precisely predicts these facts, may well be a 'simulation' detailed enough to be a person in its own right.

Creativity, we've all been told, is about Jumping Out Of The System, as Hofstadter calls it (JOOTSing for short). Questioned assumptions, violated expectations.

Fire is dangerous: the rule of fire is to run away from it. What must have gone through the mind of the first hominid to domesticate fire? The rule of milk is that it spoils quickly and then you can't drink it - who first turned milk into cheese? The rule of computers is that they're made with vacuum tubes, fill a room and are so expensive that only corporations can own them. Wasn't the transistor a surprise...

Who, then, could put laws on creativity? Who could bound it, who could circumscribe it, even with a concept boundary that distinguishes "creativity" from "not creativity"? No matter what system you try to lay down, mightn't a more clever person JOOTS right out of it? If you say "This, this, and this is 'creative'" aren't you just making up the sort of rule that creative minds love to violate?

Why, look at all the rules that smart people have violated throughout history, to the enormous profit of humanity. Indeed, the most amazing acts of creativity are those that violate the rules that we would least expect to be violated.

Is there not even creativity on the level of how to think? Wasn't the invention of Science a creative act that violated old beliefs about rationality? Who, then, can lay down a law of creativity?

As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful, reductionistic universe—a universe containing no ontologically basic mental things such as souls or magic—and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.

There's an old trick for combating dukkha where you make a list of things you're grateful for, like a roof over your head.

For example, suppose that instead of one eye, you possessed a magical second eye embedded in your forehead. And this second eye enabled you to see into the third dimension—so that you could somehow tell how far away things were—where an ordinary eye would see only a two-dimensional shadow of the true world. Only the possessors of this ability can accurately aim the legendary distance-weapons that kill at ranges far beyond a sword, or use to their fullest potential the shells of ultrafast machinery called "cars".

"Binocular vision" would be too light a term for this ability. We'll only appreciate it once it has a properly impressive name, like Mystic Eyes of Depth Perception.

"One of your very early philosophers came to the conclusion that a fully competent mind, from a study of one fact or artifact belonging to any given universe, could construct or visualize that universe, from the instant of its creation to its ultimate end..." —First Lensman

"If any one of you will concentrate upon one single fact, or small object, such as a pebble or the seed of a plant or other creature, for as short a period of time as one hundred of your years, you will begin to perceive its truth." —Gray Lensman

I am reasonably sure that a single pebble, taken from a beach of our own Earth, does not specify the continents and countries, politics and people of this Earth. Other planets in space and time, other Everett branches, would generate the same pebble. On the other hand, the identity of a single pebble would seem to include our laws of physics. In that sense the entirety of our Universe—all the Everett branches—would be implied by the pebble. (If, as seems likely, there are no truly free variables.)

So a single pebble probably does not imply our whole Earth. But a single pebble implies a very great deal. From the study of that single pebble you could see the laws of physics and all they imply. Thinking about those laws of physics, you can see that planets will form, and you can guess that the pebble came from such a planet. The internal crystals and molecular formations of the pebble formed under gravity, which tells you something about the planet's mass; the mix of elements in the pebble tells you something about the planet's formation.

I am not a geologist, so I don't know to which mysteries geologists are privy. But I find it very easy to imagine showing a geologist a pebble, and saying, "This pebble came from a beach at Half Moon Bay", and the geologist immediately says, "I'm confused" or even "You liar". Maybe it's the wrong kind of rock, or the pebble isn't worn enough to be from a beach—I don't know pebbles well enough to guess the linkages and signatures by which I might be caught, which is the point.

Today's post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading. (Unless they have something to protect, including their own life.)

So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo, "everything will surely be all right"), and others would say that it's a thing necessary for mental health.

But we don't live in that world. We live in the world beyond the reach of God.