I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers. The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy? No, it suggests:

Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.

Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment. A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them? How can you think anyone on Earth so cares? And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
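As an aside, the arithmetic in the quoted passage checks out. Here is a quick sanity check (the script and variable names are mine, purely illustrative; the numbers are from the quote):

```python
import math

# Check the quoted claim: a 2% annual return compounded over 12,000 years.
years = 12000
rate = 0.02

# Work in log space so there is no risk of overflow.
log10_return = years * math.log10(1 + rate)
print(f"total return ~ 10^{log10_return:.1f}")       # ~ 10^103.2

# Even with only a 1/1000 chance that the recipients exist,
# the expected multiplier still exceeds a googol (10^100).
log10_expected = log10_return - 3
print(f"expected return ~ 10^{log10_expected:.1f}")  # ~ 10^100.2
```

So the quoted figure is, if anything, slightly understated: the raw multiplier is about 10^103, and the expectation survives a thousandfold discount for the chance that no one is there to receive it.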

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view: Hanson is right, and Yudkowsky is wrong, for a reason still more idealistic than Yudkowsky’s. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now etc., such that your mutually exclusive options include the possibility of saving 10 lives x years from now for all x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem arises not from this initial assignment, but from what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now,” “you will choose 10 years from now,” “you will choose 100 years from now,” are all assigned probabilities, they cannot all be assigned equal probabilities; in general and overall, you must be more likely to choose the options less distant in time. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
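The point can be sketched numerically. Any genuine probability distribution over the positive integers must sum to 1, so its tail beyond some point becomes arbitrarily small, and a cutoff n of the kind just described must exist. The geometric distribution and the `threshold` helper below are illustrative choices of mine, not part of the argument itself:

```python
# Sketch: any genuine probability distribution over the positive integers
# has probabilities summing to 1, so the tail beyond some n must shrink
# toward zero. Illustrated here with a geometric distribution, but the
# tail argument holds for any distribution whatsoever.
def threshold(prob, target=0.9999, max_n=10**6):
    """Smallest n such that P(choice < n) >= target."""
    cumulative = 0.0
    for n in range(1, max_n):
        cumulative += prob(n)
        if cumulative >= target:
            return n + 1
    raise ValueError("target not reached within max_n")

geometric = lambda n: 0.5 ** n  # P(n) = 1/2^n, which sums to 1
print(threshold(geometric))     # 15: 99.99% of the mass lies below n = 15
```

A uniform assignment, by contrast, is impossible: a constant probability over infinitely many integers sums either to zero or to infinity, never to 1.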

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case, we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that hypotheses do not change their complexity, but time passes, making things distant in time become things nearer in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.
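The dynamic inconsistency can be made concrete with a toy schedule (the ten-year horizon and the halved value are made-up numbers, purely for illustration): a schedule that is flat for a while and then drops will re-rank the very same pair of options as time passes, while an exponential discount never can.

```python
# Toy "flat-then-drop" schedule: full value inside a ten-year horizon,
# half value beyond it. (Illustrative numbers only.)
def flat_then_drop(years_from_now, horizon=10):
    return 1.0 if years_from_now <= horizon else 0.5

# Exponential discounting for comparison.
def exponential(years_from_now, rate=0.02):
    return (1 + rate) ** -years_from_now

# Compare saving lives in calendar year 10 vs calendar year 12,
# evaluated first today (t = 0) and then again at t = 5.
for t in (0, 5):
    a = flat_then_drop(10 - t)  # year 10, seen from time t
    b = flat_then_drop(12 - t)  # year 12, seen from time t
    print(t, a, b)              # t=0: 1.0 vs 0.5;  t=5: 1.0 vs 1.0

# At t = 0 you strictly prefer year 10; at t = 5 you are indifferent,
# although nothing about the options themselves has changed. With
# exponential discounting the ratio exponential(x - t) / exponential(y - t)
# is the same for every t, so no such reversal can occur.
```

This is the standard observation that only exponential (constant-rate) discounting is dynamically consistent: any other shape of discount curve produces preference reversals merely through the passage of time.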

I have tended to emphasize common sense as a basic source in attempting to philosophize or otherwise understand reality. Let me explain what I mean by the idea of common sense.

The basic idea is that something is common sense when everyone agrees that it is true. If we start with this vague account, something will be more definitively common sense to the degree that it is truer that it is everyone who agrees, and likewise to the degree that it is truer that they agree.

If we consider anything that one might think of as a philosophical view, we will find at least a few people who disagree, at least verbally, with the claim. But we may be able to find some that virtually everyone agrees with. These pertain more to common sense than things that fewer people agree with. Likewise, if we consider everyday claims rather than philosophical ones, we will probably be able to find things that everyone agrees with apart from some very localized contexts. These pertain even more to common sense. Likewise, if everyone has always agreed with something both in the past and present, that pertains more to common sense than something that everyone agrees with in the present, but where some have disagreed in the past.

It will be truer that everyone agrees in various ways: if everyone is very certain of something, that pertains more to common sense than something people are less certain about. If some people express disagreement with a view, but everyone’s revealed preferences or beliefs indicate agreement, that can be said to pertain to common sense to some degree, but not so much as where verbal affirmations and revealed preferences and beliefs are aligned.

Naturally, all of this is a question of vague boundaries: opinions are more or less a matter of common sense. We cannot sort them into two clear categories of “common sense” and “not common sense.” Nonetheless, we would want to base our arguments, as much as possible, on things that are more squarely matters of common sense.

We can raise two questions about this. First, is it even possible? Second, why do it?

One might object that the proposal is impossible. For no one can really reason except from their own opinions. Otherwise, one might be formulating a chain of argument, but it is not one’s own argument or one’s own conclusion. But this objection is easily answered. In the first place, if everyone agrees on something, you probably agree yourself, and so reasoning from common sense will still be reasoning from your own opinions. Second, if you don’t personally agree, since belief is voluntary, you are capable of agreeing if you choose, and you probably should, for reasons which will be explained in answering the second question.

Nonetheless, the objection is a reasonable place to point out one additional qualification. “Everyone agrees with this” is itself a personal point of view that someone holds, and no one is infallible even with respect to this. So you might think that everyone agrees, while in fact they do not. But this simply means that you have no choice but to do the best you can in determining what is or what is not common sense. Of course you can be mistaken about this, as you can about anything.

Why argue from common sense? I will make two points, a practical one and a theoretical one. The practical point is that if your arguments are public, as for example on this blog, rather than written down in a private journal, then you presumably want people to read them and to gain from them in some way. The more you begin from common sense, the more profitable your thoughts will be in this respect. More people will be able to gain from your thoughts and arguments if more people agree with the starting points.

There is also a theoretical point. Consider the statement, “The truth of a statement never makes a person more likely to utter it.” If this statement were true, no one could ever utter it on account of its truth, but only for other reasons. So it is not something that a seeker of truth would ever say. On the other hand, there can be no doubt that the falsehood of some statements, on some occasions, makes those statements more likely to be affirmed by some people. Nonetheless, the nature of language demands that people have an overall tendency, most of the time and in most situations, to speak the truth. We would not be able to learn the meaning of a word without it being applied accurately, most of the time, to the thing that it means. In fact, if everyone were always uttering falsehoods, we would simply learn that “is” means “is not,” and that “is not” means “is,” and the supposed falsehoods would not be false in the language that we would acquire.

It follows that greater agreement that something is true, other things being equal, implies that the thing is more likely to be actually true. Stones have a tendency to fall down: so if we find a great collection of stones, the collection is more likely to be down at the bottom of a cliff rather than perched precisely on the tip of a mountain. Likewise, people have a tendency to utter the truth, so a great collection of agreement suggests something true rather than something false.

Of course, this argument depends on “other things being equal,” which is not always the case. It is possible that most people agree on something, but you are reasonably convinced that they are mistaken, for other reasons. But if this is the case, your arguments should depend on things that they would agree with even more strongly than they agree with the opposite of your conclusion. In other words, your argument should be based on things which pertain even more to common sense. Suppose it does not: then the very starting point of your argument is something that everyone else agrees is false. This will probably be evident insanity from the beginning, but let us suppose that you find it reasonable. In this case, Robin Hanson’s result discussed here implies that you must be convinced that you were created in very special circumstances which would guarantee that you would be right, even though no one else was created in these circumstances. There is of course no basis for such a conviction. And our ability to modify our priors, discussed there, implies that the reasonable behavior is to choose to agree with the priors of common sense, if we find our natural priors departing from them, except in cases where the disagreement is caused by agreement with even stronger priors of common sense. Thus for example in this post I gave reasons for disagreeing with our natural prior on the question, “Is this person lying or otherwise deceived?” in some cases. But this was based on mathematical arguments that are even more convincing than that natural prior.

Readers of this post fall into two categories:

1) The few who have read the posts on this blog from the beginning, in chronological order, and who are now reading this one simply because it is the only one they have not read yet.

2) The vast majority who did not do the above.

For the first category, I don’t have any particular suggestion at the moment. Well done. That is the right way of reading this blog.

For the second category, you would do much better to stop right here in the middle of this post (without even finishing it), go back to the beginning, and read every post in chronological order.

….

So you are now in the first category? No? Since obviously you did not take my advice, let me explain both why you should, and why you will not.

It is possible to understand something through arguments, even if merely manipulating symbols may be a more common result. And since conclusions follow from premises, you can only do this by thinking about the premises first, and the conclusions second. Since my own interest is in understanding things, I intentionally organize the blog in this way. Of course, since the concrete historical process of an individual coming to understand some particular thing is messier and more complicated than a single argument, or even than multiple arguments, the order isn’t an exact representation of my own history or someone else’s potential history. But it is certainly closer to that than any other order of reading would be.

You will object that you do not have the time to read 300 blog posts. Fine. But then why do you have time to read this one? Even if you are definitely committed to reading a small number of posts, you would do better to read a small number from the beginning. If you are committed to reading not more than one post a week, you would do better to read the 300 posts over the next six years, rather than reading the posts that are current.

You might think of other similar objections, but they will all fail in similar ways. If you are actually interested in understanding something from your reading, chronological order is the right order.

Of course, other blog authors might well argue in similar ways, but the number of people who actually do this, on any blog, is tiny. Instead, people read a few recent posts, and perhaps a few others if a chain of links leads them there. But they do not, in the vast majority of cases, read from the beginning, whether to read all or only a part.

So let me explain why you will not take this advice, despite the fact that it is irrefutably correct. In The Elephant in the Brain, Robin Hanson and Kevin Simler remark in a chapter on conversation:

This view of talking—as a way of showing off one’s “backpack”—explains the puzzles we encountered earlier, the ones that the reciprocal-exchange theory had trouble with. For example, it explains why we see people jockeying to speak rather than sitting back and “selfishly” listening—because the spoils of conversation don’t lie primarily in the information being exchanged, but rather in the subtextual value of finding good allies and advertising oneself as an ally. And in order to get credit in this game, you have to speak up; you have to show off your “tools.”

…

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information.

Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say.

Hanson and Simler are trying to explain various characteristics of conversation, such as the fact that people are typically more interested in speaking than in listening, as well as the requirement that conversational participants “stick to the topic.”

Later, they associate this with people’s interest in news:

Why have humans long been so obsessed with news? When asked to justify our strong interest, we often point to the virtues of staying apprised of the important issues of the day. During a 1945 newspaper strike in New York, for example, when the sociologist Bernard Berelson asked his fellow citizens, “Is it very important that people read the newspaper?” almost everyone answered with a “strong ‘yes,’ ” and most people cited the “ ‘serious’ world of public affairs.”

…

Now, it did make some sense for our ancestors to track news as a way to get practical information, such as we do today for movies, stocks, and the weather. After all, they couldn’t just go easily search for such things on Google like we can. But notice that our access to Google hasn’t made much of a dent in our hunger for news; if anything we read more news now that we have social media feeds, even though we can find a practical use for only a tiny fraction of the news we consume.

There are other clues that we aren’t mainly using the news to be good citizens (despite our high-minded rhetoric). For example, voters tend to show little interest in the kinds of information most useful for voting, including details about specific policies, the arguments for and against them, and the positions each politician has taken on each policy. Instead, voters seem to treat elections more like horse races, rooting for or against different candidates rather than spending much effort to figure out who should win. (See Chapter 16 for a more detailed discussion on politics.)

…

These patterns in behavior may be puzzling when we think of news as a source of useful information. But they make sense if we treat news as a larger “conversation” that extends our small-scale conversation habits. Just as one must talk on the current topic in face-to-face conversation, our larger news conversation also maintains a few “hot” topics—a focus so strong and so narrow that policy wonks say that there’s little point in releasing policy reports on topics not in the news in the last two weeks. (This is the criterion of relevance we saw earlier.)

The argument here suggests that blog readers will tend to prefer reading current posts to old ones because this is to remain more “relevant,” and that such relevance is necessary in order to impress other conversational participants. This, I suggest, is why you will not take my advice, despite its rightness. If you think this is an insulting explanation, just bear in mind that blog authors are even more insulted by Hanson’s and Simler’s explanations, since the reader at least is listening.

Nagel replies in the pages of NYRB (8 June 2017; HT: Dave Lull) to one Roy Black, a professor of bioengineering:

The mind-body problem that exercises both Daniel Dennett and me is a problem about what experience is, not how it is caused. The difficulty is that conscious experience has an essentially subjective character—what it is like for its subject, from the inside—that purely physical processes do not share. Physical concepts describe the world as it is in itself, and not for any conscious subject. That includes dark energy, the strong force, and the development of an organism from the egg, to cite Black’s examples. But if subjective experience is not an illusion, the real world includes more than can be described in this way.

I agree with Black that “we need to determine what ‘thing,’ what activity of neurons beyond activating other neurons, was amplified to the point that consciousness arose.” But I believe this will require that we attribute to neurons, and perhaps to still more basic physical things and processes, some properties that in the right combination are capable of constituting subjects of experience like ourselves, to whom sunsets and chocolate and violins look and taste and sound as they do. These, if they are ever discovered, will not be physical properties, because physical properties, however sophisticated and complex, characterize only the order of the world extended in space and time, not how things appear from any particular point of view.

The problem might be condensed into an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Take a little time to savor this problem. Note first that the three propositions are collectively inconsistent: they cannot all be true. Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance. But we cannot accept them all because they are logically incompatible.

Which proposition should we reject? Dennett, I take it, would reject (1). But that’s a lunatic solution as Professor Black seems to appreciate, though he puts the point more politely. When I call Dennett a sophist, as I have on several occasions, I am not abusing him; I am underscoring what is obvious, namely, that the smell of cooked onions, for example, is a genuine datum of experience, and that such phenomenological data trump scientistic theories.

Sophistry aside, we either reject (2) or we reject (3). Nagel and I accept (1) and (2) and reject (3). Black, and others of the scientistic stripe, accept (1) and (3) and reject (2).

In order to see the answer to this, we can construct a Parmenidean parallel to Vallicella’s aporetic triad:

1) Distinction is not an illusion.

2) Being has an essentially objective character of actually being that distinction does not share (considering that distinction consists in the fact of not being something.)

3) The only acceptable explanation of distinction is in terms of being alone (since there is nothing but being to explain things with.)

Parmenides rejects (1) here. What approach would Vallicella take? If he wishes to take a similarly analogous approach, he should accept (1) and (2), and deny (3). And this would be a pretty commonsense approach, and perhaps the one that most people implicitly adopt if they ever think about the problem.

At the same time, it is easy to see that (3) is nearly as obviously true as (1); and it is for this reason that Parmenides sees rejecting (1) while accepting (2) and (3) as reasonable.

The correct answer, of course, is that the three are not inconsistent despite appearances. In fact, we have effectively answered this in recent posts. Distinction is not an illusion, but a way that we understand things, as such. And being a way of understanding, it is not (as such) a way of being mistaken; thus it is not an illusion, and the first point is correct. Again, being a way of understanding, it is not a way of being as such, and thus the second point is correct. And yet distinction can be explained by being, since there is something (namely relationship) which explains why it is reasonable to think in terms of distinctions.

Vallicella’s triad mentions “purely physical processes” and “physical properties,” but the idea of “physical” here is a distraction, and is not really relevant to the problem. Consider the following from another post by Vallicella:

If I understand Galen Strawson’s view, it is the first. Conscious experience is fully real but wholly material in nature despite the fact that on current physics we cannot account for its reality: we cannot understand how it is possible for qualia and thoughts to be wholly material. Here is a characteristic passage from Strawson:

Serious materialists have to be outright realists about the experiential. So they are obliged to hold that experiential phenomena just are physical phenomena, although current physics cannot account for them. As an acting materialist, I accept this, and assume that experiential phenomena are “based in” or “realized in” the brain (to stick to the human case). But this assumption does not solve any problems for materialists. Instead it obliges them to admit ignorance of the nature of the physical, to admit that they don’t have a fully adequate idea of what the physical is, and hence of what the brain is. (“The Experiential and the Non-Experiential” in Warner and Szubka, p. 77)

Strawson and I agree on two important points. One is that what he calls experiential phenomena are as real as anything and cannot be eliminated or reduced to anything non-experiential. Dennett denied! The other is that there is no accounting for experiential items in terms of current physics.

I disagree on whether his mysterian solution is a genuine solution to the problem. What he is saying is that, given the obvious reality of conscious states, and given the truth of naturalism, experiential phenomena must be material in nature, and that this is so whether or not we are able to understand how it could be so. At present we cannot understand how it could be so. It is at present a mystery. But the mystery will dissipate when we have a better understanding of matter.

This strikes me as bluster.

An experiential item such as a twinge of pain or a rush of elation is essentially subjective; it is something whose appearing just is its reality. For qualia, esse = percipi. If I am told that someday items like this will be exhaustively understood from a third-person point of view as objects of physics, I have no idea what this means. The notion strikes me as absurd. We are being told in effect that what is essentially subjective will one day be exhaustively understood as both essentially subjective and wholly objective. And that makes no sense. If you tell me that understanding in physics need not be objectifying understanding, I don’t know what that means either.

Here Vallicella uses the word “material,” which is presumably equivalent to “physical” in the above discussion. But it is easy to see here that being material is not the problem: being objective is the problem. Material things are objective, and Vallicella sees an irreducible opposition between being objective and being subjective. In a similar way, we can reformulate Vallicella’s original triad so that it does not refer to being physical:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely objective processes do not share.

3) The only acceptable explanation of conscious experience is in terms of objective properties alone.

It is easy to see that this formulation is the real source of the problem. And while Vallicella would probably deny (3) even in this formulation, it is easy to see why people would want to accept (3). “Real things are objective,” they will say. If you want to explain anything, you should explain it using real things, and therefore objective things.

The parallel with the Parmenidean problem is evident. We would want to explain distinction in terms of being, since there isn’t anything else, and yet this seems impossible, so one (e.g. Parmenides) is tempted to deny the existence of distinction. In the same way, we would want to explain subjective experience in terms of objective facts, since there isn’t anything else, and yet this seems impossible, so one (e.g. Dennett) is tempted to deny the existence of subjective experience.

Just as the problem is parallel, the correct solution will be almost entirely parallel to the solution to the problem of Parmenides.

1) Conscious experience is not an illusion. It is a way of perceiving the world, not a way of not perceiving the world, and definitely not a way of not perceiving at all.

2) Consciousness is subjective, that is, it is a way that an individual perceives the world, not a way that things are as such, and thus not an “objective fact” in the sense that “the way things are” is objective.

3) The “way things are”, namely the objective facts, are sufficient to explain why individuals perceive the world. Consider again this post, responding to a post by Robin Hanson. We could reformulate his criticism to express instead Parmenides’s criticism of common sense (changed parts in italics):

People often state things like this:

I am sure that there is not just being, because I’m aware that some things are not other things. I know that being just isn’t non-being. So even though there is being, there must be something more than that to reality. So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care about distinctions, not just being; we want to know what out there is distinct from which other things.

But consider a key question: Does this other distinction stuff interact with the parts of our world that actually exist strongly and reliably enough to usually be the actual cause of humans making statements of distinction like this?

If yes, this is a remarkably strong interaction, making it quite surprising that philosophers, possibly excepting Duns Scotus, have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite understandable with existing philosophy. Any interaction not so understandable would have to be vastly more difficult to understand than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will understand such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of distinction, then we have a remarkable coincidence to explain. Somehow this extra distinction stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that distinction stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if distinction stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that distinction stuff actually exists? Such a coincidence seems too remarkable to be believed.

“Distinction stuff”, of course, does not exist, and neither does “feeling stuff.” But some things are distinct from others. Saying this is a way of understanding the world, and it is a reasonable way to understand the world because things exist relative to one another. And just as one thing is distinct from another, people have experiences. Those experiences are ways of knowing the world (broadly understood.) And just as reality is sufficient to explain distinction, so reality is sufficient to explain the fact that people have experiences.

How exactly does this answer the objection about interaction? In the case of distinction, the fact that “one thing is not another” is never the direct cause of anything, not even of the fact that “someone believes that one thing is not another.” So there would seem to be a “remarkable coincidence” here, or we would have to say that since the fact seems unrelated to the opinion, there is no reason to believe people are right when they make distinctions.

The answer in the case of distinction is that one thing is related to another, and this fact is the cause of someone believing that one thing is not another. There is no coincidence, and no reason to believe that people are mistaken when they make distinctions, despite the fact that distinction as such causes nothing.

In a similar way, “a human being is what it is,” and “a human being does what it does” (taken in an objective sense), cause human beings to say and believe that they have subjective experience (taking saying and believing to refer to objective facts.) But this is precisely where the zombie question arises: they say and believe that they have subjective experience, when we interpret say and believe in the objective sense. But do they actually say and believe anything, considering saying and believing as including the subjective factor? Namely, when a non-zombie says something, it subjectively understands the meaning of what it is saying, and when it consciously believes something, it has a subjective experience of doing that, but these things would not apply to a zombie.

But notice that we can raise a similar question about zombie distinctions. When someone says and believes that one thing is not another, objective reality is similarly the cause of their making the distinction. But is the one thing actually not the other? There is no question here at all except whether the person’s statement is true or false. And indeed, someone can say, e.g., “The person who came yesterday is not the person who came today,” and this can sometimes be false. In a similar way, asking whether an apparent person is a zombie or not is just asking whether their claim is true or false when they say they have a subjective experience. The difference is that if the (objective) claim is false, then there is no claim at all in the subjective sense of “subjectively claiming something.” It is a contradiction to subjectively make the false claim that you are subjectively claiming something, and thus this cannot happen.

Someone may insist: you yourself, when you subjectively claim something, cannot be mistaken for the above reason. But you have no way to know whether someone else who apparently is making that claim, is actually making the claim subjectively or not. This is the reason there is a hard problem.

How do we investigate the case of distinction? If we want to determine whether the person who came yesterday is not the person who came today, we do that by looking at reality, despite the fact that distinction as such is not a part of reality as such. If the person who came yesterday is now, today, a mile away from the person who came today, this gives us plenty of reason to say that the one person is not the other. There is nothing strange, however, in the fact that there is no infallible method to prove conclusively, once and for all, that one thing is definitely not another thing. There is not therefore some special “hard problem of distinction.” This is just a result of the fact that our knowledge in general is not infallible.

In a similar way, if we want to investigate whether something has subjective experience or not, we can do that only by looking at reality: what is this thing, and what does it do? Then suppose it makes an apparent claim that it has subjective experience. Obviously, for the above reasons, this cannot be a subjective claim that is also false: so the question is whether it makes a subjective claim and is right, or rather makes no subjective claim at all. How would you answer this as an external observer?

In the case of distinction, the fact that someone claims that one thing is distinct from another is caused by reality, whether the claim is true or false. So whether it is true or false depends on the way that it is caused by reality. In a similar way, the thing which apparently and objectively claims to possess subjective experience, is caused to do so by objective facts. Again, as in the case of distinction, whether it is true or false will depend on the way that it is caused to do so by objective facts.

We can give some obvious examples:

“This thing claims to possess subjective experience because it is a human being and does what humans normally do.” In this case, the objective and subjective claim is true, and is caused in the right way by objective facts.

“This thing claims to possess subjective experience because it is a very simple computer given a very simple program to output ‘I have subjective experience’ on its screen.” In this case the external claim is false, and it is caused in the wrong way by objective facts, and there is no subjective claim at all.

But how do you know for sure, someone will object. Perhaps the computer really is conscious, and perhaps the apparent human is a zombie. But we could similarly ask how we can know for sure that the person who came yesterday isn’t the same person who came today, even though they appear distant from each other, because perhaps the person is bilocating?

It would be mostly wrong to describe this situation by saying “there really is no hard problem of consciousness,” as Robin Hanson appears to do when he says, “People who think they can conceive of such zombies see a ‘hard question’ regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel.” The implication seems to be that there is no hard question at all. But there is, and the fact that people engage in this discussion proves the existence of the question. Rather, we should say that the question is answerable, and that once it has been answered the remaining questions are “hard” only in the sense that it is hard to understand the world in general. The question is hard in exactly the way the question of Parmenides is hard: “How is it possible for one thing not to be another, when there is only being?” The question of consciousness is similar: “How is it possible for something to have subjective experience, when there are only objective things?” And the question can and should be answered in a similar fashion.

It would be virtually impossible to address every related issue in a simple blog post of this form, so I will simply mention some things that I have mainly set aside here:

1) The issue of formal causes, discussed more in my earlier treatment of this issue. This is relevant because “is this a zombie?” is in effect equivalent to asking whether the thing lacks a formal cause. This is worthy of a great deal of consideration and would go far beyond either this post or the earlier one.

2) The issue of “physical” and “material.” As I stated in this post, this is mainly a distraction. Most of the time, the real question is how the subjective is possible given that we believe that the world is objective. The only relevance of “matter” here is that it is obvious that a material thing is an objective thing. But of course, an immaterial thing would also have to be objective in order to be a thing at all. Aristotle and many philosophers of his school make the specific argument that the human mind does not have an organ, but such arguments are highly questionable, and in my view fundamentally flawed. My earlier posts suffice to call such a conclusion into question, but do not attempt to disprove it, and the topic would be worthy of additional consideration.

3) Specific questions about “what, exactly, would actually be conscious?” Now neglecting such questions might seem to be a cop-out, since isn’t this what the whole problem was supposed to be in the first place? But in a sense we did answer it. Take an apparent claim of something to be conscious. The question would be this: “Given how it was caused by objective facts to make that claim, would it be a reasonable claim for a subjective claimer to make?” In other words, we cannot assume in advance that it is subjectively making a claim, but if it would be a reasonable claim, it will (in general) be a true one, and therefore also a subjective one, for the same reason that we (in general) make true claims when we reasonably claim that one thing is not another. We have not answered this question only in the same sense that we have not exhaustively explained which things are distinct from which other things, and how one would know. But the question, e.g., “when if ever would you consider an artificial intelligence to be conscious?” is in itself also worthy of direct discussion.

Alexander Pruss argues:
Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

I responded in the comments there:

The transition between being conscious and not being conscious that happens when you fall asleep seems pretty vague. I don’t see why you find it implausible that “being conscious” could be vague in much the same way “being red” or “being intelligent” might be vague. In fact the evidence from experience (falling asleep etc) seems to directly suggest that it is vague.

Pruss responds:

When I fall asleep, I may become conscious of less and less. But I can’t get myself to deny that either it is definitely true at any given time that I am at least a little conscious or it is definitely true that I am not at all conscious.

But we cannot trust Pruss’s intuitions about what can be vague or otherwise. Pruss claims in an earlier post that there is necessarily a sharp transition between someone’s not being old and someone’s being old. I discussed that post here. This is so obviously false that it gives us a reason in general not to trust Alexander Pruss on the issue of sharp transitions and vagueness. The source of this particular intuition may be the fact that you cannot subjectively make a claim, even vaguely, without some subjective experience, as well as his general impression that vagueness violates the principles of excluded middle and non-contradiction. But in a similar way, you cannot be vaguely old without being somewhat old. This does not mean that there is a sharp transition from not being old to being old, and likewise it does not necessarily mean that there is a sharp transition from not having subjective experience to having it.

While I have discussed the issue of vagueness elsewhere on this blog, this will probably continue to be a recurring feature, if only because of those who cannot accept this feature of reality and insist, in effect, on “this or nothing.”

And so, one day, Mary’s captors decided it was time for her to see colors. As a trick, they prepared a bright blue banana to present as her first color experience ever. Mary took one look at it and said “Hey! You tried to trick me! Bananas are yellow, but this one is blue!” Her captors were dumfounded. How did she do it? “Simple,” she replied. “You have to remember that I know everything—absolutely everything—that could ever be known about the physical causes and effects of color vision. So of course before you brought the banana in, I had already written down, in exquisite detail, exactly what physical impression a yellow object or a blue object (or a green object, etc.) would make on my nervous system. So I already knew exactly what thoughts I would have (because, after all, the “mere disposition” to think about this or that is not one of your famous qualia, is it?). I was not in the slightest surprised by my experience of blue (what surprised me was that you would try such a second-rate trick on me). I realize it is hard for you to imagine that I could know so much about my reactive dispositions that the way blue affected me came as no surprise. Of course it’s hard for you to imagine. It’s hard for anyone to imagine the consequences of someone knowing absolutely everything physical about anything!”

I don’t intend to fully analyze this scenario here, and for that reason I left it to the reader in the previous post. However, I will make two remarks, one on what is right (or possibly right) about this continuation, and one on what might be wrong about this continuation.

The basically right or possibly right element is that if we assume that Mary knows all there is to know about color, including in its subjective aspect, it is reasonable to believe (even if not demonstrable) that she will be able to recognize the colors the first time she sees them. To gesture vaguely in this direction, we might consider that the color red can be somewhat agitating, while green and blue can be somewhat calming. These are not metaphorical associations, but actual emotional effects that they can have. Thus, if someone can recognize how their experience is affecting their emotions, it would be possible for them to say, “this seems more like the effect I would expect of green or blue, rather than red.” Obviously, this is not proving anything. But then, we do not in fact know what it is like to know everything there is to know about anything. As Dennett continues:

Surely I’ve cheated, you think. I must be hiding some impossibility behind the veil of Mary’s remarks. Can you prove it? My point is not that my way of telling the rest of the story proves that Mary doesn’t learn anything, but that the usual way of imagining the story doesn’t prove that she does. It doesn’t prove anything; it simply pumps the intuition that she does (“it seems just obvious”) by lulling you into imagining something other than what the premises require.

It is of course true that in any realistic, readily imaginable version of the story, Mary would come to learn something, but in any realistic, readily imaginable version she might know a lot, but she would not know everything physical. Simply imagining that Mary knows a lot, and leaving it at that, is not a good way to figure out the implications of her having “all the physical information”—any more than imagining she is filthy rich would be a good way to figure out the implications of the hypothesis that she owned everything.

By saying that the usual way of imagining the story “simply pumps the intuition,” Dennett is neglecting to point out what is true about the usual way of imagining the situation, and in that way he makes his own account seem less convincing. If Mary knows in advance all there is to know about color, then of course if she is asked afterwards, “do you know anything new about color?”, she will say no. But if we simply ask, “Is there anything new here?”, she will say, “Yes, I had a new experience which I never had before. But intellectually I already knew all there was to know about that experience, so I have nothing new to say about it. Still, the experience as such was new.” We are making the same point here as in the last post. Knowing a sensible experience intellectually is not to know in the mode of sense knowledge, but in the mode of intellectual knowledge. So if one then engages in sense knowledge, there will be a new mode of knowing, but not a new thing known. Dennett’s account would be clearer and more convincing if he simply agreed that Mary will indeed acknowledge something new; just not new knowledge.

In relation to what I said might be wrong about the continuation, we might ask what Dennett intended to do in using the word “physical” repeatedly throughout this account, including in phrases like “know everything physical” and “all the physical information.” In my explanation of the continuation, I simply assume that Mary understands all that can be understood about color. Dennett seems to want some sort of limitation to the “physical information” that can be understood about color. But either this is a real limitation, excluding some sorts of claims about color, or it is no limitation at all. If it is not a limitation, then we can simply say that Mary understands everything there is to know about color. If it is a real limitation, then the continuation will almost certainly fail.

I suspect that the real issue here, for Dennett, is the suggestion of some sort of reductionism. But reductionism to what? If Mary is allowed to believe things like, “Most yellow things look brighter than most blue things,” then the limit is irrelevant, and Mary is allowed to know anything that people usually know about colors. But if the meaning is that Mary knows this only in a mathematical sense, that is, that she can have beliefs about certain mathematical properties of light and surfaces, rather than beliefs that are explicitly about blue and yellow things, then it will be a real limitation, and this limitation would cause his continuation to fail. We have basically the same issue here that I discussed in relation to Robin Hanson on consciousness earlier. If all of Mary’s statements are mathematical statements, then of course she will not know everything that people know about color. “Blue is not yellow” is not a mathematical statement, and it is something that we know about color. So we already know from the beginning that not all the knowledge that can be had about color is mathematical. Dennett might want to insist that it is “physical,” and surely blue and yellow are properties of physical things. If that is all he intends to say, namely that the properties she knows are properties of physical things, there is no problem here, but it does look like he intends to push further, to the point of possibly asserting something that would be evidently false.

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example, may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—unhappy? Surely, true wisdom would be second-order rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.” But we do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will. You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference. (You’re welcome!) You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
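The conjunction rule behind this point can be seen in a few lines of arithmetic. The sketch below is purely illustrative: the numbers are assumed for the sake of the example, not estimates about the actual case.

```python
# Conjunction rule: for any claims A and B, P(A and B) <= P(A).
# So the bare claim A ("my life has a positive meaning") is always at
# least as probable as A together with a particular explanation B
# ("...because a transcendent good is at work redeeming evil").
# The probabilities here are illustrative assumptions only.

p_bare_claim = 0.9        # assumed P(A)
p_explanation_if_A = 0.5  # assumed P(B | A): one specific explanation, if A holds

p_conjunction = p_bare_claim * p_explanation_if_A  # P(A and B)

# The specific, explained version can never beat the bare claim:
assert p_conjunction <= p_bare_claim
print(p_conjunction)  # 0.45
```

Whatever conditional probability one assigns to the explanation, the product can never exceed the bare claim's probability, which is why picking a particular explanation and calling it more likely commits the conjunction fallacy.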

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.