You are feeling extremely lucky. You have just been approached by an eccentric billionaire who has offered you the following deal. He places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. (Your spouse, a crack biochemist, confirms the properties of the toxin.) The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. (This is confirmed by your daughter, a lawyer, after she examines the legal and financial documents that the billionaire has signed.) All you have to do is sign the agreement and then intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin. (The presence or absence of the intention is to be determined by the latest ‘mind-reading’ brain scanner and computing device designed by the great Doctor X. As a cognitive scientist, materialist, and former student of Doctor X, you have no doubt that the machine will correctly detect the presence or absence of the relevant intention.)

Confronted with this offer, you gleefully sign the contract, thinking ‘what an easy way to become a millionaire’. Not long afterwards, however, you begin to worry. You had been thinking that you could avoid drinking the toxin and just pocket the million. But you realize that if you are thinking in those terms when midnight rolls around, you will not be intending to drink the toxin tomorrow. So maybe you will actually have to drink the stuff to collect the money. It will not be pleasant, but it is sure worth a day of suffering to become a millionaire.

However, as occurs to you immediately, it cannot really be necessary to drink the toxin to pocket the money. That money will either be or not be in your bank account by 10 a.m. tomorrow, you will know then whether it is there or not, and your drinking or not drinking the toxin hours later cannot affect the completed financial transaction. So instead of planning to drink the toxin, you decide to intend today to drink it and then change your mind after midnight. But if that is your plan, then it is obvious that you do not intend to drink the toxin. (At most you intend to intend to drink it.) For having such an intention is incompatible with planning to change your mind tomorrow morning.

The discussion goes on from here for some time, but the resolution of the puzzle is easier than Kavka realizes. There is only a problem because it is implicitly assumed that the belief that you will or will not drink the toxin is something different from the intention to drink it. But in the case of voluntary actions, these are one and the same. The reason you cannot intend to drink the toxin without thinking that you will end up drinking it is simply that the intention to drink the toxin is the belief that you will end up drinking it. If the brain scanner works correctly, it registers that you intend to drink the toxin if you in fact think you will end up drinking it, and it registers that you do not intend this if you in fact think you will not drink it.

Is there a problem on the practical level? That is, is it possible for someone to get the million, or is it impossible because everyone in such a situation would expect to reconsider tomorrow morning, and therefore would not believe that they will end up drinking it?

Possibly, for some people. In some situations, beliefs about what you will in fact do, based simply on the apparent facts, can entirely prevent certain decisions and intentions. Thus if someone has tried dozens of times in the past to give up smoking, and consistently failed, it will become more and more difficult for them to intend to give up smoking, and may very well become impossible.

However, Kavka gives a theoretical argument that this should be impossible in the case of his thought experiment:

Thus, we can explain your difficulty in earning a fortune: you cannot intend to act as you have no reason to act, at least when you have substantial reason not to act. And you have (or will have when the time comes) no reason to drink the toxin, and a very good reason not to, for it will make you quite sick for a day.

Again, it may well be that this reasoning would cause an individual to fail to obtain the million. But it is not necessary for this to happen. For the person does have a reason to intend to drink the toxin in the first place: namely, in order to obtain the million. And tomorrow morning their decision, i.e. their belief that they will drink the toxin, will be an efficient cause of them actually drinking the toxin, unless they reconsider. Thus if a person expects to reconsider, they may well fail to obtain the million. But someone wanting to obtain the million will also therefore plan not to reconsider. And tomorrow morning their belief that they will not reconsider will be an efficient cause of them not reconsidering, unless they reconsider their plan not to reconsider. And so on.

Thus, someone can only obtain the million if they plan to drink the toxin, they plan not to reconsider this plan, and so on. And someone with this plan can obtain the million. Maybe they will end up drinking the toxin and maybe they won’t; but the evening before, they believe that they will in fact drink it. If they do not believe this, they fail to obtain the million. And they may well in fact drink it, simply by carrying out the original plan: going about their day without thinking about it, and simply drinking that afternoon, without any additional consideration of reasons to drink or not drink.

There is also a way to obtain the million and avoid drinking the toxin, but it cannot happen on purpose: it can only happen by luck. You plan on every level not to reconsider, and expect this to happen, but luckily you end up being mistaken, and you do reconsider, despite expecting not to. In this case you both obtain the million and avoid the drink.

Now we said in the first post on truth and expectation that in part we seem to determine the meaning of a statement by the expectations that it implies. If this is the case, then why should we not say that the Mormon’s beliefs are definitely true? In fact, in the post linked above about truth in religion, I suggested that people frequently do mean something like this when they say that their own religion is true. Nonetheless, it is easy to see that truth in this particular sense does not imply that each and every claim in the religion, understood as a claim about the world, is true.

But why not, if one’s expectations become more accurate, just as they do with statements that are clearly true? As I noted in the earlier post, the statement “that man is pretty tall” is a statement about the man. It is not a statement about myself, nor about my expectations, even if its meaning is partly determined by these things. So ultimately the truth or falsehood of the claim about the man is going to be determined by facts about the man, even if these need to be understood as facts about the man in relation to me and my expectations.

Consider again Scott Sumner’s anti-realism. Scott claims that we cannot distinguish between “our perception of reality, and actual reality.” As I said there, this is right in the sense that we cannot consistently hold the opinion, “This is my opinion about reality, but my opinion is false: reality is actually different.” But we can recognize the distinct meanings in “my perception of reality” and “actual reality.” Scott’s failure to recognize this distinction leads him to suggest on occasion that our beliefs about certain matters are just beliefs about what people in the future will believe. For example, he says in this comment:

You misunderstood Rorty. He is not recommending that you try to trick you colleagues into believing something that is not true. Rather he is merely describing what society regards as true. And who can deny that society tends to regard things as true, when they believe them to be true. No you might say “but what society believes to be true is not always really true.” But Rorty would say that statement means nothing, or else it means that you predict that a future society will have a different view of what is true.

Most people, without even realizing it, assume that there is some sort of “God-like” view of what is “really true” which is separate from what we believe is true, and or will believe in the future to be true. Rorty is an atheist. He believes that what society’s experts believe is the best we can do, the closest we can come to describing reality. Rorty would suggest to Hayek “if you want to convince other economists, use persuasive arguments.” I think that is very reasonable advice. It is what I try to do on this blog.

This is much like Bob Seidensticker’s claim that moral beliefs are beliefs about what society in the future will believe to be moral. In both cases, there is an unacceptable circularity. If our belief is about what people in the future will believe, what are the future people’s beliefs about? There is only one credible explanation here: people’s beliefs are about what they say their beliefs are about, namely the very things they are talking about. Moral beliefs are about whether actions are good or bad, and beliefs describing the world are about the world, not about the people who hold the beliefs, present or future.

This does not imply that there is “some sort of ‘God-like’ view of what is ‘really true’.” It just implies that our beliefs are distinct from other things in the world. Sumner is suggesting that a situation where everyone permanently holds a false belief is inconceivable. But this is quite conceivable, and we can easily see how it could happen. Just now I counted the cups in my cupboard and there are exactly 14 (if anyone is surprised by that value, most of them are plastic). It is entirely conceivable, however, that I miscounted. And since I also have some cups that are not currently in the cupboard, if I did miscount, I will probably never get it right, since I will just assume there was some other assortment. And since there is presumably no way for the public to discover the truth about this, society will be permanently deluded about the number of cups in my cupboard at 11:16 AM on July 14, 2018.

What would it mean, then, if it was not “actually true” that there were 14 cups in my cupboard? It would be determined by facts about the cups and the cupboard, not by facts about me, about society, about my expectations or society’s expectations. It would not be actually true if there were, in fact, only 13 cups there, even if no one would ever know this.

This all remains related to expectations, however. I don’t think I miscounted, so I think that if I go and count them again, I will get 14 again. If I did miscount, there is a good chance that counting again would result in a different number. Now I don’t intend to bother, so this expectation is counterfactual. Nonetheless, there are at least conceivable counterfactual situations where I would discover that I was mistaken. In a similar way, to say that the Mormon holds false religious beliefs implies that there are at least counterfactual situations where the falsehood would be uncovered by a violation of his expectations: e.g. if he had been alive at the time and had constantly followed Joseph Smith around to see whether there were any golden plates, and so on.

Nonetheless, one cannot ultimately separate the truth of a statement from facts about the thing the statement is about. A statement is true if things are the way it says they are, “the way” being correctly understood here. If this were not the case, the statement would not be about the things in the first place.

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations, for reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether someone’s expectations were correct. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it is quite likely that it seems to us that believing it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason, and desiring to believe something for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and also that the inclination exists for non-truth-related reasons; in that case you would not expect believing it to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth related motives. In an earlier discussion of predictive processing, I suggested that the situation where people feel like they have to choose a goal is a result of such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it will then be easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy, at least about what it is going to do. This is true, but it is not a fair statement of my proposal, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But in my theory the improved accuracy had to be implied first, in order to cause the desire. How, then, could believing something improve accuracy before the desire exists?

The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty, and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does not that mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. In other words, the Mormon is living a good Mormon life. And their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what it would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.

And so, one day, Mary’s captors decided it was time for her to see colors. As a trick, they prepared a bright blue banana to present as her first color experience ever. Mary took one look at it and said “Hey! You tried to trick me! Bananas are yellow, but this one is blue!” Her captors were dumfounded. How did she do it? “Simple,” she replied. “You have to remember that I know everything—absolutely everything—that could ever be known about the physical causes and effects of color vision. So of course before you brought the banana in, I had already written down, in exquisite detail, exactly what physical impression a yellow object or a blue object (or a green object, etc.) would make on my nervous system. So I already knew exactly what thoughts I would have (because, after all, the “mere disposition” to think about this or that is not one of your famous qualia, is it?). I was not in the slightest surprised by my experience of blue (what surprised me was that you would try such a second-rate trick on me). I realize it is hard for you to imagine that I could know so much about my reactive dispositions that the way blue affected me came as no surprise. Of course it’s hard for you to imagine. It’s hard for anyone to imagine the consequences of someone knowing absolutely everything physical about anything!”

I don’t intend to fully analyze this scenario here, and for that reason I left it to the reader in the previous post. However, I will make two remarks, one on what is right (or possibly right) about this continuation, and one on what might be wrong about this continuation.

The basically right or possibly right element is that if we assume that Mary knows all there is to know about color, including in its subjective aspect, it is reasonable to believe (even if not demonstrable) that she will be able to recognize the colors the first time she sees them. To gesture vaguely in this direction, we might consider that the color red can be somewhat agitating, while green and blue can be somewhat calming. These are not metaphorical associations, but actual emotional effects that they can have. Thus, if someone can recognize how their experience is affecting their emotions, it would be possible for them to say, “this seems more like the effect I would expect of green or blue, rather than red.” Obviously, this is not proving anything. But then, we do not in fact know what it is like to know everything there is to know about anything. As Dennett continues:

Surely I’ve cheated, you think. I must be hiding some impossibility behind the veil of Mary’s remarks. Can you prove it? My point is not that my way of telling the rest of the story proves that Mary doesn’t learn anything, but that the usual way of imagining the story doesn’t prove that she does. It doesn’t prove anything; it simply pumps the intuition that she does (“it seems just obvious”) by lulling you into imagining something other than what the premises require.

It is of course true that in any realistic, readily imaginable version of the story, Mary would come to learn something, but in any realistic, readily imaginable version she might know a lot, but she would not know everything physical. Simply imagining that Mary knows a lot, and leaving it at that, is not a good way to figure out the implications of her having “all the physical information”—any more than imagining she is filthy rich would be a good way to figure out the implications of the hypothesis that she owned everything.

By saying that the usual way of imagining the story “simply pumps the intuition,” Dennett is neglecting to point out what is true about the usual way of imagining the situation, and in that way he makes his own account seem less convincing. If Mary knows in advance all there is to know about color, then of course if she is asked afterwards, “do you know anything new about color?”, she will say no. But if we simply ask, “Is there anything new here?”, she will say, “Yes, I had a new experience which I never had before. But intellectually I already knew all there was to know about that experience, so I have nothing new to say about it. Still, the experience as such was new.” We are making the same point here as in the last post. Knowing a sensible experience intellectually is not to know in the mode of sense knowledge, but in the mode of intellectual knowledge. So if one then engages in sense knowledge, there will be a new mode of knowing, but not a new thing known. Dennett’s account would be clearer and more convincing if he simply agreed that Mary will indeed acknowledge something new; just not new knowledge.

In relation to what I said might be wrong about the continuation, we might ask what Dennett intended to do in using the word “physical” repeatedly throughout this account, including in phrases like “know everything physical” and “all the physical information.” In my explanation of the continuation, I simply assume that Mary understands all that can be understood about color. Dennett seems to want some sort of limitation to the “physical information” that can be understood about color. But either this is a real limitation, excluding some sorts of claims about color, or it is no limitation at all. If it is not a limitation, then we can simply say that Mary understands everything there is to know about color. If it is a real limitation, then the continuation will almost certainly fail.

I suspect that the real issue here, for Dennett, is the suggestion of some sort of reductionism. But reductionism to what? If Mary is allowed to believe things like, “Most yellows typically look brighter than most blue things,” then the limit is irrelevant, and Mary is allowed to know anything that people usually know about colors. But if the meaning is that Mary knows this only in a mathematical sense, that is, that she can have beliefs about certain mathematical properties of light and surfaces, rather than beliefs that are explicitly about blue and yellow things, then it will be a real limitation, and this limitation would cause his continuation to fail. We have basically the same issue here that I discussed in relation to Robin Hanson on consciousness earlier. If all of Mary’s statements are mathematical statements, then of course she will not know everything that people know about color. “Blue is not yellow” is not a mathematical statement, and it is something that we know about color. So we already know from the beginning that not all the knowledge that can be had about color is mathematical. Dennett might want to insist that it is “physical,” and surely blue and yellow are properties of physical things. If that is all he intends to say, namely that the properties she knows are properties of physical things, there is no problem here, but it does look like he intends to push further, to the point of possibly asserting something that would be evidently false.

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
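As a quick sanity check on the arithmetic, the following minimal sketch simply computes the number of orderings of a 52-card deck; nothing in it is specific to Razaali’s deck or shuffling technique.

```python
# Number of possible orderings of a standard 52-card deck.
import math

orderings = math.factorial(52)
print(orderings)           # 80658175170943878571660636856403766975289505440883277824000000000000
print(f"{orderings:.7e}")  # 8.0658175e+67, the figure quoted in the story
```

Any particular final ordering, including the original one, therefore has a chance of roughly 1 in 8 × 10^67 under the assumption of a genuinely random shuffle.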

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.
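To put rough numbers on that comparison, here is a small illustrative sketch. The one-in-a-million lying rate is an invented figure, chosen to be far more generous to the storyteller than any plausible real rate; even so, the “lying” hypothesis dominates overwhelmingly.

```python
# Odds of "he's lying" versus "it's true, and the deck returned to its
# original order by pure chance." The lying rate is an invented,
# deliberately generous figure used only for illustration.
import math

p_chance = 1 / math.factorial(52)  # story true, and the order arose by pure chance
p_lie = 1e-6                       # assumed rate of flat-out, unmotivated lying

print(f"{p_lie / p_chance:.1e}")   # ~8.1e+61: lying is astronomically more likely
```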

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say, and it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.
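To make the point concrete, here is a toy sketch of my own (not Clark’s or Friston’s formalism, and the numbers are invented): an agent that picks whichever action minimizes total prediction error, but whose generative model carries a fixed, built-in “expectation” of being nourished. With that prior in place, sitting in the darkened room no longer minimizes error, even though it makes the visual stream perfectly predictable.

```python
# Toy illustration only: action selection by minimizing total prediction error,
# against the backdrop of a hard-wired "creature-defining expectation."

def squared_error(predicted, actual):
    return (predicted - actual) ** 2

INTEROCEPTIVE_PRIOR = 1.0  # the model always "expects" a nourished state

def total_error(action, hunger):
    if action == "stay_in_dark_room":
        visual = squared_error(0.0, 0.0)  # a dark room is perfectly predictable
        interoceptive = squared_error(INTEROCEPTIVE_PRIOR, 1.0 - hunger)  # hunger violates the prior
    else:  # "go_and_eat"
        visual = squared_error(0.5, 0.3)  # the world outside is somewhat surprising
        interoceptive = squared_error(INTEROCEPTIVE_PRIOR, 1.0)  # eating satisfies the prior
    return visual + interoceptive

hunger = 0.8
best = min(("stay_in_dark_room", "go_and_eat"), key=lambda a: total_error(a, hunger))
print(best)  # go_and_eat
```

The sketch illustrates only that “minimize prediction error” does not operate in a vacuum: which errors count, and against which built-in expectations, is fixed by the kind of creature the agent already is.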

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—unhappy? Surely, true wisdom would be second-order rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.” But we do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will. You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference. (You’re welcome!) You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus, since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking, which I discussed previously: he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that they are inevitable, and thus without realizing to what extent you are making them.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and better known than any particular explanation of it that might be adopted. To pick a particular explanation and claim that it is more likely than the general claim would be to fall into the conjunction fallacy.
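
In probabilistic terms (the letters here are merely labels introduced for illustration), if M stands for “my life has a positive meaning” and E for any particular explanation of why it does, then

```latex
P(M \wedge E) \le P(M)
```

whatever E may be: the conjunction of the general claim with a specific explanation can never be more probable than the general claim taken by itself.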

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are distinct goods, and one who desires both will likely engage in some trade between them. Indeed, relative to truth-seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
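
To put a rough number on this selection effect, here is a small simulation. The pool sizes and distributions are arbitrary assumptions of mine, not Alexander’s; they are chosen only to exhibit the mechanism described above, in which true stories are constrained by messy reality while fabricated ones are free to fit the narrative exactly.

```python
# Illustrative simulation of the selection effect described above. The pool
# sizes and distributions are arbitrary assumptions chosen only to show the
# mechanism, not estimates of any real rates.
import random

random.seed(0)

stories = []
for _ in range(9900):    # true stories: interest is capped by complicating details
    stories.append(("true", random.uniform(0.0, 0.7)))
for _ in range(100):     # fabricated stories: free to fit the narrative exactly
    stories.append(("fabricated", random.uniform(0.0, 1.0)))

top = sorted(stories, key=lambda s: s[1], reverse=True)[:20]
fabricated_share = sum(1 for label, _ in top if label == "fabricated") / len(top)
print(f"fabricated stories are 1% of the pool but {fabricated_share:.0%} of the top 20")
```

Even though the fabricated stories make up only one percent of the pool, they dominate the most interesting items, which is exactly the pattern described in the quoted passage.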

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though, to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.