Monthly Archives: March 2009

To learn to be more (epistemically) rational, i.e., to better discern and tell the truth, it would sure help to have good ways to regularly test ourselves. Eliezer and I have both asked folks to give thought to how we could better test our (epistemic) rationality.

It seems relatively easy to test someone's ability to make accurate and calibrated forecasts in novel contexts. Give them some info on a new topic and limited time and resources to make estimates, and then evaluate their accuracy. We might presume that any residual after controlling for IQ, info, effort, and related expertise was (epistemic) "rationality."
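One standard way to score the accuracy of such probabilistic forecasts is a Brier score (my illustration, not something from the post); a minimal sketch, with made-up example numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities (0-1)
    and actual binary outcomes (0 or 1); lower is better, 0 is perfect."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasts and what actually happened:
print(brier_score([0.8, 0.2, 0.7], [1, 0, 1]))
```

A forecaster's residual score, after adjusting for expertise and effort, might then serve as the kind of rationality measure the post describes.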

My main worry about this approach is it doesn't get at the fact that some topics test one's rationality more severely than others. It can be much harder to be honest when you care a lot about a topic, when others care about your opinion, or when you don't expect your opinions to be scored against reality anytime soon. How can we test rationality in these cases?

Over twenty years ago some psychologists worked out a twenty item questionnaire for evaluating how much people lie to themselves to look good, on topics important to them. (A related twenty question survey looks at lies to others.) They "validated" these "scales" by running a lot of tests comparing them to other scales.

Alas, I doubt that these tests would work as well if respondents knew that they were being tested for rationality. And surely we couldn't tell someone their score and then give them the test again and expect it to be as informative. So these tests are a valuable but limited resource.

Which is why I haven't linked to them here in this post, yet. First I want folks to ponder: how best could we spend this limited resource to test our rationality?

Even our "direct" perceptions, such as of how much a box weighs, are greatly influenced by our expectations. From a recent New Scientist:

Get hold of two cardboard boxes of different sizes and put a brick in each one. Check they weigh the same, then get somebody to lift them and tell you which is the heavier. The vast majority of people will say that the smaller box is heavier, even though it isn't, and will continue to maintain that it is even after looking inside both boxes and lifting them several times. … Curiously, experiments show that even though people initially use greater force to lift the larger box than the smaller one, on subsequent lifts they unconsciously equalise the amount of force they use to lift them. Despite their bodies apparently "knowing" that the boxes weigh the same, their minds still perceive the smaller box as being heavier. …

[Someone] showed that we can unlearn the size-weight illusion. He got volunteers to spend several days manipulating boxes that became lighter the larger they were. At the end of the process he found that their size-weight illusion was reversed. … This is good evidence that the illusion arises out of experience of the world, where larger objects tend to weigh more than smaller objects of the same kind.

This makes me more forgiving of people whose mistaken beliefs are contradicted by evidence right "before their eyes." Our eyes don't see nearly as much as we'd like.

It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality – because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization … If the supporters of other causes are enlightened enough to think similarly…

Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters – a common task, centralized to save effort – and think of themselves as spreading a little rationality on the side. … Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

So is there a workable natural alliance between more-rational-than-average folks? Consider two related but unpopular alliances:

Extremists – People who hold extreme views seem to have a common cause in persuading others that central/conventional views are less reliable than they may seem; they agree outsiders deserve more chances to prove themselves without being dismissed just for holding extreme views.

Folks Who Think They'd Win Bets – People who think their views will eventually be vindicated seem to have a common cause in promoting the creation of, use of, and deference to betting markets; they expect market odds to discount opponents who deep down know their arguments are weak.

My experience is that such alliance members are seen as low status, making others reluctant to join them. Since on average crazy folks tend to be more attracted to extreme views than sane folks, most kinds of extremists try to distance themselves from other kinds. And since on average lowbrow folks find open betting markets and track records more engaging, most elite-aspiring intellectuals avoid open betting markets and forecast track records. I conclude that the proposed alliance of rational folks will only fly if it can find a way for its members to be seen as high, not low, status.

Let me try an experiment: using a blog post to develop a taxonomy. Here I'll try to develop a list/taxonomy of (at least semi-coherent) answers to the question I posed yesterday: why is it harder to formally predict pasts, versus futures (from presents)? Mostly these are explanations of the "past hypothesis", but I'm trying to stay open-minded toward a wide range of explanations.

I'll start with a list of answers, and then add more and group them as I read comments, think, etc. I'll feel free to edit the post from here on:

Thermodynamics is the study of heat, temperature, pressure, and related phenomena. Physicists have long considered it the physics area least likely to be overturned by future discoveries, in part because they understand it so well via "statistical mechanics." Alas, not only are we far from understanding thermodynamics, the situation is much worse than most everyone (including me until now) admits! In this post I'll try to make this scandal clear.

For an analogy, consider the intelligent design question: did "God" or a "random" process cause life to appear? To compute Bayesian probabilities here, we must multiply the prior chance of each option by the likelihood of life appearing given that option, and then renormalize. So all else equal, the less likely that life would arise randomly, the more likely God caused life.
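The prior-times-likelihood arithmetic above can be sketched in a few lines; the numbers here are purely illustrative, not claims about the actual probabilities:

```python
def posterior(priors, likelihoods):
    """Bayes' rule: multiply each hypothesis's prior by the likelihood
    of the evidence under that hypothesis, then renormalize to sum to 1."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Made-up numbers: equal priors, but suppose life arising "randomly"
# is judged extremely unlikely. The posterior then shifts sharply
# toward the alternative, just as the post's argument says.
print(posterior({"God": 0.5, "random": 0.5},
                {"God": 1.0, "random": 1e-6}))
```

The point of the analogy survives the toy numbers: holding priors fixed, driving down the likelihood of one option mechanically raises the posterior of the other.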

Imagine that while considering ways life might arise randomly, we had trouble finding any scenario wherein a universe (not just a local region) randomly produced life with substantial probability. Then imagine someone proposed this solution: a new law of nature saying "life was sure to appear even though it seems unlikely." Would this solve the problem? Not in my book.

We are now pretty much in this same situation "explaining" half of thermodynamics. What we have now are standard distributions (i.e., probability measures) over possible states of physical systems, distributions which do very well at predicting future system states. That is, if we condition these distributions on what we know about current system states, and then apply local physics dynamics to system states, we get excellent predictions about future states. We predict heat flows, temperatures, pressures, fluctuations, engines, refineries, etc., all with great precision. This seems a spectacular success.

BUT, this same approach seems spectacularly wrong when applied to predicting past states of physical systems. It gets heat flows, temperatures, and pretty much everything wrong; not just a little wrong, but a lot wrong. For example, we might think we know about the past via our memories and records, but this standard approach says our records are far more likely to result from random fluctuations than to actually be records of what we think they are.

A young colleague recently said he didn't want to end up like older folks he knew who didn't keep up with new music fashions. Some of us older folks suggested he probably would become like us, and he would probably like it. He was horrified.

People often wonder what it will be like for them to be old, or married, or with a successful career, etc. They usually conclude they just can't know, and must wait and see. Yet all around them are other folks who are old, married, etc. – why not just accept those experiences as good predictions of such futures?

People usually respond that they are too different from these other folks for their experiences to be a good guide. A paper in the latest Science suggests otherwise:

Two experiments revealed that (i) people can more accurately predict their affective reactions to a future event when they know how a neighbor in their social network reacted to the event than when they know about the event itself and (ii) people do not believe this.

We mistakenly prefer an "inside" view, imagining how we'd respond to particular details, but in fact the "outside" view of others' reactions is more reliable.

This seems to me more than a simple cognitive error. It seems folks feel that they would not be motivated enough to exercise, marry, work, etc. if they thought their future was going to be much like the futures of others around them. Are they right? More from that paper:

We should realize that we gain far less info in an echo chamber than from being around folks with diverse views. The latest Journal of Experimental Psychology says we just don't get this:

The experimental task involved estimating the number of calories in measured quantities of different foods (e.g., a cup of yogurt, a bowl of cooked rice). … Participants were asked to generate a calorie estimate for each food and then indicate their confidence in it. … [Then] they were provided with the opinions of three advisors, and were given the opportunity to revise their initial estimates. They were told that they would receive a bonus for making accurate judgments, … [and] were also asked to indicate their confidence in their final (revised) estimates and to bet on their accuracy. …

On half the trials (independent condition) the [screen] header stated that “these estimates were randomly drawn from a pool of 100 estimates made by participants in a previous study,” whereas on the remaining trials (opinion-dependent condition) the header stated that “these estimates were selected from those closest to your own initial opinion in a pool of 100 estimates made by participants in a previous study.” …

Receiving advice increased participants’ confidence in the dependent condition, but not in the independent condition. Participants indicated greater confidence in their final estimates in the opinion-dependent than in the independent condition. In accord with the confidence results, the participants bet more often in the dependent (58%) than in the independent condition (42%).

Please, please, don't let yourself succumb to the very common bias of gaining confidence in views because "everyone" at your favorite website agrees with them, when those people have been selected for this very agreement! Once you realize that many others elsewhere disagree, that disagreement should weigh heavily on your estimation.

Most academic papers are rejected by several journals before some journal finally accepts them. But a paper's rejection history is usually private; all readers know is where each paper was accepted.

Imagine a journal that published all its rejections, listing rejected authors, titles, and relevant dates. The possibility of embarrassment via appearing on such a list would "raise the bar" for authors, especially discouraging those who thought rejection more likely. It would also raise the bar for editors; readers could see how often rejected papers were accepted at equal or better journals, and potential authors could better evaluate their chances.

Since this signal of author and editor confidence would speak well of a journal, journals that did not publish rejections should look worse by comparison. Why then do no journals publish their rejections? Sporting contests publicly display losers; why not academic contests as well?

Added: most comments focus on the overall social effects; my puzzle is regarding individual incentives. People are usually eager to signal confidence in their abilities; why in this context do people avoid such signals?

Over the next two weeks my eldest son will be rejected by some colleges, accepted by others. And then we’ll likely have to make a hard choice, between cheap state schools and expensive prestigious ones (or online colleges). A colleague told me the best econ paper on this found it doesn’t matter. From its 1999 abstract:

We matched students who applied to, and were accepted by, similar colleges to try to eliminate this bias. Using the … High School Class of 1972, we find that students who attended more selective colleges earned about the same [20 years later] as students of seemingly comparable ability who attended less selective schools. Children from low-income families, however, earned more if they attended selective colleges.

Higher education experts have this message … Pay less attention to prestige and more to “fit” — the marriage of interests and comfort level with factors like campus size, access to professors, instruction philosophy. … A 1999 study by Alan B. Krueger … and Stacy Dale … found that students who were admitted to both selective and moderately selective colleges earned the same no matter which they attended.

The very fact that we have moral impulses to support the public good is necessarily intertwined with the fact that we have moral impulses to punish those who do not (and to punish those who do not punish those who do not, and so on) … This instinct is especially harmful when used to punish those who are perceived as not punishing free riders. This is the source of the bigotry against market economics among the do-gooders: It is believed that those who describe the positive outcomes of free enterprise are not doing their job to behave punitively toward free riders, and that therefore they, too, must be punished.

So could economists compensate by going out of our way to punish murderers, rapists, and thieves, since we agree with ordinary folks that these are non-cooperators? Or will we then be evil for punishing such folks too much?

Arnold goes on to trip over his positivism, leaving his head in the sand: