I'm currently taking a cognitive psychology class, and will be designing and conducting a research project in the field — and I'd like to do it on human judgment, specifically heuristics and biases. I'm currently doing preliminary research to come up with a more specific topic to base my project on, and I figured Less Wrong would be the place to come to find questions about flawed human judgment. So: any ideas?

(I'll probably be using these ideas mostly as guidelines for forming my research question, since I doubt it would be academically honest to take them outright. The study will probably take the form of a questionnaire or online survey, but experimental manipulation is certainly possible and it might be possible to make use of other psych department resources.)

In any situation where people make decisions and have a theory of how they make them, they can be systematically wrong. This includes people predicting what they would do under a set of future circumstances.

This is an easier form of irrationality to break ground in than intransitive preferences (A>B, B>C, C>A).

Another trope in such experiments is having subjects make predictions both about average people and about themselves.

One research technique to be aware of is the one where questionnaires are handed out with a die, and respondents are instructed to roll it for every answer, always answering "yes" on a roll of six and "no" on a roll of one, regardless of the true answer to that question. This is the randomized response technique; it and its variants give respondents plausible deniability on sensitive questions while still letting the researcher recover aggregate rates.
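The design described above can be sketched as a small simulation. Under the die rules, the observed "yes" rate is P(yes) = 1/6 + (4/6)·p, where p is the true rate, so p can be recovered by inverting that formula. Everything here (function names, sample size, the "true" rate of 0.3) is invented for illustration:

```python
import random

def respond(truth):
    """One respondent: roll a die, answer 'yes' on a 6, 'no' on a 1,
    and answer truthfully otherwise."""
    roll = random.randint(1, 6)
    if roll == 6:
        return True
    if roll == 1:
        return False
    return truth

def estimate_true_rate(answers):
    """Observed P(yes) = 1/6 + (4/6) * p, so invert for p."""
    yes_rate = sum(answers) / len(answers)
    return (yes_rate - 1 / 6) * 6 / 4

random.seed(0)
true_p = 0.3  # invented "true" prevalence of the sensitive trait
answers = [respond(random.random() < true_p) for _ in range(100_000)]
print(round(estimate_true_rate(answers), 2))  # close to 0.3
```

The point of the design: no individual answer reveals anything about that respondent, yet the population rate is recoverable from the aggregate.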

Rationality drugs. Many nootropics can increase cognitive capacity, which according to Stanovich's picture of the cognitive science of rationality, should help with performance on some rationality measures. However, good performance on many rationality measures requires not just cognitive capacity but also cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias. So: Are there drugs that increase cognitive reflectiveness / "need for cognition"?

Debiasing. I'm developing a huge, fully-referenced table of (1) thinking errors, (2) the normative models they violate, (3) their suspected causes, (4) rationality skills that can ameliorate them, and (5) rationality exercises that can be used to develop those rationality skills. Filling out the whole thing is of course taking a while, and any help would be appreciated. A few places where I know there's literature but I haven't had time to summarize it yet include: how to debias framing effects, how to debias base rate neglect, and how to debias confirmation bias. (But I have, for example, already summarized everything on how to debias the planning fallacy.)

Convert numbers and rates into equivalent traits or dispositions: Convert "85% of the taxis in the city are green" to "85% of previous accidents involved drivers of green cabs". (Recent Kahneman interview)

Requisition social thinking: Convert "85%" to "85 out of 100", or "Which cards must you turn over?" to "Which people must you check further?" (Wason selection task).

how to debias framing effects

Have people been trained to think of "mortality rates" automatically as "survival rates" and the like? A good dojo game would be practicing to restate a framing in its opposite as quickly as possible, until the conversion became pre-conscious and one was consciously aware of what one heard and its opposite framing at the same time.
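A toy version of that drill could even be automated. The complement pairs below are invented examples, and real framings are rarely this mechanical, but the core move is just restating "X% of A" as "(100−X)% of A's complement":

```python
# Invented complement pairs for the reframing drill.
COMPLEMENTS = {
    "mortality": "survival", "survival": "mortality",
    "failure": "success", "success": "failure",
}

def reframe(percent, term):
    """Restate 'percent% term' in the opposite framing,
    '(100 - percent)% complement-term'."""
    return 100 - percent, COMPLEMENTS[term]

print(reframe(10, "mortality"))  # (90, 'survival')
```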

An enduring concern about democracies is that citizens conform too readily to the policy views of
elites in their own parties, even to the point of ignoring other information about the policies in
question. This article presents two experiments that undermine this concern, at least under one
important condition. People rarely possess even a modicum of information about policies; but when they
do, their attitudes seem to be affected at least as much by that information as by cues from party elites.
The experiments also measure the extent to which people think about policy. Contrary to many accounts,
they suggest that party cues do not inhibit such thinking. This is not cause for unbridled optimism about
citizens’ ability to make good decisions, but it is reason to be more sanguine about their ability to use
information about policy when they have it.

(Emphasis mine.)

If one knew the extent to which one was biased by cues, and one knew one's opinion as formed from cues and facts together, it would be possible to calculate what one's views would be without the cues.
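Under a deliberately simple linear-mixing assumption (mine, not the paper's): if observed opinion = w · cue position + (1 − w) · facts-only opinion, where w is the weight placed on party cues, then the cue-free view falls out algebraically. All names and numbers here are illustrative:

```python
def opinion_without_cues(observed, cue_position, cue_weight):
    """Back out the opinion one would hold with zero weight on cues,
    assuming observed = w * cue_position + (1 - w) * facts_only."""
    if not 0 <= cue_weight < 1:
        raise ValueError("cue weight must be in [0, 1) to recover a facts-only view")
    return (observed - cue_weight * cue_position) / (1 - cue_weight)

# e.g. opinion 7/10, party cue at 9/10, 40% of the weight on cues:
print(opinion_without_cues(7.0, 9.0, 0.4))  # ~5.67
```

With zero weight on cues the function returns the observed opinion unchanged, as it should.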

Now have a look at a very small variation that changes everything. There are two companies in the city; they're equally large. Eighty-five percent of cab accidents involve blue cabs. Now this is not ignored. Not at all ignored. It's combined almost accurately with a base rate. You have the witness who says the opposite. What's the difference between those two cases? The difference is that when you read this one, you immediately reach the conclusion that the drivers of the blue cabs are insane, they're reckless drivers. That is true for every driver. It's a stereotype that you have formed instantly, but it's a stereotype about individuals, it is no longer a statement about the ensemble. It is a statement about individual blue drivers. We operate on that completely differently from the way that we operate on merely statistical information that that cab is drawn from that ensemble.

...

A health survey was conducted in a sample of adult males in British Columbia of all ages and occupations. "Please give your best estimate of the following values: What percentage of the men surveyed have had one or more heart attacks? The average is 18 percent. What percentage of men surveyed both are over 55 years old, and have had one or more heart attacks? And the average is 30 percent." A large majority says that the second is more probable than the first.

Here is an alternative version of that which we proposed, a health survey, same story. It was conducted in a sample of 100 adult males, so you have a number. "How many of the 100 participants have had one or more heart attacks, and how many of the 100 participants both are over 55 years old and have had one or more heart attacks?" This is radically easier. From a large majority of people making mistakes, you get to a minority of people making mistakes. Percentages are terrible; the number of people out of 100 is easy.
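The frequency framing works because it makes the conjunction rule concrete: the men who are over 55 and have had heart attacks are a subset of the men who have had heart attacks, so the second count can never exceed the first. A sketch with an entirely invented sample of 100 men:

```python
import random

random.seed(1)
# Invented sample: each man is a pair (age, had_heart_attack).
men = [(random.randint(20, 80), random.random() < 0.2) for _ in range(100)]

heart_attack = sum(1 for age, ha in men if ha)
over_55_and_heart_attack = sum(1 for age, ha in men if ha and age > 55)

# The conjunction picks out a subset, so its count cannot be larger.
assert over_55_and_heart_attack <= heart_attack
print(heart_attack, over_55_and_heart_attack)
```

Stated as percentages, nothing forces that inequality on the imagination; stated as counts of the same 100 people, it is hard to miss.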

Regarding framing effects, one could write a computer program into which one could plug in numbers and have a decision converted into an Allais paradox.
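As a sketch of what such a program might do (my construction, using Allais's common-consequence trick; the lottery format and function name are invented): represent each option as a payoff→probability map, and strip a shared branch from both options to produce the second, logically equivalent choice pair.

```python
def strip_common_consequence(lottery, payoff, prob):
    """Remove a shared (payoff, prob) branch from a lottery and reassign
    that probability mass to a zero payoff, as in Allais's construction."""
    out = dict(lottery)
    out[payoff] = round(out.get(payoff, 0.0) - prob, 10)
    if out[payoff] <= 0:
        del out[payoff]
    out[0] = round(out.get(0, 0.0) + prob, 10)
    return out

# Classic Allais pair 1: a certain $1M vs. a gamble.
a1 = {1_000_000: 1.0}
b1 = {5_000_000: 0.10, 1_000_000: 0.89, 0: 0.01}

# Stripping the common 89% chance of $1M from both yields pair 2,
# where preferences often reverse even though nothing relevant changed.
a2 = strip_common_consequence(a1, 1_000_000, 0.89)
b2 = strip_common_consequence(b1, 1_000_000, 0.89)
print(a2)  # {1000000: 0.11, 0: 0.89}
print(b2)  # {5000000: 0.1, 0: 0.9}
```

Plugging a subject's actual numbers into something like this would let one present the same decision in both framings side by side.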

cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias

I sometimes worry that this disposition may be more important than everything we typically think of as "rationality skills" and more important than all the specific named biases that can be isolated and published, but that it's underemphasized on LW because "I'll teach you these cool thinking skills and you'll be a strictly more awesome person" makes for a better website sales pitch than "please be cognitively reflective to the point of near-neuroticism, I guess one thing that helps is to have the relevant genes".

Probably the latter. I'm reading through links from the links from the links of what you linked to; perhaps you could list all the biases you could use help on? I think my Ariely Lindt/Hershey's solution of imposing a penalty on myself whenever I accept free things was a clever way of debiasing that bias (though I would think so, wouldn't I?), and in the course of reading through all kinds of these articles (on a topic I am interested in) I could provide similar things.

I really do go through a lot of this stuff independently; I had read the Bullock paper and the Kahneman interview before you asked for help, and only after you asked did I realize I had information you wanted.

In any case, my above comment was probably downvoted because it was perceived as posturing, not because it isn't a common concern. That interpretation best explains both my getting downvoted for raising the issue and your getting downvoted for not taking it maximally seriously.

Are you including inducing biases as part of "debiasing"? For example, if people are generally too impulsive in spending money, a mechanism that merely made people more restrained could counteract that, but would be vulnerable to overshooting or undershooting. Here is the relevant study:

In Studies 2 and 3, we found that higher levels of bladder pressure resulted in an increased ability to resist impulsive choices in monetary decision making.

I suggest making it a separate category, at least to start with. It will be easier to recombine them into debiasing later if it turns out the distinction makes little sense and there is a range of anti-biasing from debiasing to rebiasing, than it would be to separate them after everything is filled in.

Do you know of research supporting debiasing scope insensitivity by introducing differences in kind that approximately preserve the subjective quantitative relationship? If not I will look for it, but I don't want to if you already have it at hand.

I am thinking in particular of Project Steve. Rather than counter a list of many scientists who "Dissent from Darwinism" with a list of many scientists who believe evolution works, they made a list of hundreds of scientists named Steve who believe evolution works.

In the mind, "many people" is approximately equal to "many people", be it hundreds or thousands, but "many people" is fewer than "many Steves". That's the theory, anyway.

Intuitively it sounds like it should work, but I don't know if there are studies supporting this.

I recently watched the talk referenced in this comment and the speaker mentioned an ongoing effort to find out which biases people correct for in their models of others and which they do not, which sounds like a promising area of research.