I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:

Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get into a tit-for-tat debate here, just want the other side of the story.

I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources that it will be useful to my friend, rather than mere gainsaying of his argument), because I am mostly a decision theory / probability person. Do these works of psychology and neuroscience really show that emotion governs human decision making? What are some good neuroscience papers on the subject, and how do Bayesians respond to them? It may be that everything he mentions above is a correct assessment (I don't know, and don't have time to read the books right now), but that it is irrelevant if you want to make good decisions rather than merely accept the kinds of decisions we already make.

ETA: Amplifying that, the cognitive flaws your friend refers to are evidence for not-3, evidence which I expect we would agree with, but this does not contradict 1 and 4, which I expect we also agree with. On the other hand, 2 and not-3 do not sit well together. If, according to 2, the brain is a machine for performing Bayesian computation, why are the results in humans so strikingly non-Bayesian?
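To make the "strikingly non-Bayesian" part concrete, here is the standard base-rate-neglect example worked through Bayes' theorem (the numbers are the usual textbook ones, chosen purely for illustration, not from anything in this thread):

```python
# Base-rate neglect, the classic demonstration: a disease affects 1 in
# 1000 people; the test detects it 99% of the time and false-positives
# 5% of the time. What is P(disease | positive test)?
p_disease = 0.001
p_pos_given_disease = 0.99
p_pos_given_healthy = 0.05

# Bayes' theorem: P(d | pos) = P(pos | d) P(d) / P(pos)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # prints 0.019
```

Most people, asked informally, answer something near 95%; the Bayesian posterior is under 2%, because the low base rate dominates. That gap is exactly the kind of result the "humans are non-Bayesian" literature keeps finding.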

I will reply to this a little more when I get some free time, but another paper which might be filed under proposition 2 is here: < http://www.svcl.ucsd.edu/publications/journal/2009/pami09-Sal.pdf >. This, along with the work on Bayesian surprise by Itti and Koch at USC, does a good job of showing that Bayesian models of focus-of-attention mechanisms in the mammalian brain perform well both computationally and experimentally.
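For anyone who hasn't seen it, the Itti–Koch notion of "Bayesian surprise" is just the KL divergence from one's prior over world models to the posterior after an observation: data are surprising to the extent that they move your beliefs. A toy sketch (the two-hypothesis setup here is mine, purely for illustration, not taken from either paper):

```python
import math

# Two hypotheses about a patch of the visual field, with a uniform
# prior: "bright" predicts a flash with probability 0.9, "dark" with 0.1.
prior = {"bright": 0.5, "dark": 0.5}
likelihood = {"bright": 0.9, "dark": 0.1}  # P(flash | hypothesis)

# Posterior after actually observing a flash, by Bayes' rule.
evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

# Bayesian surprise = KL(posterior || prior), here measured in bits.
surprise = sum(posterior[h] * math.log2(posterior[h] / prior[h])
               for h in prior)
print(round(surprise, 3))  # prints 0.531
```

The attentional claim is then that the same quantity, computed over a saliency model, predicts where mammalian gaze goes; the sketch above only shows the arithmetic of the surprise measure itself.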

Echoing other comments, it seems important to understand what your friend means by Bayesianism. There is a specific school of psychologists, called "Bayesians", who try to model human behavior using Bayesian statistics. Without knowing anything else, it seems likely that this is what your friend thinks of when he hears the word "Bayesianism", so the evidence he pointed out would indeed be strong evidence that the Bayesian school in psychology will not be able to fully explain human behavior without significantly modifying its methods. (Note, however, that Bayesian techniques could still be fundamentally important for modeling certain aspects of human behavior; see for instance RichardKennaway's links.)

The essential issue here seems to be that your friend is claiming that because humans aren't perfect Bayesians, Bayesianism is somehow philosophically wrong. But even if human cognition is severely flawed, that has no bearing on whether Bayesianism is the better approach. Note that if your friend's argument were valid, it would apply not just to Bayesianism but to any attempt to use statistics. It is pretty clear, for example, that humans pay far more attention to anecdotes than to actual statistics. By this argument, statistics themselves should be ignored.

One might be tempted to assume that if the human species evolved as so blatantly non-Bayesian, yet survived and took over the world, then Bayesianism is probably incorrect. Because if it were, then surely any species that evolved Bayesianism would have taken over the world instead of us. Keeping this in mind should take care of the "is vs. ought" fallacy, because what ought to be, would be.

I reject this argument, however, mainly because Bayesian calculations are simply intractable. Even when they are tractable, "Yikes! A tiger!!" is far more effective, Darwinistically, than the more explicit "Yellow, stripes, feline-shaped, looking at me, big, danger, so let's -AHRRGH CRUNCH GULP". And the fair number of false positives that the emotional quick guess generates probably wasn't very harmful in the ancestral environment.
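To put a number on "simply intractable": exact inference over a full joint distribution of n binary cues needs on the order of 2^n probabilities, while a quick-guess shortcut that treats the cues as independent (naive-Bayes style) needs only about n. A rough sketch of the parameter counts (the tiger framing is just for illustration):

```python
# Rough parameter counts for inferring "tiger?" from n binary cues
# (stripes, size, movement, ...), per class.

def full_joint_params(n):
    # A full joint distribution over n binary cues has 2**n outcomes,
    # hence 2**n - 1 free probabilities.
    return 2 ** n - 1

def naive_bayes_params(n):
    # Treating cues as conditionally independent needs one probability
    # per cue.
    return n

for n in (5, 10, 20, 30):
    print(n, full_joint_params(n), naive_bayes_params(n))
```

At thirty cues the exact computation already requires on the order of a billion parameters, which is a crude but fair caricature of why evolution would favor the fast, emotionally-tagged heuristic over the explicit calculation.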

Because if it were, then surely any species that evolved Bayesianism would have taken over the world instead of us.

Humans evolved a step towards being capable of Bayesian reasoning, and we completely overthrew the natural order in an evolutionary instant. We should not expect to see close approximations of Bayesian thinking (combined with typical goal-seeking behaviour) evolve, because when a species gets even vaguely close, it becomes better at optimising than evolution is!

I believe the above comments had essentially the right idea. He is trying to say that, since most human cognition is subconscious and governed by emotion (i.e., we don't have direct access to it), it is philosophically wrong to try to formulate processes for decision making in the Bayesian/decision-theoretic sense. He claims that not-3 implies that 1 is useless and 4 will only give incorrect results, where I've borrowed the numbered propositions from RichardKennaway's comment.

He claims that not-3 implies that 1 is useless and 4 will only give incorrect results

By that, it seems like he is saying that being right is useless, and that the right answer yields incorrect results... that first bit would be rather horrible if true, and the last bit is just silly outside of the context of trying to nail down exactly how and why not-3 is true.

Could you rephrase 'philosophically wrong' (as your friend means it) more clearly? I don't want to attack a straw man, but it seems like it could be going in a ridiculous direction.

I think he is trying to say that it's more important to understand how humans currently perform cognitive tasks than to try to formulate a robust way to perform cognitive tasks that has mathematical guarantees about accuracy. This same friend has some vague, new-agey type beliefs about "emotions" and I think he takes the evidence about emotions being major players in decision making a little too far. He sees attempts to quantify what he calls "human decision making" with the same tools that we use for "computational decision making" as fruitless. For him, aesthetics, for example, cannot be analyzed down to the level of preference orderings, utility functions, and Bayesian decision rules. Aesthetics simply are what they are, directly encoded by emotions and not "understandable" in any symbolic sense.

I don't think he would object to using Bayesian reasoning to make the best choices for, say, financial planning. But if you claimed that future neuroscientists would ever be able to quantitatively understand what "love" is (or even consciousness), he would reply that, since human cognition is subconscious and non-Bayesian, you'll never explain it in a way that lets knowledge of Bayesian decision theory help a human make "human decisions" any better than their emotions would by default.

I think the problem is that he makes a dichotomy between "human computation" and "machine computation" (owing perhaps to the fact that his background is in philosophy and psychology, with much less emphasis on the math and physics relevant to cognition). Not only this, but he further claims that the neuroscience evidence for emotions as major players in human cognition also favors treating the two sides of his dichotomy with fundamentally different tools, and that Bayesian reasoning is not a successful normative model for human computation. To him, weighing a Bayesian decision against a human-emotion decision is a "silly" thing to do.

Yes, I already try to adhere to that. The reason I came to LW to ask around was that I just wanted to make a succinct reply containing some relevant reading materials and a short summary of the error going on. Emotions do play a significant role in human cognition, and we are not, by any stretch, good Bayesian reasoners. But there's no special reason we have to treat emotions and our current modes of cognition as if they were innately good or fundamentally not understandable. Some evidence supports this, as mentioned in many of the comments above. I'm very grateful to have LW as a resource for cases like this. I think my own explanation of all of the above would probably have been mostly "correct" but horribly imprecise, nebulously flowing around lots of peripheral topics that wouldn't directly bear any fruit.

If your friend knows that you study Bayesian reasoning, why would he expect point 1 to be news to you? Bayesian statistics, etc., deserve conscious study in part because they require conscious study. If we already reasoned perfectly, there'd be no rational reason to try to improve it.

Point 2 is more interesting, but only because it superficially resembles a much more serious concern. If psychologists are sure that our reasoning "is mostly unconscious", that's still not the same as saying that it "must always be mostly unconscious despite conscious attempts at intervention". The former proposition just means that part of instrumental rationality needs to be learning how to identify the most critical decisions to treat with less intuitive and more formal reasoning. The latter proposition would mean that studying rationality is pointless to begin with, but it would also mean that studying engineering is no better than "guesstimate the cathedral designs and keep the results that don't collapse". Clearly we're capable of improving over unstudied human reasoning in some limited forms of decision making, and it makes sense to try and push those limits out as far as we can.

And while it may sound off the wall to someone like your friend outside the context of this site: even if some techniques for rationally improving decision-making are too hard for an un-assisted human brain to implement, that's just similar to the fact that many techniques for modern industry are too hard for un-assisted human muscle. That just makes them more worth study, as a necessary prerequisite for figuring out how to design the assistance.

I don't understand how such facts about humans can have any bearing on the correctness of Bayesian probability. An analogy: humans are also bad at mental arithmetic, but that doesn't mean math is wrong and we ought to replace it with something more emotional.