Thursday, 27 April 2017

In this post I am pleased to interview Thomas Sturm (pictured below), ICREA Research Professor at the Department of Philosophy at the Universitat Autònoma de Barcelona (UAB) and member of the UAB's Center for History of Science (CEHIC). His research centers on the relation between philosophy and psychology, including their history. Here, we discuss his views on empirical research on human rationality.

AP: The psychology of judgment and decision-making has been divided into what appear to be radically different perspectives on human rationality. Whilst research programs like heuristics and biases have been associated with a rather bleak picture of human rationality, Gerd Gigerenzer and his colleagues have argued that very simple heuristics can make us smart. Yet, some philosophers have also argued that, upon close scrutiny, these research programs do not share any real disagreement. What is your take on the so-called “rationality wars” in psychology?

TS: Let me begin with a terminological remark. I would like to refrain from further using the terminology of “rationality wars”. It was introduced by Richard Samuels, Stephen Stich, and Michael Bishop (SSB hereafter) in 2002, and I have used their expression too, without criticizing it. In academic circles, we may think that such language creates no problems, and I hate to spoil the fun. But because theories of rationality have such broad significance in science and society, there is a responsibility to educate the public, and not to hype things. Researchers are not at war with one another. Insofar as a dispute becomes heated, or fights for funding and recognition play a role, we should speak openly about this, tone down our language, and not create a show or a hype. We should discuss matters clearly and critically, end of story.

Now, I study this debate, which has many aspects, with fascination. It’s fascinating because these debates concern one of the most important concepts in science and social life, adding fresh perspectives to philosophical discussions that have occasionally become too sterile. And the debates are so interesting because they provide ample material for philosophy of science.

Tuesday, 25 April 2017

My name is Peter Pantelis. I study “theory of mind”—our ability to reason about other people’s mental states. Years ago, I became interested in an economic game called the Beauty Contest, because I think it taps into theory of mind very elegantly:

You are going to play a game (against 250 undergraduate psychology students). Each player will submit a whole number from 0 to 100. The winner will be the player whose number is closest to 2/3 of the mean number selected by all the players.

What number do you submit?

(I’ll wait for you to think about it for a moment)
What number should you submit, and why? Game theory says the rational strategy is for you to submit 0—and for everyone else to do the same. That’s what economists call the Nash equilibrium.[1] But in practice, virtually nobody submits the “rational” choice of 0. The average number selected is usually somewhere around 25-35.
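The logic behind the equilibrium is iterated reasoning: if every player picks at most 100, two-thirds of the mean can be at most about 66.7; a player who anticipates this picks at most 66.7, driving the target below 44.5, and so on, all the way down to 0. A minimal sketch of that reasoning (the 30-step cutoff is arbitrary):

```python
# Iterated best responses in the 2/3-of-the-mean game: each round of
# "I know that you know..." multiplies the highest sensible guess by
# 2/3, so the sequence converges to the Nash equilibrium of 0.
def iterated_guess(start=100.0, factor=2 / 3, steps=30):
    guess = start
    for _ in range(steps):
        guess = factor * guess
    return guess

print(iterated_guess())  # after 30 rounds the guess is below 0.001
```

Only infinitely iterated reasoning reaches the equilibrium exactly; any finite depth of reasoning leaves a positive guess.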

People also give a wide variety of responses, and interpreting this (non-normative) pattern is where the behavioral economists come in. Nagel (1995) and Camerer and Fehr (2006) have modeled this task in ways that both fit the empirical data well and capture the various intuitions people bring to this game. Their papers are some of my favorites, and I summarize their approaches in our recently published Cognition paper.

They posit that players bring different levels of strategic sophistication to this game. Although many players answer completely randomly or arbitrarily, for most players a critical part of deciding which number to select is making a sensible prediction of which numbers the other players are likely to select. And to do so sensibly, you must estimate just how sophisticated your opponents are, and possibly what beliefs they may hold about you in turn.
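Nagel's "level-k" idea can be sketched in a few lines. The numbers below are illustrative rather than the fitted parameters from Nagel (1995) or Camerer and Fehr (2006): a level-0 player answers arbitrarily (averaging about 50 on the 0-100 scale), and a level-k player picks 2/3 of what it expects level-(k-1) players to pick:

```python
# Illustrative level-k sketch of the Beauty Contest. Level 0 picks
# arbitrarily (mean 50); level k best-responds to a population it
# believes to consist of level k-1 players.
def level_k_choice(k, level0_mean=50.0, factor=2 / 3):
    choice = level0_mean
    for _ in range(k):
        choice = factor * choice
    return choice

print([round(level_k_choice(k), 1) for k in range(4)])
# levels 0-3 give choices of 50.0, 33.3, 22.2, 14.8
```

Level-1 and level-2 choices (about 33.3 and 22.2) already bracket the empirically observed averages, which is why a mixture of low-sophistication players fits the data better than the equilibrium prediction of 0.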

If that doesn’t engage your theory of mind ability, I don’t know what does.

Thursday, 20 April 2017

We are very excited that on 5th May 2017 Project PERFECT will be holding its second annual workshop at Jesus College, Cambridge. The workshop will feature leading experts in the field of philosophy of memory. The talks will focus on a wide range of fascinating issues that dominate contemporary research on memory, and will be of interest to philosophers of mind, philosophers of psychology, epistemologists, and psychologists, as well as other cognitive scientists interested in how we remember the past.

Issues to be covered in the talks include how memory can generate knowledge; how false and distorted memories can be useful features of ordinary cognition; the nature of experiential memories; whether we can be immune from error due to misidentifying ourselves in a memory; and the role of shared memories in relationships.

Many of the talks will have an interdisciplinary angle, highlighting how recent psychological research—e.g. on false and distorted memory, and on dementia and grief—should impact on our understanding of human memory.

Two of the talks will focus directly on a concept at the very heart of Project PERFECT: epistemic innocence. This is the idea that some false and misleading cognitions bring epistemic benefits that could not be possessed in the absence of those cognitions.

Kirk Michaelian will examine the claim that memory can generate new knowledge. He will explore two views that are consistent with this claim, arguing that the views, when combined, support the claim that episodic memories (our memories of individual incidents) are misleading, but in a way that makes them epistemically innocent.

On a similar theme, I will present work written in collaboration with Lisa Bortolotti showing that three memory distortions famously studied in the psychological literature can be explained in terms of the presence of cognitive mechanisms that are epistemically innocent.

Dorothea Debus will explore the nature of memories with experiential qualities. She will argue that we give this type of memory special weight, and she will illustrate how we are both passive and active with respect to these memories. We are active because we can prompt ourselves and others to remember events. We are passive because the memories often just come to us.

Jordi Fernández will examine the claim that one cannot have an inaccurate memory as a result of misidentifying oneself in the memory. He will consider how psychological research on observer memories (when people seem to recall a scene in which they featured from the perspective of an observer) and on disowned memories might be taken to challenge the claim. He will then respond to the challenge by drawing on the same psychological research to offer a positive view in support of the target claim.

John Sutton will focus on how the ways people share memories are reflected in, and can come to constitute, specific close relationships. He will consider both ongoing relationships and the end of relationships, drawing on psychological studies of the role of memory in dementia and grief.

Bounded rationality has been a hard-to-digest notion in economics and the other social sciences since its introduction by Herbert A. Simon in the middle of the last century. How could ‘rationality’ be ‘bounded’? And – as a typically related concern – would this imply that the social sciences should abandon any normative horizon, giving way to an unappealable ‘irrationality’?

Thursday, 13 April 2017

Sometimes, we are most forcibly struck by what isn’t there. If I play you a series of regularly spaced tones, then omit a tone, your perceptual world takes on a deeply puzzling shape. It is a world marked by an absence – and not just any old absence. What you experience is a very specific absence: the absence of that very tone, at that very moment. What kind of neural and (more generally) mental machinery makes this possible?

There is an answer that has emerged many times during the history of the sciences of the mind. That answer, appearing recently in what is arguably its most comprehensive and persuasive form to date, depicts brains as prediction machines – complex multi-level systems forever trying pre-emptively to guess at the flow of information washing across their many sensory surfaces.

According to this emerging class of models, biological brains are constantly active, trying to predict the streams of sensory stimulation before they arrive. Systems like that are most strongly impacted by sensed deviations from their predicted states. It is these deviations from predicted states (‘prediction errors’) that here bear much of the explanatory and information-processing burden, informing us of what is salient and newsworthy in the current sensory array. When you walk back into your office and see that steaming coffee-cup on the desk in front of you, your perceptual experience (the theory claims) reflects the multi-level neural guess that best reduces prediction errors. To visually perceive the scene, your brain attempts to predict the scene, allowing the ensuing error (mismatch) signals to refine its guessing until a kind of equilibrium is achieved.
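As a toy illustration of error-driven correction (a cartoon of the idea only, not a model of any actual multi-level neural architecture), a single estimate repeatedly nudged by its own prediction error settles onto the incoming signal:

```python
# Toy error-driven updating: the estimate is corrected by a fraction
# of its current prediction error at each step, so the mismatch
# between prediction and signal shrinks towards zero.
def settle(estimate, signal, rate=0.3, steps=20):
    for _ in range(steps):
        error = signal - estimate    # prediction error
        estimate += rate * error     # error-driven correction
    return estimate

print(round(settle(0.0, 10.0), 2))  # ends very close to 10.0
```

The “kind of equilibrium” in the text corresponds to the error approaching zero; the learning rate and step count here are arbitrary choices for the illustration.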

Perception here phases seamlessly into understanding. What we see is constantly informed by what we know and what we were thus already busy (both consciously and non-consciously) expecting. Perception and imagination likewise emerge as tightly linked, since to perceive the world is to deploy multi-level neural machinery capable of generating a kind of ‘virtual version’ of the sensory signal for itself, using what the system knows about the world. Indeed, so strong is the tie that perception itself becomes a matter of what some theorists have called ‘controlled hallucination’.

My paper defends a specific method of evaluating rationality. The method is general and can be applied to choices, inferences, probabilistic estimates, argumentation, etc., but I’ll explain it here through one example. Suppose I’m worried about my friend Alex’s beliefs regarding current affairs. Her claims often seem far-fetched and poorly supported by evidence. As rationality experts who want to help, how should we evaluate Alex?

I embrace several components of the “ecological rationality” research program, which many readers will know from other posts. First, it’s important to move beyond particular beliefs and evaluate strategies that Alex uses or could use (“teach a man to fish …”). The starting point should be her present strategy, which I’ll call FB: Alex skims her Facebook feed and forms many beliefs primarily on the basis of news headlines there.

We probably all think FB is a terrible strategy, but we shouldn’t just tell Alex that and call it a day; it’s important to compare FB to some of her alternatives. For example, Alex could skim both her own feed and her (more conservative) brother’s, or she could watch TV news. Making concrete comparisons focuses our attention on what improvements are possible for Alex and doesn’t require us to posit – and identify – a sharp boundary between “rational” and “irrational” strategies.

Thursday, 6 April 2017

On the 25th and 26th of January 2017 the University of Sheffield hosted the 3rd in a series of 4 conferences on Bias in Context. This workshop was supported by the Leverhulme Trust as part of a research project grant on bias and blame. The previous two conferences in the series had focused on how to understand the relationship between psychological and structural explanations. This time the theme was Interpersonal Interventions and Collective Action. The goal was to look beyond individualistic approaches to changing biases and examine how interpersonal interactions and collective action can be used to combat bias.
Experts came from both Philosophy and Psychology and many of those attending also had practical experience of leading diversity training sessions.

The conference began with Dr Evelyn Carter (UCLA) giving a talk about her ongoing research into applying theories of motivation to confronting bias. She argued that it is crucial that we always confront bias, because speaking up sets norms. Across a number of studies her research team have found that feedback drawing attention to and condemning bias makes people more favourably disposed towards anti-prejudice norms. Their research indicates that both high- and low-confrontation feedback can be effective in promoting change.

After lunch Dr Gabriella Beckles-Raymond (Canterbury) talked about developing an ethics of social transformation. She argued that we are more aware of our biases, and in particular of what causes them, than is typically assumed. We cannot use ‘implicit’ as an excuse that we can’t act, or our society as an excuse that we are powerless. Instead we must move away from a focus that sees bias as a problem for individuals, to be solved by individuals, and use the ethics of empathy to address the deeper social, societal, and structural problems.

Then Dr Robin Scaife (Sheffield) presented the findings from a series of experiments examining the effects of administering in-person blame for implicit bias. The results indicate that, contra common assumptions that blaming increases bias or makes people resistant to change, communicating disapprobation for the manifestation of implicit bias has potential benefits and no costs. Those who had been blamed showed similar or slightly reduced levels of implicit bias, and had significantly stronger explicit intentions to change their future behaviour than those who had not been blamed.

This was followed by Dr Rosa Terlazzo (Kansas State), who discussed the idea that victims have a duty to other victims to resist their oppression. She argued that if this duty is to end the harm caused by oppressive norms, then it is beyond their power; but if the duty is merely not to contribute to the harms, then it will do little to limit oppression. Terlazzo argued that instead we should understand victims to have a duty to act as counter-stereotypical individuals, in order to weaken the self-regarding biases experienced by other victims and thereby mitigate, though not end, the harms of oppressive norms.

The first day of the conference ended with drinks and dinner which provided a great opportunity for all participants to discuss and share their perspectives on bias.

The second day of the conference began with a talk by Dr Yannig Luthra (UCLA) on social prejudice, co-authored with Dr Cristina Borgoni (Graz). He presented several arguments for the claim that an individual can count as violating norms of epistemic and practical rationality directly in virtue of drawing from epistemic and practical problems in her social context. The central idea is that rational life is social in much the same way it is temporal. Your view can be an extension of the views of others in the same way it can be an extension of your own past perspective. In both cases one can be implicated for importing rational failings. However, the diagnosis of the wrong must ultimately lie with the social sources of the individual’s bias.

Then Dr Joseph Kisolo-Ssonko (Nottingham) talked about collective intentionality, bias, and constituting a ‘we’. He argued that our capacity to think of ourselves as a ‘we’ is not the voluntary choice it is often presented as being. Instead it is underwritten by normatively loaded social and structural biases and power structures. Because of this, he concluded that biases do not just cause us to act irrationally on a pre-existing social stage; rather, they also ground what counts for us as collectively rational.

Professor Sally Haslanger (MIT) gave the final talk of the conference, titled ‘If racism is the answer, what is the question?’. She claimed that racism is best understood as a homeostatic system, constituted by the systematic looping of schemas and resources. Practices distribute things of value and disvalue, but in turn we learn about what different races “deserve” by looking around us at the results of these practices. Haslanger argued that to end racism we have to stop this systematic looping by dismantling society as we know it, and that in achieving this end changing attitudes should not be the highest priority, because other methods of intervention are likely to be more efficacious.

The conference concluded with Dr Jules Holroyd (Sheffield) and Dr Erin Beeghly (Utah) chairing a round-table discussion. Much of the discussion focused on how to resist and combat the way that recent election results in both the USA and the UK have been perceived as legitimising prejudice. Lacey Davidson (Purdue) made the exciting announcement that she has been awarded a Global Synergy Grant to transform Jenny Saul’s bias project website (www.biasproject.org) into an ongoing bias web resource. There were lots of promising suggestions for features which could make up part of this resource. Keep an eye out for developments on that front.

The fourth and final conference in the Bias in Context series will take place on the 12th and 13th of October at the University of Utah. The full program, details, and call for abstracts for the poster session will be available at www.biasincontext4.weebly.com

Tuesday, 4 April 2017

Vasco Correia (pictured above) is currently a Research Fellow at the Nova Institute of Philosophy (Universidade Nova de Lisboa), where he is developing a project on cognitive biases in argumentation and decision-making. In this post, he summarises a paper he recently published in Topoi.

This paper is an attempt to show that there are reasons to remain optimistic—albeit cautiously—regarding our ability to counteract cognitive biases. Although most authors agree that biases should be mitigated, there is controversy about which debiasing methods are the most effective. Until recently, the notion that critical thinking is effective in preventing biases appealed to many philosophers and argumentation theorists. It was assumed that raising awareness of biases and teaching critical thinking to students would suffice to enhance open-mindedness and impartiality. Yet the benefits of such programs are difficult to demonstrate empirically, and some authors now claim that critical thinking is by and large ineffective against biases.

We know that people have a tendency to expect that their future will be better than that of others, or better than seems likely on an objective measure of probability. But are they really expressing a belief that the future will be good, or should we see these expressions of optimism as hopes, or possibly even just expressions of desire for the future? Maybe when I say ‘My marriage has an 85% likelihood of lasting until death do us part’, what I am actually saying is ‘I really, really want my marriage to last.’ If what is expressed is a desire rather than a belief, we do not need to worry that we are systematically mistaken in our beliefs about the future, or that our expectations for our future are insufficiently sensitive to the evidence we have for what is likely to happen. In the paper, we argue that expressions of unrealistic optimism are indeed what they seem to be on the surface: beliefs about what is likely to occur. The fact that optimistic expectations are frequently not well supported by the evidence is a feature they share with many other beliefs, as we humans are not ideally rational in our belief formation.

Lisa Bortolotti

By definition, unrealistic optimism is a phenomenon that shows us to be insufficiently in touch with reality. However, establishing that we are in fact making an error when assessing the likelihood of future outcomes is surprisingly difficult. In some cases, whether an expectation is correct or not can only be established post factum. Only at the end of Euro 2016 could we say that Ronaldo’s belief that Portugal would win the European cup had been correct (if indeed he had this belief). Things are more complicated if what we know is that Ronaldo believed that Portugal had a 95% likelihood of winning the European cup. Is this belief validated by the fact that Portugal did win? Not necessarily, as his likelihood estimate may still have been too high given some objective measure of likelihood. Furthermore, it cannot be the case that probabilistic risk estimates are proven or disproven by later outcomes; otherwise, any risk estimate which isn’t either 0 or 1 would automatically be incorrect, and it would be impossible to say, before the actual outcome ensues, whether the error lay in being too optimistic or too pessimistic.
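One standard way to make this last point precise (a general statistical tool, not something the paper is described as using): probabilistic estimates are assessed in aggregate with a proper scoring rule such as the Brier score, rather than by any single outcome. A minimal sketch with made-up forecasts:

```python
# Brier score: mean squared gap between forecast probabilities and
# outcomes (1 = the event occurred, 0 = it did not). Lower is better,
# and the score only becomes informative across many forecasts.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 1, 1, 0, 0]                # 3 of 5 predicted events occurred
print(brier_score([0.95] * 5, outcomes))  # overconfident forecaster
print(brier_score([0.60] * 5, outcomes))  # better-calibrated forecaster
```

Over the whole set, the forecaster who always said 0.95 scores worse than the calibrated one who said 0.60, even though the hoped-for outcome occurred more often than not. That is the sense in which an optimistic estimate can be an error even when things turn out well.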

Bojana Kuzmanovic

But the question of whether an individual’s optimistic beliefs are false is in many ways less pressing than the question of whether the individual is justified in holding those beliefs given the evidence available to them. Are unrealistically optimistic beliefs epistemically irrational because they fail to take into account available evidence, either when the individual forms the belief or when they maintain it?