Tuesday, 31 July 2018

Colin Klein works on the philosophy of neuroscience at The Australian National University, and is interested in delusions and related phenomena.

Colin Klein

Peter Clutton is a graduate student in philosophy at The Australian National University, working on delusions and beliefs.

Peter Clutton

Vince Polito is a Postdoctoral Research Fellow in the Department of Cognitive Science at Macquarie University, interested in belief formation, self representation, and altered states of consciousness.

Vince Polito

Conspiracy theorists are often thought to be distinctively irrational. When you picture a conspiracy theorist, you might imagine someone scouring the internet and joining dots between seemingly unrelated events, constructing a grand web of interconnected conspiracies in order to explain the mundane chaos of everyday life. Intuitively, it seems, there must be some fundamental epistemic or psychological error behind such activity.

For example, it has sometimes been claimed that conspiracy theorists possess a “monological” belief system, in which belief in one conspiracy leads to belief in others, until eventually a person explains every significant event, however unrelated, through the same conspiracy “logic”. This conception of conspiracy theorists has also influenced the philosophical and psychological literature on delusions.

As philosophers and cognitive scientists interested in rationality, beliefs, and delusions, we found this picture highly suspect. Surely there can be many ways into conspiracy beliefs, just as there can be many ways into other kinds of beliefs. Perhaps the “monological” view arises from a selection bias: typical “monological” conspiracy theorists do exist, but their voluminous, florid outpourings tend to stand out more, obscuring a greater heterogeneity among conspiracy believers generally.

Thursday, 26 July 2018

This year’s Minorities and Philosophy (MAP) @ Bristol conference theme was ‘Public Philosophy’. We hosted a number of talks exploring the conceptual and practical issues related to the idea of philosophy as a ‘public’ endeavour. Four current Philosophy PhD students were responsible for organizing the event: Chengxiao Dang, Chia-Hung Huang, Ji-Young Lee, and Denise Vargiu. We would also like to acknowledge Minorities and Philosophy, The Marc Sanders Foundation, and the University of Bristol Philosophy Department for kindly supporting this event.

We commenced our morning session with a talk from Jane Gatley, on justifying philosophy in secondary schools. She discussed how justifying teaching philosophy through the positive benefits associated with the P4C movement risked ‘conflating claims about philosophy with claims about the distinctive P4C pedagogy’. The benefits attached to P4C might have more to do with dialogue and child-centered learning, rather than philosophical content. She also suggested that we need greater justification for bringing philosophy into the secondary education curriculum, and that arguing for philosophy to take up valuable school time would require careful tandem consideration of the aims of secondary education.

Our second speaker, Chia-Hung, explored the question of how academics ought to participate in ‘public philosophy’. Against the more widely held view that philosophy accessible to the general public helps people become ‘better citizens’ and enhances their critical thinking skills, he claimed that exposure to ‘public philosophy’ tends to be too unstructured and sporadic to be helpful. Instead, it is the civic duty of academically trained philosophers to offer their philosophical expertise by way of individual consultations on matters that capture their recipients’ interest.

Next, Professor Chris Bertram gave us a presentation on the ways that hate speech and immigration policies contribute to what he terms a ‘hostile environment’. Drawing on philosophy of language, he discussed some of the very visible and public ways that this hostile environment manifests in day to day life. It is, as he mentioned, not just politicians and the state at large who contribute - ordinary citizens who occupy roles as employer, landlord, etc. are made complicit in the perpetuation of this hostile environment. His claim was that this hostile environment constitutes a form of exclusion and discrimination, depriving many immigrants of full citizenship.

In her 2015 “Social Structure, Narrative, and Explanation,” Sally Haslanger raises a highly influential critique of the philosophical literature on implicit bias. Her target is a kind of explanatory and normative individualism: she argues that implicit bias is neither necessary nor sufficient for explaining ongoing injustice; and that it wrongly locates the badness of injustice in individuals rather than social structures.

She demonstrates this with several examples in which inequalities arise even when it is stipulated that no one has racist, sexist, or inegalitarian attitudes of any kind: a husband and wife whose decision to make her the primary caregiver (because only she receives parental leave) results in unequal incomes, a teacher whose fair disciplining of a Black student leads him and his non-White friends (because they have experienced long patterns of racism) to disengage and perform badly in her classroom, and a worker who loses his job due to the city’s cancelling (because they are strapped for cash) his bus route to work.

One of the things I do in my paper is defend a certain kind of normative individualism by arguing that we need a theory of individual responsibility in order to hold particular persons accountable for the day-to-day work of collectively organizing to transform social structures. For example, if a political ally of mine makes an implicitly biased remark, I face the immediate problem of how to feel and respond to that particular person – should I call them out? Should I modify my beliefs or attitudes toward them? Should I continue working with them? A theory of individual responsibility can offer me some guidance in answering these live, practical questions in ways that a structural theory might not.

In the paper I also propose a theory of implicit bias that draws on Pierre Bourdieu’s work on social structures. My suggestion is that we can understand implicit biases to be themselves a type of social structure. The two key Bourdieusian concepts here are field and habitus. Bourdieu conceives of social structures as “fields,” that is, as configurations of relationships between social positions.

Agents occupy different positions in the field according to how much and what sorts of capital (i.e., social, material, and cultural resources) they possess. Over time, an agent will acquire a “habitus” from the field, which Bourdieu describes as “schemes of perception, thought and action [that] tend to guarantee the ‘correctness’ of practices and their constancy over time.” There is a kind of mutually reinforcing fit between field and habitus; and while habitus is “deposited” in agents by a field, that field persists only insofar as people remain invested in acting in accordance with its rules. According to Bourdieu:

Social reality exists, so to speak, twice, in things and in minds, in fields and in habitus, outside and inside of agents. And when habitus encounters a social world of which it is the product, it is like a “fish in water”: it does not feel the weight of the water, and it takes the world about itself for granted. . . . It is because this world has produced me, because it has produced the categories of thought that I apply to it, that it appears to me as self-evident.

This seems to me like a wonderfully apt description of implicit social cognition. In other words, implicit biases are not ordinary “attitudes.” They are the thing in our heads that “fits” us into social structures – that locks us into forms of behavior which sustain those structures over time, because implicit biases are themselves a species of micro-level social structure that interlocks with the macro-level field.

As Omar Lizardo puts it: “The habitus is itself an objective structure albeit one located at a different ontological level and subject to different laws of functioning than the more traditional ‘structure’ represented by the field” (emphasis mine). This is what it means for social reality to exist “twice.”

The upshot, I argue, is that part of the work of structural transformation begins from the inside out, with the construction of new habituses (i.e. new social structures) that serve to challenge existing injustice. Those of us committed to working together in the long run for a radical transformation will need practices of self-reflexive criticism and constant inspection – our “habit-busting habits” – to become second nature.

That is, we will need to develop “radical habitus” or “habitus of resistance” (Clarke 2000) alongside anti-oppressive fields that cultivate them. Becoming aware of our own implicit biases and putting measures in place to block them, I believe, is an important part of this process.

Organised by Elizabeth Barrett (Consultant in Liaison Child and Adolescent Psychiatry at Children’s University Hospital) and Melissa Dickson (Lecturer in Victorian Literature at the University of Birmingham), the conference, and project more generally, focuses on two simple questions: Do doctors and patients speak the same language, and how can we use literature to bridge the evident gaps? In what follows, I summarise just some of the talks and workshop sessions.

How do cultural norms and expectations shape diagnosis and the experience of illness? Melissa Dickson showed us that, in 19th Century Britain, there were multiple literary and medical accounts of a psychosis-like state brought about by…green tea. It was an unfamiliar substance from a culture about which many British people were suspicious, and which, unlike black tea, did not arrive through an established colonial trade route. Following this, and other examples (my all-time favourite being “bicycle face”) we were encouraged to think about how contemporary cultural expectations might shape experiences and clinical practices around different illnesses.

Tuesday, 17 July 2018

Sophie Keeling is currently a philosophy PhD student at the University of Southampton. She primarily works on self-knowledge, which has allowed her to research a range of topics in epistemology, philosophy of mind, and philosophy of psychology. Sophie’s thesis argues that we have a distinctive way of knowing why we have our attitudes and perform actions that observers lack. She gives a brief overview here.

Confabulation is motivated by the desire to have fulfilled a rational obligation to knowledgeably explain our attitudes by reference to motivating reasons.

(Following others in the epistemological literature, I term the reason for which we hold an attitude our ‘motivating reason’ for it).

I shan’t seek to define confabulation here (a task in its own right) but instead note the subtype I’ll explain. I’m interested in cases in which subjects falsely explain their attitudes (e.g. beliefs, desires, preferences) in response to prompting. We see a paradigm example of this in Nisbett and Wilson’s (1977) experiment in which they arranged four pairs of identical stockings on a table and asked individuals which they preferred and why. Subjects generally picked a pair towards the right of the table. Instead of noting the real cause of their preference – the position of the stockings – or admitting ignorance, subjects gave incorrect explanations. That is, they confabulated an answer, such as the pair’s supposedly superior ‘knit, sheerness, and weave’. Indeed, this is a commonplace phenomenon. We’ve all at one point adopted a stance which we’ve rationalised after the fact. (E.g. I kid myself that I prefer the expensive branded yogurt over the supermarket offering because it’s tastier, and not because of the clever marketing).

The paper then introduces three explananda for our explanation of this phenomenon, and argues that the two main options in the literature fail to account for all these. For example, confabulation is first-personal – we make these sorts of mistakes more readily with ourselves than others. (Here I draw on work such as Pronin et al. 2002 concerning the ‘bias blind spot’). Yet some accounts (e.g. Nisbett and Wilson 1977, Carruthers 2013, and Cassam 2014) struggle to address this important asymmetry in our mistaken self-ascriptions.

I propose an explanation which does account for all three explananda. It appeals to what I call the knowledgeable reasons explanation (KRE) obligation:

The obligation to knowledgeably self-ascribe motivating reasons when explaining one’s own attitude.

We shouldn’t confuse this rational obligation with moral ones. It just captures the thought that I ought to, for example, explain my belief that it will rain by citing a motivating reason, such as the weather forecast. That we bear the KRE obligation is independently plausible: I seem to be doing something irrational and criticisable if I instead answer the question ‘why?’ with ‘no reason’, ‘I don’t know’ or ‘I’m generally pessimistic’.

I use KRE in the following explanation:

We confabulate, and indeed confabulate with the content we do, because we desire to have fulfilled the KRE obligation (i.e. the obligation to knowledgeably explain our attitudes by reference to motivating reasons)

We can now explain the stockings experiment in the following way. The desire to have fulfilled the KRE obligation leads the subjects to confabulate an answer in the absence of a true one they can provide – they did not form their preference on the basis of reasons. And further, they specifically self-ascribe the reason that the stockings were sheerer, say, because it is a plausible motivating reason. This proposal accounts for the explananda in a non-ad-hoc way. For example, confabulation is first-personal because we desire to have fulfilled the obligation to knowledgeably explain our own attitudes by reference to motivating reasons, not other people’s.

The final section raises an upshot for understanding self-knowledge. Contrary to popular assumption, confabulation cases give us reason to think we have distinctive access to why we have our attitudes. What exactly our special access amounts to, though, must be left for further papers!

Thursday, 12 July 2018

On 10th and 11th May in Senate House London Michael Hannon and Robin McKenna hosted a two-day conference on Political Epistemology, supported by the Mind Association, the Institute of Philosophy, and the Aristotelian Society. In this report I focus on two talks that addressed themes relevant to project PERFECT.

Robert Talisse

On day 1, Robert Talisse explained what is troubling about polarisation. In the past Talisse developed an account of the epistemic value of democracy in terms of epistemic aspirations (rather than democratic outcomes). In a slogan, "the ethics of belief lends support to the ethos of democracy". We can see this when we think about polarisation.

There are two senses of polarisation: (1) political polarisation and (2) belief (or group) polarisation. Political polarisation is the dropping out of the middle ground between opposed ideological stances. That means that opposed stances have fewer opportunities to engage in productive conversations. Belief polarisation, by contrast, is something that happens in like-minded political groups and concerns the doxastic content of people's beliefs. People tend to adopt a more extreme version of their original belief when they discuss its content with like-minded people.

The problem is that the radicalisation of one's views does not depend on acquiring more or better reasons for one's original views, but on the social dynamics of group discussion. Should people then discuss their views only with their opponents? Not really, as empirical evidence suggests that heterogeneous deliberation inhibits political participation.

What is wrong with belief polarisation, and how can we address the problem? Belief polarisation impacts not only the content of the belief and one's confidence in it, but also one's estimation of the people who hold opposing beliefs. So the belief-polarised person becomes increasingly unable to see nuances in the opposing view. Moreover, more and more of the behaviours of the opponents are seen in the light of their political views, and the opponents are seen as diseased or corrupted.

Finally, once the belief-polarised person knows that an expert holds a different political view, the expert's opinion is rejected, even if the expert's advice does not concern their political stance. It is almost as if a sense of ideological purity compromises people's capacity to trust experts with different political views.

How can we overcome such challenges? Preventing belief polarisation is different from depolarising beliefs. More democracy may be good for prevention of belief polarisation. But once people are belief-polarised then more democracy does not seem to help. Maybe we sometimes need less democracy! Exposure to the other side entrenches polarisation.

A range of non-political behaviours and social spaces (consumer behaviours, community centres, workplaces, religious affiliations) become expressions of ideological stances which means that people are less and less likely to mix with people who have opposed political views. Humanising interactions across political divides are increasingly less likely to happen. This is due to the political saturation of social space.

So one possible solution is to carve out social spaces that are not already politically saturated. There must be activities where political affiliations do not matter.

In the 50s and 60s, women joining the U.S. workforce and members of the broader society in some sense lacked the conceptual skills for making sense of sexual harassment. Through the practices of the U.S. women’s liberation movement, especially through the organization of consciousness raising groups and speak-outs in the early 70s, feminists developed these resources and encoded them into the legal system. According to Miranda Fricker this is an instance of recognizing and to some extent addressing a problem of hermeneutical injustice, which occurs when members of a certain group are unjustly prevented from developing and distributing important conceptual skills. But what does it mean to lack conceptual skill of this kind and how do we develop new skills and overcome the hermeneutical injustice?

Suppose that John and Javier witness the inappropriate behavior of a fellow colleague toward another, but only Javier has the relevant conceptual skills for making sense of sexual harassment. This means he will be reliably and resiliently able to arrive at correct judgements about what occurred and make further judgements about its significance. The latter allows him to engage in personally and politically important projects like connecting sexual harassment with workplace discrimination and gender-based oppression and so on.

In a community full of individuals like John, however, these skills will not be readily available, and that will not only prevent the problem from being addressed but also in some cases even prevent victims from fully understanding their own experience. In order to transform a world full of Johns into a world full of Javiers, marginalized individuals need to come together to develop and then distribute new conceptual skills. In the present case, women who were victims accomplished this by giving a name to their shared experience and using it as a tool for developing and then marketing the concept.

To develop a skill in general is to engage in practices of self-regulating one’s performances within a task domain, and this is to engage in intelligently guided practices of trial and error. By settling on the term “sexual harassment”, women first of all simplify this practice by creating a perceptual cue that primes categorization by activating top-down expectations about the perceptual environment.

Secondly, by encoding the experience in a public linguistic format they aid metacognition by providing a stable resource for further reflection and experimentation. In marketing the concept, however, marginalized individuals will often face resistant perspectives and an unequal distribution of epistemic power, and this is where the problem of what Gaile Pohlhaus calls willful hermeneutical ignorance shows up. Dominantly situated knowers may lack the incentive and conceptual background required to engage in the learning process that leads to reliable and resilient seeing. Moreover, by refusing to gain the relevant conceptual skills, the dominantly situated individual ensures that they will continue to lack evidence of their cognitive deficiency, which they can then use to further justify their refusal to learn the skills. Further research might expand on the work of José Medina to develop strategies for overcoming this specific form of meta-ignorance.

Thursday, 5 July 2018

The first speakers of the day were Lisa Bortolotti and Sophie Stammers from project PERFECT, who presented a picture of confabulation on which clinical and non-clinical cases are continuous and have a similar structure.

Bortolotti talked about the epistemic costs and benefits of confabulation. She argued that we should distinguish between innocent and guilty instances of confabulation depending on whether the person confabulating has access to the information that grounds an epistemically less problematic explanation, and on whether the ill-groundedness of the explanation spreads to the person's further beliefs.

Stammers focused on the question of why we confabulate. Do we aim to provide a causal theory about what is going on, as Max Coltheart has recently argued? Or are we imposing meaning and attempting to develop a narrative understanding of the relevant events, as suggested by Örulv and Hydén? She argued that both accounts get something right about confabulation.

Sophie Stammers

Andrew Spear (Grand Valley State University) discussed the phenomenon of gaslighting as an instance of confabulation which is not epistemically innocent because (1) it does not make the acquisition of true beliefs more likely and (2) it does not enhance the coherence of the self-concept.

In gaslighting both the perpetrator (gaslighter) and the victim confabulate. The core feature of the phenomenon is that the gaslighter undermines the victim’s self-trust. Such a goal is pursued by manipulating and deceiving. The motive of the gaslighting is to destroy the possibility of disagreement in order to challenge the victim’s perception of herself as a locus of autonomy.

Spear argued that all gaslighting has an epistemic dimension. The method of the gaslighter involves providing false but compelling evidence for the victim's lack of understanding. The victim needs to decide whether the gaslighter is more trustworthy than her own cognitive faculties.

The gaslighter tells himself and the victim a story to cover up his real motivations: “This is really the best thing for her”. The victim tells herself a story about the gaslighter having her best interests at heart. This creates an epistemically poisonous feedback loop. In this case, then, Spear argued, the confabulatory explanations victims and perpetrators engage in are not epistemically innocent because they do not deliver any epistemic benefit.

Andrew Spear

Anna Ichino (Bar Ilan University) focused on the form of confabulation that occurs in superstitious or magical thinking and in conspiracy theories. Superstitious thinking departs from scientific thinking (e.g. it does not rule out action at a distance) and sees meanings, reasons, and agency where there are none. The core features of confabulation are falsity or ill-groundedness, lack of deceitful intentions, motivational elements, and a gap-filling role. Superstitious thinking shares these four core features: beliefs or practices are ill-grounded, but there is no intention to deceive.

People have motivational reasons to confabulate: (a) they are motivated to confabulate rather than saying “I don’t know”, and (b) they are motivated to form a confabulation with a specific content (e.g. an explanation that implies that one is competent). Motivation of type ‘a’ is related to gap filling.

Superstitions and confabulations are equally characterised by the search for coherence beyond the evidence available to us. The gaps we want to fill in confabulation and superstition are explanatory gaps, and the explanations we tend towards are those that feature reasons. So the causal explanations we prefer are those that are psychological and mentalistic.

Ichino argued that superstitious thoughts are better interpreted as imaginings rather than beliefs—based on the view that they are not constrained by evidence and are responsive to our will; they are locally coherent and selectively integrated; and they can motivate action.

Finally, Ichino considered whether we can still talk about epistemic innocence if we think of superstitious thoughts as imaginings. She concluded that we can do that, as long as either we characterise the epistemic faults of superstitious thoughts as metacognitive errors (we do not realise that they are imaginings) or we come up with epistemic norms that apply to imaginings and identify where the faults might be (not all stories are equally good).

Before taking up my current post as Lecturer in Philosophy, I was a Postdoc on Lisa Bortolotti’s AHRC project on the Epistemic Innocence of Imperfect Cognitions (2013-14). In that year we worked together in developing the notion of epistemic innocence, which we thought could be of use in thinking about the epistemic status of faulty cognitions. We understood a cognition as epistemically innocent when it (1) confers some significant epistemic benefit on the subject (Epistemic Benefit Condition), which could not otherwise be had, because (2) alternative, less epistemically faulty cognitions are in some sense unavailable to her at that time (No Alternatives Condition).

As part of that project, we wrote two papers in which we put that notion to use in discussion of explanations of actions guided by implicit bias (Sullivan-Bissett 2015) and motivated delusions (Bortolotti 2015). Since then, a lot of work has been published which appeals to this notion, in particular, in discussions of delusions in schizophrenia (Bortolotti 2015), psychedelic states (Letheby 2015), social cognition (Puddifoot 2017), clinical memory distortions (Bortolotti and Sullivan-Bissett forthcoming), and false memory beliefs (Puddifoot and Bortolotti forthcoming).

In my paper I take a slightly different approach. I do not seek to extend the concept of epistemic innocence to monothematic delusions; rather, I ask to whom it would matter if such states were epistemically innocent. In particular, if we find that monothematic delusions are (at least sometimes) good candidates for the status of epistemic innocence, to which theorists of monothematic delusion would this claim be open? I focus on the debate on monothematic delusion formation, in particular, that between one- and two-factor empiricists. I argue for the rather surprising conclusion that a judgement of epistemic innocence is licensed by both of these types of theory (albeit via different routes). Thus we find in the notion of epistemic innocence a unifying feature of monothematic delusions.