Monday, 31 March 2014

I think it would be helpful if at this stage the distinction between monothematic and polythematic delusions were introduced. A monothematic delusional condition is one where the patient has only a single delusional belief (or a small set of delusional beliefs related to a single theme). A polythematic delusional condition is one where the patient has many different and unrelated delusional beliefs. This distinction between monothematic and polythematic delusion was offered by Davies, Coltheart, Langdon and Breen (2001) and has been discussed by the philosopher Jennifer Radden in her book (Radden, 2010).

As Coltheart (2013) noted “Well-known cases of polythematic delusion include Daniel Schreber, a judge in the German Supreme Court, who believed that he had the plague, that his brain was softening, that divine forces were preparing him for a sexual union with God, and that this would create a new race of humans who would restore the world to a lost state of blessedness (Bell 2003); more details of his case are provided on pp. 50-52 of Radden’s book. Another example was the Nobel laureate John Nash (who was diagnosed with schizophrenia); among the delusional beliefs he held were that he would become Emperor of Antarctica, that he was the left foot of God on Earth, and that his name was really Johann von Nassau (Capps 2004).”

Thursday, 27 March 2014

On 17-18 March, the New Insights and Directions for Religious Epistemology project at the University of Oxford hosted a workshop on defeat and religious epistemology. Papers were given by Charity Anderson (Oxford), J. Adam Carter (Edinburgh), Maria Lasonen-Aarnio (Michigan), John Pittard (Yale Divinity School), Edward Wierenga (Rochester) and Michael Bergmann (Purdue).

The workshop began with a discussion of Anderson's paper on Defeat, Testimony and Miracles. Anderson considered the rationality of believing a miracle report in Hume's famous essay 'Of Miracles'. She discussed the role epistemic defeat plays in Hume's argument, and claimed that Hume's central point is not, as is often thought, that testimony is a weak source of knowledge, but rather that some kinds of testimony, namely testimony to the miraculous, are unreliable.

Wednesday, 26 March 2014

In this post I will suggest some reasons for thinking that at least some beliefs based on implicit bias are epistemically innocent. An implicit bias is a bias ‘of which we are not aware […] and can clash with our professed beliefs about members of social groups’, and which can ‘affect our judgments and decisions’ (Crouch 2012: 7). Empirical work has shown that such biases are held by ‘most people’, even those people who avow egalitarian positions, or are members of the targeted group (Steinpreis et al. 1999).

As Lisa and I have said in previous posts, we take a cognition to be epistemically innocent when it meets two conditions. Here are the conditions a belief based on implicit bias would have to meet in order to be epistemically innocent:

1. Epistemic Benefit: The belief delivers some significant epistemic benefit to an agent at a time (e.g., it contributes to the acquisition, retention or good use of true beliefs of importance to that agent).

2. No Relevant Alternatives: Alternative beliefs that would deliver the same epistemic benefit are unavailable to the agent at that time.

Tuesday, 25 March 2014

This is a response to Max Coltheart's contribution to the blog, posted on behalf of Phil Corlett.
Thank you Max. Your responses are enlightening. I do have a number of follow-up questions, if I may.

Follow-up to Q1 – If prediction error is intact in people with delusions, how would we observe the patterns of prediction error disruption in our data? These patterns have been consistent across endogenous (Corlett et al., 2007) and drug-induced delusions (Corlett et al., 2006), as well as in healthy people with delusion-like ideas (beliefs in telekinesis, for example) (Corlett & Fletcher, 2012). Importantly, these neural responses (aberrant prediction errors) correlated with delusion severity across subjects in these studies.

On the other hand, if prediction error must be intact for the 2-factor theory, do our data suggest that the 2-factor theory does not apply to delusions that occur in schizophreniform illnesses?

Sunday, 23 March 2014

I thank Phil for his illuminating questions about my post, and will attempt to answer them, in Q&A format (Q is Phil, A is me).

Q: Are you aligning prediction error with Factor 1 or Factor 2? It seems Factor 1, but I wanted to check – particularly since you align Factor 2 with the functioning of right dorsolateral prefrontal cortex, which, as you know, we’ve implicated in prediction error signaling with our functional imaging studies.

A: In our account, what generates prediction error is Factor 1: for example, in Capgras delusion, the prediction is that an autonomic response will occur when the face of one's spouse is seen, but that prediction is in error, since the predicted response does not occur. But detection of this prediction error would only occur if the system that detects such errors is intact. And the job of this system is to generate hypotheses to account for these errors: a delusional hypothesis would not occur unless this function of the prediction-error system were also intact. Our understanding of your model is that there is something wrong with the prediction-error system in people with delusions. As for right dorsolateral prefrontal cortex, we associate Factor 2 with this region, believing that damage to the region results in impairment of the belief evaluation system.

Thursday, 20 March 2014

On 17 March 2014, Kengo Miyazono organised a public engagement event as part of the Arts and Science Festival at the University of Birmingham. The main theme of the event was a reflection on the importance of psychiatric diagnosis in establishing whether someone is responsible for committing a crime.

The event consisted of several activities: a talk by Matthew Broome (Psychiatry, Oxford) on a case study he had written, featuring a man with schizophrenia who commits a crime; a brief commentary by Lisa Bortolotti (Philosophy, Birmingham) explaining how the considerations made about the case study could inform the debate on the recent Breivik case in Norway; a question-and-answer session with the audience; and a discussion session for which the audience split into two groups. Doctoral students Sarah-Louise Johnson, Rachel Gunn and Ben Costello also contributed to the discussion. Additional activities are planned this week on the event website, Mental Illness: Philosophy, Ethics and Society.

Tuesday, 18 March 2014

My name is Richard Dub. I'm currently a postdoctoral fellow at the Swiss Centre for the Affective Sciences, and I recently received my Ph.D. in Philosophy from Rutgers University. In my dissertation, I offered a model of delusions that attempted to answer two questions: What are delusions? How are they formed? Delusions, I argue, are pathological acceptances formed on the basis of pathological cognitive feelings.

Neither 'acceptance' nor 'cognitive feeling' is an entirely mainstream concept. A concern that motivates a lot of my work is that it is procrustean to try to explain all mental phenomena in terms of a select few propositional attitudes. There is little reason to insist that belief and desire must take their traditional place of prominence. The mind is lush, not sparse. The ordinary concept of belief is likely what Ned Block calls a "mongrel concept": a concept that imperfectly picks out various dissimilar cognitive states.

Saturday, 15 March 2014

I am curious about a couple of things.

First, you say that the prediction error signal fails in our model. Are you implying that we believe delusions form in the absence of prediction error? Our data point to the opposite case. Prediction errors are inappropriately engaged in response to events that ought not to be surprising. That is why people with delusions learn about events (stimuli, thoughts, percepts) that those without delusions would ignore.
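The learning-theoretic idea at issue here can be illustrated with a toy associative-learning update in the Rescorla-Wagner style. This is only an illustrative sketch, not Corlett and colleagues' actual model: the parameter values and the decision to inject the aberrant signal as additive noise on the error term are simplifying assumptions of mine.

```python
# Toy Rescorla-Wagner-style update. Hypothetical sketch: "aberrant_noise"
# is a simplifying stand-in for a prediction error signal that fires even
# when an event is fully expected.

def rw_update(v, outcome, alpha=0.1, aberrant_noise=0.0):
    """One step of associative learning: move the prediction v toward the outcome."""
    delta = (outcome - v) + aberrant_noise  # prediction error
    return v + alpha * delta

v = 1.0  # fully learned association: the outcome is completely expected
print(rw_update(v, outcome=1.0))                      # 1.0 -> no further learning
print(rw_update(v, outcome=1.0, aberrant_noise=0.5))  # 1.05 -> spurious updating
```

On this toy picture, a perfectly predicted event generates zero error and no belief change, whereas an aberrant error signal makes even an unsurprising event drive further updating, which is the pattern the paragraph above describes.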

Second, you claim that prediction error is
key to belief updating in your model. Are you aligning prediction error with
Factor 1 or Factor 2? It seems Factor 1, but I wanted to check – particularly
since you align Factor 2 with the functioning of right dorsolateral prefrontal
cortex, which, as you know, we’ve implicated in prediction error signaling with
our functional imaging studies.

Thursday, 13 March 2014

On 12th March, the Department of Philosophy at
the University of Birmingham held a workshop on Belief, as part of the Birmingham workshops in
philosophy series. Papers were given by Scott Sturgeon (Birmingham),
Rae Langton (Cambridge),
and Susanna Siegel (Birmingham/Harvard). Sturgeon opened the workshop with his paper ‘Epistemic
Attitudes: the Tale of Bella and Creda’. He was interested in three questions.
First: which are our epistemic attitudes? Second: Do elements of a given
attitudinal space reduce to others in that space? Third: Do elements of a given
attitudinal space reduce to others in a different space? The focus was on this third question, and more specifically on the Belief-first view (credences reduce to beliefs) versus the Credence-first view (beliefs reduce to credences).

Tuesday, 11 March 2014

I'm Max Coltheart, Emeritus Professor of Cognitive Science
at the Centre for Cognition and its Disorders and Department of Cognitive
Science, at Macquarie University. I work in cognitive neuropsychology
(especially developmental dyslexia) and cognitive neuropsychiatry (especially
delusional belief). I am also interested in the use of functional neuroimaging
to attempt to test cognitive theories. I did a joint undergraduate degree in
psychology and philosophy at the University of Sydney and have worked with the
philosophers Martin Davies, John Sutton and Peter Menzies.

Since the late 1990s my colleague Robyn Langdon and I,
initially in collaboration with Martin Davies and the clinical
neuropsychologist Nora Breen, and later in collaboration with various others
especially Ryan McKay, have been developing a cognitive-level theory of the
genesis of delusions which we call the two-factor theory. Our view is that
scientific understanding of any delusional condition has been achieved when the answers
to two questions have been discovered. Question 1 is: what caused the idea that
is the content of the delusion to first come to the deluded person’s mind?
Question 2 is: what caused this idea – a candidate for belief - to be accepted
as a belief, rather than rejected, which is what ought to have happened because
there is so much evidence against it?

Thursday, 6 March 2014

On May 8th and 9th there will be a two-day workshop at the University of Birmingham, discussing some of the key themes in the Epistemic Innocence project.

The workshop will be one of the means by which the project interim results are disseminated, and will promote an exchange between philosophers and psychologists on the potential pragmatic and epistemic benefits and costs of beliefs, memories, implicit biases, and explanations. It will also be a venue for Imperfect Cognitions network members to meet and talk about their research, and think about potential areas for future collaboration.

The workshop is funded by the Arts and Humanities Research Council, and has also received the support of the Analysis Trust. This made it possible to subsidise the registration fee, and award workshop bursaries to the graduate students attending. Abstracts for the talks can be found here. If you wish to participate, register by 15th March 2014 or follow live-tweeting by @epistinnocence.

Many philosophers now believe that the self is in some way constructed
by narrative; through socio-linguistically mediated story-telling, we achieve diachronic
unity by taking a reflective stance on our experiences. According to the strong
formulation of this thesis, conscious beings only develop selves once they acquire
the higher-order linguistic and reflective capacities required for
autobiographical self-understanding.