Anibal and I are having a lively exchange in the comments to Peter's post below on the forthcoming paper on "Voodoo Correlations in Social Neuroscience."

Because Adam has not yet forced me to give the keys back to his blog, I figured it might be easier for interested parties to see if I moved it to an actual post.

Regarding Peter's post, I commented:

This is fascinating, thanks for the links. A couple of points leap to mind: first, while this is a sophisticated critique which engages the scholarship on its own (quantitative) terms, there are to my mind some compelling theoretical and conceptual reasons for doubting many of the conclusions drawn by both scientists and lay people alike. (I'm thinking of Bennett and Hacker's notion of the mereological fallacy, Searle's critique of eliminative materialism, Pustilnik's trenchant analysis of the role neuroscience has played in criminal law, Racine, Bar-Ilan, and Illes's articles on neurofallacies and neurorealism, etc.)

These are, of course, the neurofallacies, but what I think is most interesting about the Vul et al. critique is that it essentially states that scientists rather than just laypersons and journalists have perpetrated these fallacies.

This leads to my second point: why do so many scientists, reporters, and laypersons alike seem to push these fallacies so hard? What is at stake? Why is it important? The SciAm article with Vul touches on it, but I want to suggest that these are actually crucial questions, ones that tell us a great deal about the culture of science and its role in American society. Very recent data documenting the large amounts of waste attributable to medical imaging are quite compelling, in my view, and Vul et al.'s paper lends even more urgency to the question of what it is about neuroimaging that seems so compelling, that drives so many to make the claims Vul and others critique.

In response, Anibal observed that

On the theoretical and conceptual side, it is also legitimate to say that not all agree with the mereological fallacy and other debunking statements about the sole and only matter of fact about neuroscience: the mind is what the brain does.

I responded:

Certainly it is true that not all agree with the mereological fallacy, nor did I state as much. I happen to think they are quite right, however, and if so, that has serious implications for the conceptual coherence of neuroimaging techniques.

Moreover, part of my argument above is to note that I do not remotely agree with the vast majority of neuroscientists that "the mind is [merely] what the brain does" (this is not just my impression -- there is literature demonstrating that this is exactly what neuroscientists do in fact believe). I added the "merely" to emphasize Searle's point, that of course the brain is a sine qua non for mind, in the sense that without a brain, there is no mind. The mistake that most interlocutors make, IMO, is to infer that mind is therefore nothing but/reducible to brain. That is an extremely serious error, one that is pervasive in neuroscience. Naturally, most neuroscientists do not think this is an error; it is in many ways one of the conceptual foundations of the entire field.

However, I have no problem whatsoever in stating that I think those who believe that the mind is reducible to the brain are badly mistaken.

Anibal queried:

There are a lot of epistemological and metaphysical positions about how to know the mind/brain, and what the mind/brain is... and I notice the gross error in blindly believing what neuroscience (a nascent field) tells us, as you did in mentioning "neurorealism" and other tendencies to accept neuroscientific data without criticism.

But if the mind is not reducible to the brain, how can we find valid mechanistical explanations about our mental life?

Where is mental life supposed to be grounded?

I think these are excellent questions, many of which I address in my forthcoming dissertation (Shameless Self-Promotion Alert). Here's what I would say to Anibal:

Why do we have to find "mechanistical" explanations about our mental life? That's the crucial (and exceedingly interesting) point, IMO. What does it mean if in fact there are aspects of our mental life that cannot be objectified, or categorized in mechanistic explanations, or quantitated? Why is this so difficult to imagine?

Of course, I hasten to add that I am not remotely subscribing to a notion of mentalism in which mental substances are simply floating in the ether. That's absurd; without a physical brain, there is no mind. Searle's point is that a notion of subjectivity is also a requisite component of consciousness. This subjectivity is not reducible, is not categorizable in mechanistic terms, and cannot be quantitated. Mental life is "grounded" in the brain but is not reducible to it. Neuroscientists typically move from the premise, which is absolutely correct, IMO, to the conclusion, which is absolutely fallacious, IMO, without a moment's hesitation.

But if Searle is correct, why is this such a problem? It does not mean we cannot study consciousness with the powerful objectifying modalities utilized in neuroscience. It simply means that not all phenomena that we think are important to our lived experiences, to our consciousness, can be explained mechanistically. And I honestly fail to see any reason why this is a serious problem, other than that it significantly challenges the tacit notion in neuroscience that eventually we can progress to the point where scientific modalities can explain everything and account for all of our lived experiences. This is an assumption, IMO, and one that typically goes unexamined. Science, and especially the science of the brain, is the great social legitimizer of the 20th and 21st centuries. But the possibility that it cannot explain everything really should not be threatening, for it hardly undermines the significant power of such science. It just means that not everything we might like to know about our mental life is susceptible to objectification, mechanistic explanation, and quantitation.

In my initial posting on this topic (in the comment section of one of Adam's posts), I suggested that there had already been sufficient (virtual and actual) ink devoted to this topic, and that I had nothing further to add. I still have nothing further to add but apparently I was wrong about the ink issue.

I won't go through the entire chronology, but I will tell you this. If you do a Google Blog search today using the term "Voodoo Correlations fMRI", you get 86 hits (and probably one more after I publish this post). At least one of the (now) historical hits is to our own Neuroethics & Law Blog, as Adam alerted us earlier this week that there was mention of these Voodoo correlations in the Wall Street Journal. Today, Scientific American has posted a long interview by science writer Jonah Lehrer with Ed Vul, the lead author on the (already) infamous (but still in press until September) paper on Voodoo Correlations in Social Neuroscience. [It is worth reading the invited reply by Lieberman et al. as well.]

As we gear up for the Stupor Bowl [sic], the newswires have exploded with news of the latest study of chronic traumatic encephalopathy ("CTE") in a former professional football player. A bit of context here:

What if the risks of traumatic brain injury that stem from playing football are much closer to the risks of TBI from combat sports (like MMA, which I have personal experience with, or professional boxing)? The rules and regulations for the latter are quite strict regarding neurological trauma, as the career of Joe Mesi amply illustrates.

In contrast, there are no rules and regulations regarding the treatment of concussions, or mild TBI ("mTBI"), in the NFL. The story here is extremely complicated, involving structural features of occupational medicine in the NFL, rampant conflicts of interest, and profound uncertainty regarding the sequelae of even a few concussions. Despite this uncertainty, the evidence is mounting that the neuropathological effects of concussions are protracted (such that even when persons appear uninjured 15 minutes after a concussion, they may in fact suffer neural damage), and that severe long-term neuropathology can be caused by relatively few concussions (though no one knows how many, it is very likely fewer rather than more), among other recent findings.

The latest findings stem from the efforts of Chris Nowinski, a former Harvard football player and professional wrestler, who has devoted much of his time and energy to this particular issue. He co-founded the Sports Legacy Institute, which sponsored the latest study.

In any case, the NFL is clearly on the defensive on this issue, as, IMO, they ought to be.

For some reason, Adam was kind enough to permit me to borrow this blog for some shameless self-promotion, so I can say that readers interested in knowing more about this issue are invited to check out an article of mine recently published in HEC Forum:

"Concussions, Professional Sports, and Conflicts of Interest: Why the National Football League’s Current Policies are Bad for Its (Players’) Health," HEC Forum 20, no. 4 (2008): 337-355.

Those among you with no interest in this issue should still check out HEC Forum, as the issue is a theme issue on Clinical Neuroethics Consultation, guest edited by Paul Ford. Lots of articles in there that the readers of this blog will probably find interesting.

A student writes in with the query below. If you have suggestions, feel free to post them in the comments:

I'm an undergrad interested in developing technologies to protect the privacy of the mind. I study computer science, and am planning on attending medical school (or possibly MD/PHD or another combined program). I'm wondering if you know of any schools [where] such research is taking place, if you know of any other programs I might be interested in, or if you know a good starting point for my searches.

I highly recommend the article, "Responsibility and the Brain Sciences," by Felipe de Brigard et al. in the Dec. 24, 2008 issue of Ethical Theory and Moral Practice. Here is the abstract:

Abstract: Some theorists think that the more we get to know about the neural underpinnings of our behaviors, the less likely we will be to hold people responsible for their actions. This intuition has driven some to suspect that as neuroscience gains insight into the neurological causes of our actions, people will cease to view others as morally responsible for their actions, thus creating a troubling quandary for our legal system. This paper provides empirical evidence against such intuitions. Particularly, our studies of folk intuitions suggest that (1) when the causes of an action are described in neurological terms, they are not found to be any more exculpatory than when described in psychological terms, and (2) agents are not held fully responsible even for actions that are fully neurologically caused.

The WSJ article I recently referenced addresses several other neurolaw and neuroethics topics, including the following study by Pashler et al. This study, as Peter Reiner has pointed out, deserves more attention on the blog:

Last month, psychologist Harold Pashler at the University of California at San Diego and his colleagues examined the analytical techniques used in 54 peer-reviewed fMRI brain-scanning studies covering personality traits and attitudes. They included those emotions that might crop up in a criminal trial, including rejection, anxiety, romance, fear, erotic stirrings, virtue, humor and happiness. Half the studies relied on statistical measures of brain activity so poorly analyzed that the findings were worthless, especially when researchers were attempting to assess individual differences, Dr. Pashler reported in Perspectives on Psychological Science. Yet they all had been published in prominent scientific journals.

"In the law, individual differences are the main focus," says Dr. Pashler. "And it often could come down to these voodoo statistics."

The Wall Street Journal recently ran an article about the study by Owen Jones and others at Vanderbilt on how we make punishment determinations:

"We take decision-making for granted, like breathing," says Vanderbilt law professor Owen Jones, who conducted the experiment with Vanderbilt neuroscientists Rene Marois and Joshua Buckholtz. "If you want a world in which judicial and jury decisions are fair, unbiased, sensible and reasonable, then we ought to understand a little bit about how it actually happens."

As a first step, they measured how our brain cells behave as we decide whether to punish someone accused of a crime when we have no personal stake in enforcement. The researchers tested 16 volunteers in a functional magnetic resonance imaging machine. The fMRI monitored the blood flow and oxygen demand associated with neural activity as each subject made two distinct legal judgments about blame and punishment in 50 hypothetical scenarios ranging from simple theft of a music CD to rape and murder.

No one part of the brain stands in judgment of others, they found. Instead, at least two areas of the brain assess guilt and assign an appropriate penalty. An area associated with analytical reasoning, called the right dorsolateral prefrontal cortex, became very active, they reported. But the decision process also electrified emotional circuits.

From the Oxford Uehiro Centre for Practical Ethics:

Announcement of Neuroethics Lectures: Walter Sinnott-Armstrong

Professor Walter Sinnott-Armstrong will be giving two Leverhulme lectures and a special ethics seminar on Neuroscience and Neuroethics at the University of Oxford as part of his Leverhulme Visiting Professorship programme 2008-10. Walter Sinnott-Armstrong is Professor of Philosophy and Hardy Professor of Legal Studies at Dartmouth College, and is Co-Director of the MacArthur Law and Neuroscience Project. See the extended post for full details of these lectures.

Recently, lawyers have tried to introduce neuroscientific evidence into several different types of trial. This paper explores the question of whether and to what extent neuroscientific evidence can be probative of legal issues and whether and to what extent it can confuse or mislead jurors.

President Barack Obama has promised to relax the Bush administration's restrictions on federal financing for such research. But Obama's ascent to the White House had nothing to do with the U.S. Food and Drug Administration's granting permission for the new study, Okarma said in a telephone interview Thursday.

In fact, the company says, the project involves stem cells that were eligible for federal funding under Bush, although no federal money was used to develop the experimental treatment or to pay for the human study.

Other human cells, called adult stem cells, have been tested before in people to treat heart problems, for example.

In the Geron study, the injections will be made in the spine at the site of damage. The work will be done in four to seven medical centers around the country, Okarma said.

Animal studies suggest that once injected, the cells will mature and repair what is essentially a lack of insulation around damaged nerves, and also pump out substances that nerves need to function and grow.

Apart from assessing safety, investigators will hope to see some signs of improvement in the patient, Okarma said. The idea is "not to make somebody ... get up and dance the next day," he said, but rather to provide some level of ability that can be improved by physical therapy.

The Program in Ethics and Brain Sciences (PEBS) is a collaborative neuroethics effort of the Johns Hopkins Berman Institute of Bioethics and the Johns Hopkins Brain Sciences Institute. The primary goal of PEBS is to ensure that research in brain science proceeds with an informed understanding of ethical issues, and that philosophical and empirical analyses of the advances in brain research proceeds with an informed understanding of the science. The PEBS News Roundup is a bi-weekly collection of links to new neuroethics stories and papers.

Sally Satel has an article at Forbes, see here, on the overuse of neuroscience to explain human behavior. Here's the introduction:

How did so many smart people fall for Bernard Madoff's Ponzi scheme? According to the latest pop science, their brain chemistry made them do it. Interviewed about the Madoff mess on National Public Radio in December, Camelia Kuhnen, an assistant professor at the [Kellogg School of Management] and an expert in the new field of neurofinance, declared: "Potential reward will increase activation in the brain's reward center, which will make you take more financial risk."

What an odd explanation. I don't deny that the chemical dopamine surges in the brain when we anticipate pleasure. I'm saying that observations such as professor Kuhnen's tell us no more about behavior than we already know. Namely, that the promise of money makes investors take risks.

Greg Miller has written an interesting one-page article in Science (limited access here) describing the recent conference at Stanford Law School on the use of brain imaging to assess pain. You can access some more commentary about the conference, along with the audio recordings here.

I will also use this opportunity to address a couple of claims in the article attributed to me. The article states that I argued "that pain detection is more likely to be the first fMRI application to find widespread use in the courtroom, in part because the neuroscience of pain is better understood."

Let me clarify/expand: I don't know if pain detection will be the first fMRI application to find widespread use in the courtroom, but compared to brain-based lie detection technologies, I do think that brain-based methods of pain detection (including both functional and structural neuroimaging) have certain distinct advantages. First, unlike lies, which can be made in a fraction of a second, the kind of pain that would likely be relevant in the courtroom is likely to extend over long periods of time. That may make pain a little bit easier to reliably detect. That is, we're observing a phenomenon that doesn't require precise time resolution. Second, we already have some evidence of correlations between chronic pain conditions and structural changes in the brain. If these correlations can reliably be detected and if these structural changes are not observed in subjects lacking pain, then we may have a way of detecting certain kinds of malingering. Assuming that it is difficult to deliberately fake the pertinent structural changes in the brain, countermeasures to the technology will be harder to come by. By contrast, it seems likely that there will be a number of countermeasures one can use against functional MRI.

As for whether the neuroscience of pain is better understood than the neuroscience of lies, I'm not sure if that's true. I'm happy to defer to neuroscientists on the matter. I do know that I have asked many neuroscientists about the relative plausibility of brain-based lie detection relative to brain-based pain detection. I would say that most, but not all, seem to concur with my general sentiments. More interestingly, though, few neuroscientists so far seem sufficiently versed in both technologies to make the comparison.

The article also says that "Kolber estimates that pain is an issue in about half of all tort cases, which include personal injury cases." What I think I said (or at least meant to convey) is the claim that I mention in this article about pain imaging that "pain and suffering awards may represent about half of personal injury damage awards" (sources on p.434). This figure is just an estimate, but it does reinforce the central point that lots of money changes hands in the legal system over hard-to-verify claims about pain and other subjective experiences.

It has frequently been claimed that the folk are incompatibilists about freedom and moral responsibility, that is, that they believe that freedom is not possible in a deterministic universe, and that if you are not free, you are not morally responsible.[3] Thus Kane claims, "In my experience, most ordinary persons start out as natural incompatibilists."[4] And Pereboom writes, "Beginning students typically recoil at the compatibilist response to the problem of moral responsibility."[5] Of course, other philosophers have suggested that the common view is actually compatibilist.[6]

The nature of our intuitions about free will and moral responsibility is not, however, purely a matter of a priori debate. What the folk think is an empirical question, and one which can be addressed by what is coming to be known as "experimental philosophy". Recently, experimental philosophical approaches have yielded conflicting results as to whether the folk are, in general, compatibilists or incompatibilists about moral responsibility. Nahmias and colleagues had subjects assume that determinism is true, and then judge whether an agent is blameworthy under those circumstances. They found that subjects tended to say that the agent was blameworthy.[7] Using a different experimental design, Nichols & Knobe (2007) presented subjects with a description of an alternate universe that is deterministic, and they found that subjects tended to say that agents were not responsible in that universe.[8]

This pattern of conflicting results suggests that subtle features about the way questions about moral responsibility are framed may have an effect upon our intuitions. There are a number of differences between the experiments done by Nahmias et al. and those done by Nichols and Knobe. But for present purposes, we are interested in just one of these differences. In some of the experiments by Nahmias et al., the scenario was depicted as holding of our own world, whereas in the experiments by Nichols & Knobe, the scenarios were always set in an alternate universe. For our experiment here, we wanted to see what would happen if we posed questions that differ only in whether the hypothetical situation was set in our own universe or another. On the face of it, one would expect that responses to a hypothetical situation would depend solely upon features of that situation, and not upon the subject’s relation to that situation. Thus, one might expect that responses to questions about freedom and responsibility would be independent of whether those questions were couched in terms of our universe, or another universe just like ours. In the following experiment, we tested this expectation.