
May 22, 2011

David Eagleman on Morality and the Brain

Should recent discoveries about the brain change our attitudes to moral responsibility and culpability? In this episode of the Philosophy Bites podcast, neuroscientist David Eagleman argues that they should.

Comments

About two minutes in, Dr. Eagleman follows up his mention of the Phineas Gage case with a reference to a "paper reported just a few years ago" that suggests a connection between one individual's pedophilia and his frontal lobe tumor. I would like to respectfully request a more detailed reference, so that readers (like myself) who are not as familiar with the medical literature on the subject can examine the case for ourselves.

Given the potential implications of such a connection, this paper deserves a very close and critical examination. Presently, I'm left wondering what other factors may have contributed to the change in this individual's behavior. Is it possible that the shame of being found out, or fear of getting caught again, contributed to the change in behavior? How drastic were the changes? How can the researchers be confident that he hasn't just gotten better at hiding, or repressing, his behavior?

Without satisfactory answers to these sorts of questions, the claim that his tumor was directing his behavior, and the inference that he ought not to be judged as though he were acting freely, seem to beg the question. Why explain his behavior in terms of biological determinism rather than in terms of free will?

I'm not saying there aren't satisfactory answers to these questions, nor do I expect Dr Eagleman to deal with all of them in a 12 minute interview. But I hope that you will consider posting a more detailed reference to help those of us who are interested in the topic learn more about the medical literature.

I felt that the speaker overstated the importance of neuroscience for ethics, as scientists not very familiar with philosophy tend to do. Claims about the relevance or significance of findings in neuroscience to ethical issues are often either trivial or outright false.

I'm no philosopher, but I have to agree with NChen that there was nothing new here regarding the free will/determinism debate. If one agrees that every event has a cause, then neuroscience is simply irrelevant. Obviously genetics and/or environment ultimately cause thoughts and behavior, because that's all there is; and neither is internal. Discovery of the neural processes that convert the external influences into thought and action is scientifically interesting but philosophically unimportant.

See, this is why most scientists make rotten philosophers, and even worse moral philosophers. It is easy to see what will happen to civilized society if scientism and scientists are allowed to dictate our anthropology and morality, unchecked and unregulated by philosophers and theologians. The prospects are frightening.

Dr Eagleman makes a fuzzy reference to the study of the man who blamed his paedophilic tendencies on his frontal lobe tumour. Anyone with knowledge of frontal lobe tumours and frontal lobe syndrome will know that pathology in this area causes disinhibition and accentuation of previously present tendencies. What we are not told is what this man was like pre-morbidly. Isn't it possible that the man already had paedophilic tendencies prior to his tumour, and that the ensuing frontal lobe syndrome merely disinhibited him to the point where those tendencies became more obvious and more brazen? We do not know, but for Dr Eagleman to assume that the paedophilic tendencies in this man can simply be reduced on this basis to a neurobiological anomaly is a hugely unwarranted leap. And I expect his enthusiasm for this model would be sorely tested if it were his own child who was subjected to sexual abuse from this man.

Of course in the legal system, it is not simply "a brain in front of the bench" (a pathetic example of reductive neuroscientist hubris). It is a WHOLE PERSON in front of the bench. Alva Noe (http://www.amazon.com/Out-Our-Heads-Lessons-Consciousness/dp/0809074656) and many others remind us that while the brain is a necessary element of what makes a conscious being, it is certainly not a sufficient element in our total understanding of sentience and personhood.

Nancey Murphy has an important contribution to make to this discussion in her book "Did My Neurons Make Me Do It?".

I am not overplaying this for effect. If one of my students turned in a paper about free will, moral judgment, or moral theory more generally, making the kinds of arguments that David Eagleman did, I would demand a rewrite, on the assumption that the student did not do any of the reading or pay attention in class at all. This isn't philosophy. These are musings by someone who thinks he knows something about philosophy, when really what he knows about (I assume) is neuroscience.

1. He uncritically accepts not just a libertarian conception of what free will is, but seemingly an incredibly strong one. He certainly assumes 'If I have no choice about X and X causes Y, I have no choice about Y.' Anyone with a passing familiarity with the philosophical debate on these issues knows that this is an extremely controversial assumption. I think most philosophers want to deny this claim. I am fine with David Eagleman endorsing it, but I would think a philosophy discussion would involve a defense of that principle, not the drawing out of obvious implications of the conjunction of this principle and the claim that most of our behavior is caused by things outside of our control. I say he seems to endorse an incredibly strong libertarian account because he seems to think that culpability requires that one not be subject to any forces outside of one's control. He seems to leave open the possibility of free will (by which I assume he means the ability to do otherwise) but says that it is constrained by neurological mechanisms and can't do that much. Well, this is something that most contemporary libertarians accept. It seems to me that much of the best libertarian work, by people like Randy Clarke, Tim O'Connor and Robert Kane, is about how constrained libertarian agency can be by mental states over which we lack direct control, while remaining morally responsible. David Eagleman seems to simply assume that it is either completely unconstrained libertarian agency or hard determinism. This is sophomoric. I do not mean that as an insult; I simply mean it is a thought characteristic of people taking their first metaphysics survey class, usually in their sophomore year of college.

2. There is a completely undefended assumption of consequentialism. Now I am a fan of consequentialism myself, but philosophy is about defending or attacking consequentialism, not simply asserting it.

3. He seems to have no idea what virtue is, or at least what most people have taken it to be throughout the history of ideas. He gives a definition of virtue which seems a lot more like Aristotle's account of the temperate man than of the virtuous man, while treating the original view of virtue in ethical theory as clearly false. Again, this is characteristic of students with no background in philosophy.

4. He seems deeply confused about personal identity and the philosophy of mind. He keeps contrasting us with our brain processes. That makes good sense if you are a dualist. But if you are a dualist, you likely don't think that brain processes are all that is going on in your making decisions and processing information. If you are a physicalist of some variety, you need to be more careful when saying things like 'That wasn't him, it was his brain' or 'He shouldn't be blamed, his brain did it.' He seems to be accepting physicalism at some points and implicitly rejecting it at others. This kind of moving back and forth without being aware of it is exactly the kind of mistake that philosophy classes help prevent.

I do not understand why he was invited to give this talk on a podcast called 'philosophy bites'. I understand that there is not much time for the interviews, but when actual philosophers (people with at least degrees in the field, and hopefully positions in academia) are on here they at least show that they understand what philosophy is about. David Eagleman didn't show that.

Having read all of the above comments, I wonder what the natural philosophers of the 16th century and earlier thought of the first scientists. Was natural philosophy ruined by physicists and chemists, or simply made irrelevant? Why are we to think that moral philosophy won't follow the same course as natural philosophy, with philosophers being replaced by scientists?

Joshua, the first scientists were natural philosophers. This wasn't the replacement of one discipline by another; it was the development of one part of a discipline in a very productive direction. But the great minds of science in the Enlightenment and pre-Enlightenment were very familiar with the history of philosophy and with what contemporary philosophers were saying, because they were contemporary philosophers.

As to your more substantive question, I think for science to replace philosophy entirely it would have to answer the questions that philosophers are trying to answer, and do so better than philosophers. And to do this the scientists would have to actually be asking the questions that philosophers ask. And by and large they don't. Eagleman is a great example. He simply assumes the answers to the philosophical questions and moves on. And that is fine. It is exactly what I would want people in the legal and scientific communities to do: work out the consequences of accepting some philosophical theory or other using the empirical knowledge that they have and I lack. But the kind of stuff Eagleman is doing can't replace philosophy, because it isn't even on the same subject as contemporary philosophy. He is asking questions that, by and large, philosophers have admitted they are not qualified to answer, because they are primarily empirical questions. You cannot replace questions like 'Should consequences be all that matters when it comes to punishment?' with questions like 'Given that consequences are all that ought to matter, how should we punish this person?' You can only stop asking the first question and move on to the second. So while people might stop doing philosophy, unless scientists change the questions they seem prepared to ask, I don't see science replacing it anytime soon.*

If all you meant was 'Will philosophy departments stop being funded, with the savings being directed towards the hard sciences?', I don't know. Maybe. I think it is also possible that art departments will be eliminated, that English departments will be pressured towards teaching technical writing and ESL classes more than literature and creative writing classes, and that foreign language classes will be taught primarily under the auspices of the military and State Department. There are all sorts of nasty directions that higher education could take. If you meant to be asking more than this, if you meant to be asking whether philosophy will die and whether this might be a good thing, then I think you should spend a little time saying why you think the questions philosophers are asking aren't worth asking. But be careful, because in the process of doing so you are probably going to end up doing some philosophy, and it might be better to be trained in the activity before you try to do it in public.

*Disclaimer: I really shouldn't be talking about the questions that all scientists ask, because I have no idea what questions they all ask. What I have in mind are the questions that scientists who explicitly claim to be doing philosophy tend to ask. People like Eagleman and Sam Harris ask questions that take for granted answers to the interesting philosophical questions, and so I don't see replacement of what philosophers do by what they do as at all in the cards, for the same reason that I don't think undergraduates should be teaching philosophy courses. Harris and Eagleman have amateur-level knowledge of the field (or so it seems from their public remarks).

Now on the other hand, I am reasonably sure that there are questions of great philosophical interest being asked in theoretical physics, theoretical biology, and social science departments. I am also reasonably sure that the people asking these questions (i.e. scientists) are better situated than most philosophers to answer them, because familiarity with the empirical research is a necessary condition of really understanding the questions, and lots of philosophers lack this empirical knowledge (and that is a pity). But I am also quite sure that these genuinely philosophical questions are about how to interpret the evidence and which theory, among a range of empirically and observationally adequate theories, to choose, given that the evidence underdetermines theories in lots of cases. And I think some familiarity with philosophy of science, epistemology, and metaphysics (especially the history of those fields) is useful in answering these questions.

Patrick, thank you for your reply to my inquiries. As a follow-up, would you care to comment on the success of philosophers (in this case, let us limit ourselves to moral philosophers) in answering the questions posed by moral philosophy? How does this compare to the success of scientists in the physical sciences of physics and chemistry in answering their questions? Is one valid measuring stick of scientists' success in these areas the progress that has occurred in technology, given that such advances in technology are only possible with our ever-improving understanding of the physical world? That is to say, better and better answers to the questions of science.

So you seem to be relying on an instrumentalist view of the value of science. This is something I reject. It seems to me that the great successes of science are not technological advances and aren't to be cashed out in terms of technological advances. It seems to me the great success of biology is the theory of evolution. Does the theory of evolution play a role in technological advancements? Yeah. You find out that one breed of snake is a recent offshoot of another species and you might be given clues about the range of venoms the first species of snake produces, and so have some idea of what range of antidotes to use. I am sure there are all sorts of other such uses for evolution. But those don't seem in the least, to me, to explain why the theory of evolution is a major intellectual triumph. It seems to me that it is the fact that it explains something very important that accounts for its status. I would say something similar about physics. Quantum mechanics is impressive, not because of its potential uses, but because it gets us closer to a fundamental account of what matter and energy are.

So you could measure the value of science by its technological feats, but this strikes me as confused. Why focus on the effects of what is important (explanations and accurate descriptions of the world) rather than on the thing you care about? Perhaps you mean to be endorsing the view that it is only by their potential technological implications that we can judge between adequate and inadequate theories. This strikes me as just false. There are facts about what black holes are like, and there is bound to be a theory that adequately describes those facts, and that theory is the right one, whether or not we ever develop any way to make use of its insights. And it seems pretty clear that actual scientists don't use this as their test for theoretical adequacy. They use tests like observational adequacy, simplicity, projectability, etc. So I am not going to make technological usefulness the test for scientific success any more than I am for ethics.

Science is successful because it offers better and better explanations of how the world works. Theories get better and better in the ways I just mentioned, and lots of others. I think the same standards apply to ethics. Ethical theories are better to the extent that they better explain and account for ethical facts. What are the ethical facts? Well, I think commonly held intuitive judgments have a good, though defeasible, claim to being among the facts under consideration (they seem to me to play the same role as observations on this score: observations are pro tanto reasons to accept or reject a theory; they provide defeasible warrant). Along with explaining such facts, ethical theories need to be simple, not make arbitrary distinctions, not appeal to empirical claims which are false, etc.

And considered that way, I think ethical theory has improved quite a bit over the history of philosophy. I think, for example, that philosophical ethics has improved a great deal in distinguishing carefully between different options. Consider the difference between Philippa Foot's views on ethics and Julia Driver's or Tom Hurka's. All three give virtue an important theoretical role, just as Aristotle (whom each can claim as an intellectual ancestor) did. But the distinction between Foot's theory, on which virtue plays a foundational explanatory role, and Driver's/Hurka's, on which it does not, is now well understood. It is not at all clear to me that this distinction was well understood in the ancient period. Philosophers have managed to provide accounts of ethics that do not depend at all on any theistic assumptions. This marks an improvement over where ethical theory seemed to be prior to, for example, Plato, and over where it seemed to return to in the Middle Ages. Philosophical ethics has incorporated the results of mathematics, logic, and the hard sciences in a way that more or less guarantees that there has been progress, because of the progress in those fields.

Now you might say there is as much disagreement as ever. But unless you think that convergence among investigators is the test for the adequacy of a scientific theory (something that I also believe is confused), this won't matter much. And even if it did, it would put philosophy in the same boat as the sciences, since on a wide range of issues there are significant disagreements between scientists. How did hadrons pick up mass? No agreement there. Is there a theory which unifies general relativity and the standard model of quantum mechanics? Again, there seems to be no agreement. To what extent are social/cultural habits in animals the result of biological evolution? I take it that there is still a big disagreement (witness the continued fights among biologists over the status of evolutionary psychology and evolutionary sociobiology). People overstate the unanimity of science. This lack of unanimity isn't a strike against it; it is simply a manifestation of how hard some of the questions are.

Before I finish I want to say one other thing about unanimity and convergence. I think it is very likely that ethics will never achieve convergence, while some scientific disciplines have achieved convergence on a wide range of issues. I think that ethics will never achieve convergence for an essentially political reason. There are many people made powerful by their ability to convince large numbers of people to hold one ethical theory rather than another. As long as this is true, there are going to be barriers to convergence among moral philosophers, because there will always be pressure to keep some ethical theories alive, and as long as those theories are out there in the culture, someone will decide to make a career out of defending them in academic writing. But this lack of convergence bothers me not at all (well, the cause of it bothers me, but the presence of disagreement does not). Again, the test of truth is not the silence it causes.

Thank you again, Patrick, for your insights. If I understand you correctly, I agree that the underlying value of progress in science is moving closer to the true nature of the physical. I used the word technology to represent the experimental validation of these improving scientific theories. Einstein's theories are better than Newton's because they "get us closer to a fundamental account of what matter and energy are." This is demonstrably true from what Einstein's theories allow us to do that Newton's did not; if we didn't have such a means test, then I would think that science and philosophy would be exactly the same in this regard. You noted the theories on ethics of Foot, Driver, and Hurka, how the distinctions between their theories are now well understood, and that this is progress. But as I see it, the progress from Newton to Einstein was not an understanding of the distinctions between the two theories; much more, it was that Einstein's theories improved our understanding of what matter and energy are. How are we to say that one theory of ethics is an improvement over another? You brought up the issue of politics and convergence. The difference between science and philosophy couldn't be clearer than with the space race between the USA and the USSR during the 1960s-70s. The two countries had very different political and social systems based on differing philosophical theories, but even with such massive differences, both countries' scientists and engineers applied the same scientific theories to engineer better and better technology to put a man on the moon. Yes, we do have disagreements in science, as we had with hand washing, where it took decades for people to agree; but we also have many examples of disagreements being put to rest, and I see no such cases in ethical philosophy.
You are clearly well educated and capable of making powerfully persuasive philosophical arguments, and if such powers were the means test in science as they are in philosophy, I wonder whether doctors would still be smoking cigars during surgery.

Patrick, in your first comment you point out a lot of seemingly bad assumptions made by Eagleman, but I think they can all be defused when you realize he is attacking the folk moral psychology that underlies our daily activities and major institutions.

You point out that the free will he criticizes is strongly libertarian (incompatibilist), strongly dualist, and views virtue through the lens of popular Christian notions of self-control, rather than sophisticated Aristotelian ideas.

But this is how most people view things--I see all these assumptions in my students every day--and this is the view he is rejecting. It is true that he doesn't have much new evidence against it. Modern neuroscience mostly just piles on new data for old arguments, for instance by giving us more Phineas Gage type cases. Still, he's basically right. This folk conception doesn't stand up in the face of ever-mounting evidence. The consequentialist approach to punishment he is proposing is unargued for, but it comes pretty naturally if you are interested in changing people's folk understanding in a way they will accept and that actually makes life better for people.

He may not address sophisticated versions of compatibilism, or Aristotle, or modern reductionist philosophy of mind. But in the larger picture, how important is that?