Friday, June 01, 2018

I've done a lot of empirical work on the apparently meager practical effects of studying philosophical ethics. Although most philosophers seem to view my work either neutrally or positively, or have concerns about the empirical details of this or that study, others react quite negatively to the whole project, more or less in principle.

About a month ago on Facebook, Samuel Rickless did such a nice job articulating some general concerns (see his comment on this public post) that I thought I'd quote his comments here and share some of my reactions.

First, My Research:

* In a series of studies published from 2009 to 2014, mostly in collaboration with Joshua Rust (and summarized here), I've empirically explored the moral behavior of ethics professors. As far as I know, no one else had ever systematically examined this question. Across 17 measures of (arguably) moral behavior, ranging from rates of charitable donation to staying in contact with one's mother to vegetarianism to littering to responding to student emails to peer ratings of overall moral behavior, I have found not a single main measure on which ethicists appeared to act morally better than comparison groups of other professors; nor do they appear to behave better overall when the data are merged meta-analytically. (Caveat: on some secondary measures we found ethicists to behave better. However, on other measures we found them to behave worse, with no clearly interpretable overall pattern.)

* In a pair of studies with Fiery Cushman, published in 2012 and 2015, I've found that philosophers, including professional ethicists, seem to be no less susceptible than non-philosophers to apparently irrational order effects and framing effects in their evaluation of moral dilemmas.

* More recently, I've turned my attention to philosophical pedagogy. In an unpublished critical review from 2013, I found little good empirical evidence that business ethics or medical ethics instruction has any practical effect on student behavior. I have been following up with some empirical research of my own with several different collaborators. None of it is complete yet, but preliminary results tend to confirm the lack of practical effect, except perhaps when there's the right kind of narrative or emotional engagement. On grounds of armchair plausibility, I tend to favor multi-causal, canceling explanations over the view that philosophical reflection is simply inert (contra Jon Haidt); thus I'm inclined to explore how backfire effects might on average tend to cancel positive effects. It was a post on the possible backfire effects of teaching ethics that prompted Rickless's comment.

Rickless: And I’ll be honest, Eric, all this stuff about how unethical ethicists are, and how counterproductive their courses might be, really bothers me. It’s not that I think that ethics courses can’t be improved or that all ethicists are wonderful people. But please understand that the takeaway from this kind of research and speculation, as it will likely be processed by journalists and others who may well pick up and run with it, will be that philosophers are shits whose courses turn their students into shits. And this may lead to the defunding of philosophy, the removal of ethics courses from business school, and, to my mind, a host of other consequences that are almost certainly far worse than the ills that you are looking to prevent.

Schwitzgebel: Samuel, I understand that concern. You might be right about the effects. However, I also think that if it is correct that ethics classes as standardly taught have little of the positive effect that some administrators and students hope for from them, we as a society should know that. It should be explored in a rigorous way. On the possibly bright side, a new dimension of my research is starting to examine conditions under which teaching does have a positive measurable effect on real-world behavior. I am hopeful that understanding that better will lead us to teach better.

Rickless: In theory, what you say about knowing that courses have little or no positive effect makes sense. But in practice, I have the following concerns.

First, no set of studies could possibly measure all the positive and negative effects of teaching ethics this way or that way. You just can’t control all the potentially relevant variables, in part because you don’t know what all the potentially relevant variables are, in part because you can’t fix all the parameters with only one parameter allowed to vary.

Second, you need to be thinking very seriously about whether your own motives (particularly motives related to bursting bubbles and countering conventional wisdom) are playing a role in your research, because those motives can have unseen effects on the way that research is conducted, as well as the conclusions drawn from it. I am not imputing bad motives to you. Far from it, and quite the opposite. But I think that all researchers, myself included, want their research to be striking and interesting, sometimes surprising.

Third, the tendency of researchers is to draw conclusions that go beyond the actual evidence.

Fourth, the combination of all these factors leads to conclusions that have a significant likelihood of being mistaken.

Fifth, those conclusions will likely be taken much more seriously by the powers-that-be than by the researchers themselves. All the qualifiers inserted by researchers are usually removed by journalists and administrators.

Sixth, the consequences on the profession if negative results are taken seriously by persons in positions of power will be dire.

Under the circumstances, it seems to me that research that is designed to reveal negative facts about the way things are taught had better be airtight before being publicized. The problem is that there is no such research. This doesn’t mean that there is no answer to problems of ineffective teaching. But that is an issue for another day.

My Reply:

On the issue of motives: Of course it is fun to have striking research! Given my general skepticism about self-knowledge, including of motives, I won't attempt self-diagnosis. However, I will say that except for recent studies that are not yet complete, I have published every empirical study I've done on this topic, with no file-drawered results. I am not selecting only the striking material for publication. Also, in my recent pedagogy research I am collaborating with other researchers who very much hope for positive results.

On the likelihood of being mistaken: I acknowledge that any one study is likely to be mistaken. However, my results are pretty consistent across a wide variety of methods and behavior types, including some issues specifically chosen with the thought that they might show ethicists in a good light (the charity and vegetarianism measures in Schwitzgebel and Rust 2014). I think this adds to credibility, though it would be better if other researchers with different methods and theoretical perspectives attempted to confirm or disconfirm our findings. There is currently one replication attempt ongoing among German-language philosophers, so we will see how that plays out!

On whether the powers-that-be will take the conclusions more seriously than the researchers: I interpret Rickless here as meaning that they will tend to remove the caveats and go for the sexy headline. I do think that is possible. One potentially alarming fact from this point of view is that my most-cited and seemingly best-known study is the only study where I found ethicists seeming to behave worse than the comparison groups: the study of missing library books. However, it was also my first published study on the topic, so I don't know to what extent the extra attention is a primacy effect.

On possibly dire consequences: The most likely path for dire consequences seems to me to be this: Part of the administrative justification for requiring ethics classes might be the implicit expectation that university-level ethics instruction positively influences moral behavior. If this expectation is removed, so too is part of the administrative justification for ethics instruction.

Rickless's conclusion appears to be that no empirical research on this topic, with negative or null results, should be published unless it is "airtight", and that it is practically impossible for such research to be airtight. From this I infer that Rickless thinks either that (a) only positive results should be published, while negative or null results remain unpublished because inevitably not airtight, or that (b) no studies of this sort should be published at all, whether positive, negative, or null.

Rickless's argument has merit, and I see the path to this conclusion. Certainly there is a risk to the discipline in publishing negative or null results, and one ought to be careful.

However, both (a) and (b) seem to be bad policy.

On (a): To think that only positive results should be published (or more moderately that we should have a much higher bar for negative or null results than for positive ones) runs contrary to the standards of open science that have recently received so much attention in the social psychology replication crisis. In the long run it is probably contrary to the interests of science, philosophy, and society as a whole for us to pursue a policy that will create an illusory disproportion of positive research.

That said, there is a much more moderate strand of (a) that I could endorse: Being cautious and sober about one's research, rather than yielding to the temptation to inflate dubious, sexy results for the sake of publicity. I hope that in my own work I generally meet this standard, and I would recommend that same standard for both positive and negative or null research.

On (b): It seems at least as undesirable to discourage all empirical research on these topics. Don't we want to know the relationship between philosophical moral reflection and real-world moral behavior? Even if you think that studying the behavior of professional ethicists in particular is unilluminating, surely studying the effects of philosophical pedagogy is worthwhile. We should want to know what sorts of effects our courses have on the students who take them and under what conditions -- especially if part of the administrative justification for requiring ethics courses is the assumption that they do have a practical effect. To reject the whole enterprise of empirically researching the effects of studying philosophy because there's a risk that some studies will show that studying philosophy has little practical impact on real-world choices -- that seems radically antiscientific.

Rickless raises legitimate worries. I think the best practical response is more research, by more research groups, with open sharing of results, and open discussions of the issue by people working from a wide variety of perspectives. In the long run, I hope that some of my null results can lay the groundwork for a fuller understanding of the moral psychology of philosophy. Understanding the range of conditions under which philosophical moral reflection does and does not have practical effects on real-world behavior should ultimately empower rather than disempower philosophy as a discipline.

13 comments:

Anonymous said...
I don't think I've ever assumed that philosophical ethics makes people more ethical in real-time, or even over the course of one person's life span. Human behavior, statistically averaged over many people, is far too robust.

The argument I'd always heard was instead that the study of ethics helped change societies over generational time scales. The moral cases against slavery and racism and sexism and ableism, the rights of children, the rights of animals, basic human/civil rights, etc., have all been strongly influenced by philosophical work over centuries.

Given this larger-scale defense of philosophical ethics, how much should anyone in philosophy really be worried about your fascinating work on real-time effects?

I've always wondered why anyone is surprised by this. An ethics professor is defined by having a certain kind of theoretical knowledge and training in it, not any kind of training in practical knowledge, practical behavior, discipline in action, etc.

I'm more struck by how, lacking empirical evidence of effectiveness, people are willing to sell philosophy to the public (contrary to the other commenter, I have seen time and again how ethics classes, especially in professional schools, are sold as having immediate impacts). There is so little willingness (or even interest) in trying to test such assumptions that faculty have no real response to administrators or taxpayers as to why we should keep devoting time and funding to such efforts. Even claims of teaching "critical" thinking more generally are called into question by this kind of either cognitive bias or willful ignorance. dmf

I'm just a philosophy student, so I don't think I'll be able to add much to this. But I'd be curious to know the reasons why someone may not act ethically. For example, if an ethics professor does not donate to charity, what would be her reasons? Is it that she just doesn't think it matters that much (which is not a really good reason), or that she thinks charities handle donated money ineffectively? Does the professor who is not replying to students simply have no interest in engaging with them, or is it a more structural issue, since many schools are not invested in the humanities while also giving professors more responsibilities without additional resources? At my school we keep losing professors because the school initially hires them as adjuncts and often fails to offer them a better position before another school does. I don't think the objection is that only positive results should be published, or that no results, whether positive, negative, or neutral, should be published; I think it would just be difficult to determine the right conclusion to draw from the data without considering so many other things, both individual reasons and institutional issues.

Is it not possible, as well, that a certain 'saturation effect' comes into play within a society? Maybe the fact that we do have our ethics and morals philosophers occasionally speaking up for what is right creates a society-wide effect of morality?

Kind of off topic: I think it'd be worth looking at the backgrounds of ethics students and whether they had problems in their lives with regard to ethics, and that's why they took it up. I.e., ethics isn't teaching bad habits; it's attracting those whose habits would include stealing ethics books. If many psychologists are attracted to that field because they feel they have mental problems themselves (or so I've heard - maybe that's utter rubbish), that's a similar pattern, and we wouldn't say teaching psychology gives people mental problems.

Indeed what if the ethics student was even more inclined to steal books before doing the classes - what if what seems worse during the course has actually become better than how they were before the course?

But yes, hiding the potential flaws of ethics courses...kind of seems to go against ethics.

On the other hand, though, if there is some confounding variable (say, ethics students had behavioral issues before they joined the course), and these studies expose ethics courses to media who won't pause to check whether other issues are the cause, what then?

I'm reminded of a business ethics course which included a jail visit to talk to accountants who had engaged in fraudulent activities. The authorities were initially suspicious that it was to pick up practice tips.

The problem lies in how ethics is taught: as an intellectual subject. In the ancient schools, education was tightly woven into everyday life. So students of Epictetus or Socrates went ahead and applied philosophy in everyday life.

I find it hard to believe that any ethicist would characterize her work or her teaching as aiming to make people better simpliciter, especially on metrics like returning library books or calling your mom. In large part, the methodology in ethics presupposes agreement on core cases like murder, theft, and promising, and tries to work out what to say in difficult or non-ideal cases. What we hope to give our students is a vocabulary and toolkit for working through tough cases on their own. That, plausibly, is what professional ethics courses aim at teaching. I have a hard time thinking that university administrators believe that nursing and business students who take ethics classes will return more library books, stop eating meat, or call their moms more often.

There are two ways people can act badly: by acting contrary to their values (weakness of will), or by having and acting on bad values (wickedness).

Most of Schwitzgebel and Rust's tests of moral philosophers' behavior focus on fairly uncontroversial moral questions. They are thus asking whether training in moral philosophy helps people to avoid weakness of will. But why would anyone think that the purpose of moral philosophy is to strengthen the will?

Most moral philosophy courses focus on moral issues that are or ought to be controversial. We are giving people analytical tools to think through hard questions. These tools aren't designed to address weakness of will. They are designed to address wickedness, including forms of wickedness that are socially prevalent.

Ethics courses also give people greater awareness of the range of ethical views that are out there. An example: it is important for people in business to know that people disagree about whether consensual, mutually beneficial transactions can be wrongfully exploitative. It is important to know about the disagreement and the reasons for it no matter what view one holds and no matter what view is correct.

If one wants to test the effects of moral philosophy education, one should test its effects on people's moral beliefs. One should do this without assuming that one's own views about controversial moral questions (e.g. the ethics of eating meat) are the correct answers. It would also be helpful to test how ethics courses affect awareness of the extent of moral disagreement.

I wonder whether Schwitzgebel and Rust's studies of moral philosophers' behavior would have the same outcome if metaethicists defending skeptical views were excluded.

Do you know that things like the French Revolution and the US Constitution are based on the works of ethicists? If nobody knew about their ideas, the world would be a much worse place. And it probably would be a better place if some ethical insights had been known in society earlier, in time to influence politics.

I have read this multiple times now, that critical thinking classes don't make one a more critical thinker and that ethics classes do not make one more ethical. I am ordinarily very receptive to empirical evidence, but in these cases, I find it hard to square the reports that neither makes a difference with my personal experience. I'm not just saying I feel it, which could be dismissed as biased poor self-knowledge or something. I'm saying, before I took logic and critical thinking, I didn't notice fallacies and patterns in arguments; afterward I did, and the effects have endured to this day. Before I took philosophy in my 20s (I was a late arrival), I didn't think about some ethical/moral issues in some ways, for instance generally taking my parents for granted and imposing upon them regularly; afterward I did, and the effects have endured to this day. So I realize that my personal behavioral changes are logically compatible with findings that these classes generally don't work -- a pedant could argue that I'm an amazing exception -- but I do not believe I'm amazing. I rather suspect the empirical studies are digging in the wrong place. (Yes, I'm reffing Raiders of the Lost Ark.)