News feminist philosophers can use

Author: michaelsbrownstein

Over at the Brains Blog, there is an excellent Symposium on Alex Madva’s new paper, “A Plea for Anti-Anti-Individualism,” which was recently published in Ergo. There are replies from Saray Ayala-Lopez, Sally Haslanger, and Jenny Saul. Check it out here.

There has been some skepticism about the claim that 1 in 5 women is the victim of a sexual assault during college. The Association of American Universities came out with survey results from 27 schools in September 2015, which seemed to support the 1-in-5 number, but the survey was roundly criticized, among other things, for its low response rate (19%). A new survey has been released by the Bureau of Justice Statistics. It has improved questions, a much higher response rate (over 50%), and some sophisticated analyses to adjust for non-response bias. It yields nearly the same finding: 21% of women at the 9 campuses surveyed were victims of a sexual assault during college.
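The BJS report's actual adjustment is more sophisticated than this, but the basic idea behind one common non-response correction, inverse-probability weighting, can be sketched in a few lines of Python. All strata and numbers below are made up for illustration; they are not from the survey:

```python
# Illustrative sketch of inverse-probability weighting for non-response.
# Respondents in each stratum are up-weighted by 1 / (response rate), so
# they stand in for the non-respondents in their stratum.

def weighted_prevalence(groups):
    """groups: list of (n_sampled, n_responded, n_reporting) per stratum."""
    total_weighted_cases = 0.0
    total_weighted_respondents = 0.0
    for n_sampled, n_responded, n_cases in groups:
        response_rate = n_responded / n_sampled
        weight = 1.0 / response_rate
        total_weighted_cases += n_cases * weight
        total_weighted_respondents += n_responded * weight
    return total_weighted_cases / total_weighted_respondents

# Two hypothetical strata with very different response rates
groups = [
    (1000, 600, 130),  # 60% response rate
    (1000, 300, 60),   # 30% response rate
]
print(round(weighted_prevalence(groups), 3))  # → 0.208
```

The weighting matters when response rates and prevalence differ across strata; with a single stratum it reduces to the raw respondent proportion.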

A new study published in Proceedings of the National Academy of Sciences reports that men and women are not equally receptive to experimental evidence of gender bias in STEM settings. Ian Handley and colleagues reported the results of three experiments. In the first and second experiments, men and women read an actual article abstract from a peer-reviewed scientific journal, accompanied by the article’s publication information and the first author’s full name. In the first experiment, participants were M-Turk workers; in the second, they were male and female STEM and non-STEM faculty. The abstract used in experiments 1 and 2 was from Corinne Moss-Racusin and colleagues’ (2012) PNAS article reporting gender bias in science faculty’s hiring decisions. In the first experiment of the new Handley study, men were significantly more likely than women to evaluate the abstract negatively. In the second experiment, male faculty in STEM departments displayed the same pattern: they evaluated the Moss-Racusin et al. (2012) abstract more negatively than female faculty in STEM departments did. Among non-STEM faculty, men and women gave comparable evaluations. Finally, in the third experiment, Handley and colleagues replicated the main effect using a different abstract (from Knobloch-Westerwick et al. (2013)), which reports gender bias in reviews of scientific conference submissions. However, when the authors altered the abstract to report no gender bias, they found that women evaluated it more negatively than men.

This study has some obvious implications. The authors focus on the worry that no amount of evidence attesting to pervasive gender biases will be sufficient to convince skeptics, if gender biases are affecting skeptics’ assessments of that evidence.* They also discuss potential mechanisms driving these effects, in particular the idea that male faculty in STEM departments might find evidence of gender bias (perhaps implicitly) threatening (in accord with “Social Identity Theory”). More research on this is clearly needed.

What I want to consider briefly is the notion of “bias” at work in this study, and in coverage of it. David Miller, for example, describes the third experiment as showing that “women have their own biases” (here). Commenters have made similar points on Facebook. This is understandable, and is certainly true as a general point, since all human beings have biases, and women are human beings! Handley and colleagues saw a clear reversal in evaluations; when the abstract(s) reported gender bias, men were harsher, and when the abstract(s) reported no gender bias, women were harsher. The authors themselves point out that “individuals [not just men] are likely to demonstrate a gender bias toward research pertaining to the mere topic of gender bias in STEM” (3). One reason they conclude this is that the biases they detected were only relative to each other. There was no condition controlling for the effect of gender on participants’ evaluations.

However, it seems right to conclude from these particular findings that both men and women are biased only if there is no means of independently assessing the quality of the evidence in the abstracts.** If it is true, though, that gender bias is pervasive in the domains described in the study materials, then women who give positive evaluations of studies finding gender bias, and negative ratings to studies not finding gender bias, are accurate, not biased.***

Similarly, if we presume that women (especially female STEM faculty) are more informed about research on gender bias than men, then we might give their abstract evaluations more credence.**** I’m grateful to Alex Madva for this point; he suggests an analogy: if a group of climate scientists negatively evaluated abstracts denying the existence of climate change, and a group of people who are not climate scientists rated the same abstracts positively, would we conclude that “everyone has their biases?”

Thanks to Alex Madva, Daniel Kelly, and Jennifer Saul for helpful suggestions on this post.

*Jennifer Saul has discussed similar concerns about the effects of implicit biases here.

** How might researchers at least approximate an assessment of the abstracts independent of rater-gender? Perhaps a team of independent mixed-gender reviewers? Or an average of all reviews, against which the ratings of men and women could be compared separately? Or simply compare the evaluations of abstracts by gender against the results of a meta-analysis of similar studies?

***Of course, gender bias could be truly pervasive in these domains, and it could still be the case that any one study purporting to demonstrate gender bias is of low quality. Note, though, that study participants were only asked to evaluate their agreement with the authors’ interpretation of the results in the abstract, the importance of the research, how well-written the abstract was, and what its overall quality was. If one believes that gender bias is pervasive, and reads an abstract reporting gender bias, one is likely to give positive answers to these questions. (Moreover, participants’ answers to these four questions were highly correlated, suggesting that they were answering based on an overall sense of the accuracy of the study’s findings.) Perhaps this is a limitation of the Handley et al. study. It would be interesting to find out whether asking other questions would affect the results, such as “how rigorous do you think the study’s methodology is?” or “how much does this data contribute to the overall case for finding gender bias (or its absence) in STEM fields?”

****The authors did examine whether the amount of experience a person has had with gender discrimination correlated with their evaluations of the abstracts. (These data are found in the supplementary materials.) For women, they found no correlation. Interestingly, for men, they did find a correlation. The more (“reverse”) gender bias men reported having personally experienced, the more harshly they rated the abstracts.

Yesterday the journal Science published the results of the Open Science Collaboration’s effort to replicate 100 studies published in three top psychology journals (here). The results are arresting: overall, replication effects were half the magnitude of the original effects, and only 36% of replications had statistically significant results. The results were particularly bad for social psychology, for which only 14 of 55 studies were replicated (on the basis of significance testing).

The title of today’s coverage on Slate captured what seems to be a widespread reaction: “That Amazeballs Scientific Study You Just Shared on Facebook Is Probably Wrong, Study Says.” But is this really what the study says?

It’s worth reading the actual article in Science, rather than just the headline. For example:

Almost none of the replications contradicted the original studies. Instead, the effects in many of the replications were significantly weaker than the original effects. The replication efforts therefore don’t tell us that the findings of any particular study that didn’t replicate were false. Rather, they tell us that the evidence for those findings being true is considerably weaker than we might have thought.

It appears that the best predictor of replication success for any particular study was the strength of the original findings, rather than the perceived importance of the effect or the expertise/reputation of the original research team. In addition, surprising effects were less reproducible (surprise!), as were effects that resulted from more difficult/complicated experimental scenarios.

This is not a problem in psychology alone. It has been reported that in cell biology, only 11% and 25% of landmark studies replicated in two recent replication efforts. Moreover, there may be good reasons why social psychology studies are harder to replicate than other studies in psychology. As Simine Vazire points out (here), the phenomena social psychologists study are extremely noisy. She writes, “if we still don’t know for sure, after years of nutrition research, whether coffee is good for you or not, how could we know for sure after one study with 45 college students whether reading about X, thinking about Y, or watching Z is going to improve your social relationships, motivation, or happiness?” That said, the Science study points out other reasons why social psychology studies were particularly unlikely to replicate: social psychology journals have been particularly willing to publish under-powered studies with small participant samples and one-shot measurement designs.
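The link between under-powering and failed replication can be made concrete with a back-of-the-envelope calculation (a rough normal-approximation sketch, not anything from the Science paper): if the true effect is only half the size the original study reported, a replication run at the same sample size will usually miss statistical significance even though the effect is real.

```python
# Approximate power of a two-sided, two-sample test for a standardized
# effect size d with n participants per group (normal approximation).
import math

def power(d, n_per_group):
    se = math.sqrt(2.0 / n_per_group)   # std. error of the mean difference
    z_crit = 1.96                        # two-sided alpha = 0.05
    z = d / se - z_crit
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # Phi(z)

print(round(power(0.5, 64), 2))   # original effect size: power ≈ 0.81
print(round(power(0.25, 64), 2))  # half the effect at the same n: power ≈ 0.29
```

At 64 participants per group, a study of a true effect of d = 0.5 is adequately powered, but if the true effect is actually 0.25, the same design detects it less than a third of the time.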

There is, of course, something very unsettling about these findings. But in the big picture it seems to me that this article is a testament to science working well. (Or, maybe, like Churchill said of democracy, it is a testament to science being the worst form of inquiry . . . except for all the others.) The fact that one of the most important scientific journals has published this article is itself confidence-inspiring. Vazire quotes Asimov saying that “the point of science is all about becoming less and less wrong.” Or as the Science article puts it:

“After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. Humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation. The original studies examined here offered tentative evidence; the replications we conducted offered additional, confirmatory evidence. In some cases, the replications increase confidence in the reliability of the original results; in other cases, the replications suggest that more investigation is needed to establish the validity of the original findings. Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.”

Lauren Freeman
Department of Philosophy
University of Louisville
Lauren.Freeman@Louisville.edu

Although by no means mainstream, phenomenological approaches to bioethics and philosophy of medicine are no longer novel. Such approaches take the lived body – as opposed to the body understood as a material, biological object – as a point of departure. Such approaches are also invested in a detailed examination and articulation of a plurality of diverse subjective experiences, as opposed to reifying experience under the rubric of “the subject” or “the patient.” Phenomenological approaches to bioethics and medicine have broached topics such as pain, trauma, illness, death, and bodily alienation – to name just a few – and our understandings of these topics have benefitted from and are deepened by being analyzed using the tools of phenomenology.

There is also a rich history of approaching phenomenology from a feminist perspective. Combining these two approaches and methodologies has furthered our understandings of lived experiences of marginalization, invisibility, nonnormativity, and oppression. Approaching phenomenology from a feminist perspective has also broadened the subject matter of traditional phenomenology to include analyses of sexuality, sexual difference, pregnancy, and birth. Moreover, feminist phenomenological accounts of embodiment have also helped to broaden more traditional philosophical understandings and discussions of what singular bodies are and of how they navigate the world as differently sexed, gendered, racialized, aged, weighted, and abled. Feminist phenomenological accounts and analyses have helped to draw to the fore the complicated ways in which identities intersect and have made the case that if we are really to understand first person embodied accounts of experience, then a traditional phenomenological account of “the subject” simply does not suffice.

The aim of this special issue is to explore and develop the connections between feminist phenomenology, philosophy of medicine, bioethics, and health. The issue will consider on the one hand, how feminist phenomenology can enhance and deepen our understanding of issues within medicine, bioethics, and health, and on the other hand, whether and how feminist approaches to medicine, bioethics, and health can help to advance the phenomenological project.

Topics appropriate to the special issue include, but are not limited to, feminist phenomenological analyses and/or critiques of:

· Health/care in constrained circumstances (e.g., in prisons, as migrants, in conditions without secure health insurance)

· Sex and gender

· Rape, sexual violence, or domestic violence

· Transgender and trans* experiences of embodiment, health, or healthcare

· Intersex experiences of embodiment, health, or healthcare

· Death and dying

· Palliative care and end of life

· Caregiving for ill friends, family members, and children

· Pregnancy, labor, childbirth

· Miscarriage

· Abortion, contraception, sterilization

· Organ transplantation

· Cosmetic surgery

· Body weight

· Addiction

· Mental illness

· Physical and cognitive disability

Submission Information

Word limit for essays: 8000 words.

IJFAB also welcomes submissions in these additional categories:

· Conversations provide a forum for public dialogue on particular issues in bioethics. Scholars engaged in fruitful exchanges are encouraged to share those discussions here. Submissions for this section are usually 3,000–5,000 words.

· Narratives often illuminate clinical practice or ethical thinking. IJFAB invites narratives that shed light on aspects of health, health care, or bioethics. Submissions for the section are usually in the range of 3,000–5,000 words.

Deadline for submissions: February 1, 2017

Anonymous review: All submissions are subject to triple anonymous peer review. The Editorial Office aims to return an initial decision to authors within eight weeks. Authors are frequently asked to revise and resubmit based on extensive reviewer comments. The Editorial Office aims to return a decision on revised papers within four to six weeks.

Submissions should be sent to EditorialOffice@IJFAB.org indicating special issue “Feminist Phenomenology and Medicine” in the subject heading.

Call for Chapter Proposals
I am submitting a proposal to Rowman and Littlefield International for a volume on Shame as part of an already accepted series on moral psychology and emotions, which was submitted by Mark Alfano. I invite chapter proposals from all disciplines and areas of study. Scholarly work in feminist philosophy, psychology, anthropology, sociology, and law is especially welcome. Proposals dealing with corollary issues like resentment and anger are welcome, as long as they are clearly and appropriately related to the central topic of Shame.

Submission Details
Proposals should be between 200 and 300 words, include citations, and should clearly describe the author’s thesis and provide an overview of the proposed chapter’s structure. All proposals should be prepared for blind review, removing any reference to the author. As a separate document, authors should provide a short CV containing contact information and relevant publications, presentations, and/or research on Shame. Please email your submission to rlshamevolume@gmail.com with the subject line “Shame volume proposal from [your name].”
Deadlines
Abstracts Due: August 14, 2015
Notification of Acceptance: August 31, 2015
Finalized Draft Due: December 31, 2016

In his opinion in Texas Department of Housing and Community Affairs v. Inclusive Communities Project — which recognized “disparate impact” (i.e., discrimination without intent) as a legitimate basis for discrimination claims under the Fair Housing Act — Justice Anthony Kennedy writes, “recognition of disparate-impact liability under the FHA also plays a role in uncovering discriminatory intent: It permits plaintiffs to counteract the unconscious prejudices and disguised animus that escape easy classification as disparate treatment.” It’s terrific to see the Supreme Court seeming to recognize implicit bias as contributing to discrimination. And, as this Slate article points out, the decision raises interesting questions about moral responsibility and implicit bias.