Tag Archives: Situationist Blog

Diederik Stapel, a prominent Dutch social psychologist, has admitted to fabricating data for dozens of published studies, as has been reported by New Scientist and Nature.

The full report on the extent of Stapel’s fraud is in Dutch, so I can’t tell exactly which of his findings were tainted; nevertheless, according to New Scientist, at least one of the affected studies is a widely reported one finding that disorder in a person’s environment exacerbates racial stereotyping. I first read about this study when it was picked up by the Situationist (“The Disorderly Situation of Stereotyping”); others may have read about it at io9 (“Urban Decay Causes Ethnic Prejudice”).

Given the usual state of the desks of most public interest lawyers – including mine – I guess I’m pretty thankful that these results were fabricated. I’m also thankful that the damage to the field of social psychology from this one person’s fraud is probably not too severe (according to the Nature article linked above, Stapel wasn’t yet sufficiently prominent that his work appeared in major social psychology textbooks, although he was widely cited and worked with a lot of people in his field).

Still, I’m concerned that this was not an isolated incident. To me, the fact that the extent of this fraud (in terms of the number of papers affected) exceeded that of other similar incidents in other fields (New Scientist mentions similar incidents in electronics and cancer research) just means that the field of social psychology took longer to catch on than the fields of cancer and electronics research did. If your fraud detection system is not too robust, then for every fraud you do detect there are probably numerous frauds that you haven’t yet noticed.

This is especially problematic to me because, if you’re interested in legal systems design, social psychology is the most pervasively relevant field of scientific inquiry. Judges and policymakers almost always base their decisions on how to structure legal systems at least in part on how they think people will behave in response to that structure. However, people’s intuitions about how they, or others, will act in response to any given situation are often dead wrong (see, for example, my recent post about institutional abuse). When practiced responsibly, social psychology can give policymakers a better understanding of the likely effects that their policies will have on people’s actual behavior.

And on a more personal note, as an Autistic person, I’ve used cognitive and social psychology research to get a better understanding of how people work – frequently a much better understanding of how people work than you can get from someone trying to explain their own feelings and behavior through introspection. Luckily, “people get more bigoted when the room is messy” was never a big part of my model of human behavior, and the parts of my model that are most significant (such as an understanding of social signaling and people’s tendency to understand themselves in terms of their intentions while understanding others in terms of their actions) are pretty well-established and widely replicated.

None of this can work if a significant portion of social psychology data are downright fabricated. It’s hard enough to deal with the pervasive over- and misinterpretation of results that actually exist (I’ll save this for a later post; in the meantime, you might want to check out the critiques of autism research over at the Autism and Empathy Blog to see an example of what I’m talking about). But people can critique studies for over-/misinterpretation just by reading them and observing that the experimental design and results lack conceptual validity. Since most studies don’t include raw data reports, and it’s hard to recognize fabricated data just by looking at a scatter plot, people have to just take on faith that the experimenters aren’t downright lying about what they did during the course of the experiment and what happened as a result.

I hope I’m overreacting, but it seems to me that the field is going to have to fundamentally change its peer review process to prevent this type of fraud from happening. They’re going to have to insist on reviewing not just a thorough description of how experimenters collected and analyzed their data, but also the raw data themselves, right down to any forms or computer programs used to collect them. They’ve got to put more of an emphasis on replicating results in different labs, with different researchers. They might even have to have random visits by the Institutional Review Board to the actual sites where experiments are purportedly being conducted, to make sure that they’re actually happening. It’s going to add a lot of paperwork, and it’s going to be a huge pain, but I can’t really see another option.

Time Magazine has a great article on the psychology of cover-ups in the context of the recent events at Penn State (trigger warning for discussions of sexual abuse). Here is a choice snippet:

When the actions of a group are public and visible, insiders who behave in an unacceptable way — doing things that “contravene the norms of the group,” Levine says — may actually be punished by the group more harshly than an outsider would be for the same behavior. “It’s seen as a threat to the reputation of the group,” says Levine.

In contrast, when the workings of a group are secretive and hidden — like those of a major college football team, for instance, or a political party or the Catholic priesthood — the tendency is toward protecting the group’s reputation by covering up. Levine suggests that greater transparency in organizations promotes better behavior in these situations.

The article also makes some other important observations: that people are more likely to intervene if they think that their intervention will be supported by the community around them and not met with hostility for “butting in” to issues that aren’t their business, and that people are less likely to intervene when the bad actor is a respected authority figure and the victim is a member of a marginalized group (for example, a “troubled teen”).

All of these observations are incredibly important not only to the recent Penn State case but also to the law of institutions in general. There’s an institutional bias in our society that is particularly evident in our disability services systems (see, e.g., Bruce Darling’s testimony for ADAPT (accessible PDF)), criminal justice systems, and child services systems. Although abuse and other human rights violations in these institutions are rampant (see any of the links above), many defenders of institutional services delivery will explain abuse as the work of a few “bad apples” and not a problem with the institutions themselves. These explanations have a lot of intuitive appeal to those who have never actually experienced institutionalization or tried to be a whistleblower themselves. People would like to think that they’d report abuse all the way up the institutional hierarchy and also to the police and the media, and that anyone who fails to do so must simply be a bad person who is not like them in any way.

However, as this post by Amanda Forest Vivian illustrates, it’s incredibly difficult even for highly moral individuals to report abuse in many institutional and “community” programs. Like football staff at Penn State, staff at institutional programs (and at many “community” programs) tend to form cohesive groups and are invested in protecting their reputation. Because these programs operate more or less out of sight from the rest of the community, they tend to respond to misbehavior by covering it up rather than publicly punishing their own members, as Levine noted in the Time article. Moreover, lower-level staff members often justifiably fear that whistleblowing will not actually end the abuse but instead may lead to retaliation by other staff members and supervisors (especially when the perpetrator is higher-ranking). Like McQueary at Penn State, even when a low-ranking staff member is disturbed enough to report abuse to a supervisor, they frequently do not feel empowered to follow up and report to outside authorities if the supervisor fails to take action; to do so would likely be perceived as insubordination.

This is why social sciences research on the environmental influences on social policing is so important. Unless community members and policymakers understand that certain environmental factors are perpetuating and enabling institutional abuse, they won’t be able to commit to eliminating those factors from our service delivery systems.

I’ve spent the last several months graduating from law school and moving instead of posting to Whoselaw. I hope to start posting regularly here again soon.

My first “welcome back” post is going to be this link to this lovely piece on fraud against the elderly, recently published in the Elder Law Journal (link to SSRN) and featured in the Situationist Blog. The article examines many of the cognitive biases that financial scammers exploit when they target elderly individuals and argues that education-based interventions against financial crime will be ineffective because they fail to address these biases.

This has become a personal issue for me because scammers (and also telemarketers) have been recently targeting my grandmother. Like the individuals discussed in Barnard’s piece, she is financially savvy and fiercely independent – not the type to want to listen to an educational program (“what, do you think I’m an idiot?”). Still, she talks at length to telemarketers and has been repeatedly baffled by lottery fraud letters she receives, telling her that she has won some well-known sweepstakes but must first pay “taxes” before receiving her prize.

Aside from increased enforcement, I wonder if education-based interventions would work better if senior citizens like my grandmother could envision themselves not as potential victims but rather as potential law enforcers. People targeted by scams are rarely the only ones at risk from those scams, so alerting the police even when you merely receive a solicitation for a fraudulent scheme is likely to protect other people (as long as the police actually act on this information, which I’ll discuss in a bit). My grandmother (like a lot of people) does not like to see herself as someone who needs to be protected, but will respond well if she thinks she’s protecting others less competent than herself. Framing educational programs, at least in part, as about catching criminals and protecting others is likely to attract a lot more of the independent, financially savvy types who, ironically, are the most likely to themselves be the victims of fraud. And, of course, vigilant and engaged citizens make law enforcement easier.

That said, such a program would have to be backed up with serious resources toward enforcing laws against fraud. An enforcement system that relied primarily on fines, or that only prosecuted cases of completed, big-dollar fraud, would give senior citizens an inadequate incentive to report attempted fraud. Would-be crimefighters need to believe that the police will act on their tips, and that as a result, a criminal will be “taken off the street” or otherwise prevented from victimizing others in the future.

The lottery scammers who targeted my grandmother are a good example of this: these people were conducting their scheme through the mails (from Canada, it turns out), and had indicated a return address to which “taxes” should be sent. It would not be too difficult for police to simply stake out the post office box indicated in the letter and arrest anyone who came to check it. This type of technique is routinely used for drug traffickers; why not use it on people who try to steal from our parents and grandparents? Is it simply because we think that with all the educational programs there are out there, anyone who falls for this type of thing is “stupid” enough to deserve it?

The Situationist Blog recently posted about an interesting new study on the human ability to inflict pain on others.

Dominic J. Packer, of Ohio State University, performed a statistical meta-analysis on several of the original Milgram experiments, in which experimental participants were asked to administer progressively severe electric shocks to another individual (the other person was in reality an actor who was not in fact receiving shocks). Despite the victim’s expressions of severe pain, pleas to be released, and, eventually, silence, over two-thirds of participants continued “shocking” the victim up to 450 volts. These participants were not sadistic or callous – in fact they usually showed signs of extreme distress – but were unable to resist the persistent directions of the researcher that the experiment “must” continue.

Ethical concerns prevent psychologists from conducting this type of study again, at least not in the same exact form. However, Packer was able to statistically analyze eight studies that Milgram performed several decades ago.

The meta-analysis indicated that of the participants who disobeyed, about 37% did so at 150 volts, which is when the “victim” first asked to end the study. Considering that there were 28 other potential moments where the participants could have stopped, this size of a cluster around 150 volts is very significant.

The other most common points of disobedience were at 315 volts, 300 volts, and 180 volts. However, although the overall level of disobedience varied across the eight studies, most of this variation happened at 150 volts, while the rate of disobedience at other points stayed largely the same across the different studies. Thus, a variation in the experiment that made people more likely to disobey did so by making people more likely to disobey when the learner first asked to leave, not at some other point.
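To see why a cluster like this is so striking, here’s a quick back-of-the-envelope calculation. The participant counts below are hypothetical (the paper reports percentages, not these raw numbers): suppose 100 participants disobeyed, 37 of them at the 150-volt step. If each of the 29 possible stopping points were equally likely, the chance of a cluster that large is vanishingly small.

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly from the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical illustration: 100 disobedient participants, 37 stopping
# at the 150-volt point. Null hypothesis: each of the 29 possible
# stopping points is equally likely, so p = 1/29 per point.
p_null = 1 / 29
p_value = binom_tail(100, 37, p_null)
print(p_value)  # vanishingly small under the equal-likelihood null
```

The exact p-value depends on the hypothetical counts, but the qualitative conclusion doesn’t: with an expected count of only about 3.4 participants per stopping point, a cluster of 37 is far beyond what chance alone would produce.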

But wait, there’s more: psychologist Jerry Burger, of Santa Clara University, has recently replicated Milgram’s experiment. As I pointed out above, ethical rules prohibit psychologists from performing experiments identical to Milgram’s, so Burger’s experiment ended after the 150-volt mark. As in the original experiments, a great majority of the subjects administered the 150-volt shock – despite the victim’s request to leave – and would have been willing to continue had the experiment not been stopped.

Packer calls attention in his study to its potential implications in situations where potential victims have no recognized right to leave a particular situation, such as treatment of prisoners. Since participants did not seem to respond to escalating expressions of pain, it is not reasonable to expect interrogators to stop an interrogation practice when it appears to be too painful. But the study may be even more relevant to the treatment of people (especially children) with disabilities, whose protests to abusive treatment are frequently ignored and dismissed.

It could, for example, shed light on an incident where a prank phone call led caretakers of children with disabilities to shock them dozens of times within a few hours. In that particular group home, electric shock was used as an “aversive therapy” for those children, authorized through a “substituted judgment” proceeding through which a judge decides that the child “would have consented” to the treatment were they competent to make such a choice. This is even worse than an interrogation situation, where victims’ requests to end interrogation are simply not respected; in the case of these children, at no point are the child’s protests and attempts to avoid the shock even considered the child’s own choice.

Alternately, we can imagine (rather optimistically) that in situations where people aren’t paying attention to requests to stop, they may compensate by paying attention to other factors. For example, the people who ended the experiment at 150 volts may have reasoned until that point that their victim was implicitly consenting to the shocks by not asking to be let free; these people may have been more attentive to other signals that it’s “time to stop” if they know the victim is unable to make such a request or have been told to disregard such requests as illegitimate or inauthentic. It may seem hard to imagine such a result given the widespread level of abuse against people with cognitive disabilities, but remember that even in the Milgram experiments, the majority of participants ignored the requests of an apparently competent adult to end the experiment. Thus, even if people do begin focusing on other factors when their victims are unable (or have no right) to ask them to stop, we wouldn’t necessarily expect most people to actually stop. That said, I don’t know if any studies have been done that would support or refute this theory.

Overall, these two studies emphasize the vulnerability of people whose choices, even choices to avoid pain, are disregarded or seen as not really their own. Although the choices of even perceived “competent” choice-makers are often disregarded in the face of authoritarian pressure, it is respect for those people’s choices that seems most important in causing people to resist those pressures. Take away that respect, and hope of humane treatment could grow increasingly dim.