www.elsblog.org - Bringing Data and Methods to Our Legal Madness

04 August 2006

Video Game Violence in Court

On Monday, the United States District Court for the District
of Minnesota struck down the Minnesota Restricted Video Games Act. See Entertainment Software Assoc. v. Hatch, No. 06-CV-2268 (D. Minn.
2006). The Act prohibited minors under 17 from knowingly renting or purchasing a video
game carrying an ESRB rating of AO (Adults Only) or M (Mature). As of this
moment, only 23 titles carry an AO rating (including only one game of any note,
Grand Theft Auto: San Andreas),
but 989 titles carry an M rating (including my own current favorite, Splinter Cell: Chaos Theory). The
penalty imposed on a minor for violating the Act was a $25 fine. I'll focus on the use of the empirical evidence in this case.

The state relied primarily on a meta-analysis by Professor
Craig Anderson of Iowa State University, who
summarized his findings as follows:

First, as more studies of violent
video games have been conducted, the significance of violent video game effects
on key aggression and helping-related variables has become clearer. Second, the
claim (or worry) that poor methodological characteristics of some studies has
led to a false, inflated conclusion about violent video game effects is simply
wrong. Third, video game studies with better methods typically yield bigger
effects, suggesting that heightened concern about deleterious effects of
exposure to violent video games is warranted.

The magnitude of these effects is
also somewhat alarming. The best estimate of the effect size of exposure to
violent video games on aggressive behavior is about 0.26[.]

The court did not mention Anderson’s findings in any detail, but it did
find his article “completely insufficient to demonstrate an empirical, causal
link between video games and violence in minors.” (p. 6) The Eighth Circuit
actually called for demonstrating a link between violent video games and psychological harm to minors, not violence in minors. Whether or not the
court meant something different than the Eighth Circuit in its characterization
of the issue, it found Anderson’s
methodology unpersuasive:

Even assuming the methodology employed by Dr. Anderson to be
correct, (n.1) Dr. Anderson’s meta-analysis is far too slight to bear the
weight of the State’s argument.

(n.1) Dr. Anderson's meta-analysis seems to suggest that one
can take a number of studies, each of which he admits do not prove the
proposition in question, and "stack them up" until a collective proof
emerges. It is fair to say that his article does not, on its face, demonstrate
the validity of this thesis. In making this observation, the Court sees no
present need to undertake a Daubert
analysis concerning the article's admissibility -- especially when the article
itself identifies empirical flaws which keep it from actually supporting the
State’s purported interests. See Daubert
v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993).

Entertainment Software,
slip op. at 6-7.

The court clearly doubted the value of using a meta-analysis; it even sounds
dismissive of Anderson's
work. Unfortunately, the judge seemed to think a meta-analysis involves taking multiple
zeros and somehow summing them to one. After all, if one of these studies
actually makes the definitive link between violent video games and harm to
minors, then why not just rely on that study? If none of them makes the link,
then what more is there to say? Why bother with a meta-analysis?

The court’s view, however, is out of sync with the more complicated world
of empirical research. A single study that provides the definitive answer to
some question is rare, especially in the social sciences. It’s more common to
find a pile of useful studies, some of which are, much to everyone’s
frustration, contradictory. When there are not even enough studies to make a
pile, one must be cautious about relying on what is available. One point of a
meta-analysis is to avoid putting undue weight on any one study. We usually
want a variety of people researching a question, using different samples and
different methodologies. We want multiple angles on the problem -- and then we
want a sensible way to assess the collective results. A meta-analysis is a
widely accepted means of making this assessment. See generally Mark W. Lipsey and David B. Wilson, Practical Meta-Analysis (Sage 2000); Morton Hunt, How Science Takes Stock: The Story of Meta-Analysis (Russell Sage 1997).

On the topic of video game violence, there is a small pile
of research. Anderson relied on forty-six studies, and some method is needed to summarize their
findings. One approach would be to count how many studies found a link between
video game violence and psychological harm and how many did not. The side with
the biggest number then wins. But this approach is too simplistic. It ignores,
for example, that some studies likely involve larger samples and therefore
deserve more weight. Meta-analysis techniques (on which I am admittedly no
expert) attempt to go even further, assessing, for example, the average effect
size of some variable across multiple studies. Shouldn’t this be what the court
wants?
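
To make this concrete, here is a minimal sketch, in Python, of the difference between vote counting and meta-analytic pooling. The five studies below are invented for illustration (they are not Anderson's data, and his actual methods are more involved); the pooling shown is a basic fixed-effect approach using Fisher's z-transformation and inverse-variance weights.

    import math

    # (correlation r, sample size n) for five hypothetical studies --
    # invented numbers, not Anderson's forty-six studies
    studies = [(0.30, 40), (0.15, 60), (0.35, 25), (0.10, 80), (0.25, 50)]

    def fisher_z(r):
        # Fisher's z-transformation makes correlations roughly normal,
        # with variance 1 / (n - 3)
        return 0.5 * math.log((1 + r) / (1 - r))

    def inv_fisher_z(z):
        # back-transform a Fisher z value to a correlation
        return math.tanh(z)

    # Naive "vote counting": is each study significant on its own at p < .05?
    votes = sum(abs(fisher_z(r)) * math.sqrt(n - 3) > 1.96 for r, n in studies)
    print("Individually significant studies:", votes, "of", len(studies))

    # Fixed-effect pooling: weight each study by its precision (n - 3),
    # so larger studies count for more
    weights = [n - 3 for _, n in studies]
    pooled = sum((n - 3) * fisher_z(r) for r, n in studies) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))  # shrinks as studies accumulate

    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print("Pooled r = %.2f, 95%% CI [%.2f, %.2f]"
          % (inv_fisher_z(pooled), inv_fisher_z(lo), inv_fisher_z(hi)))

On these made-up numbers, not one of the five studies is significant at the .05 level, yet the pooled estimate is r = .20 with a 95% confidence interval of roughly [.07, .31] -- an interval that excludes zero. That is the statistical sense in which "stacking up" individually inconclusive studies is perfectly legitimate: each study is not a zero but a noisy measurement, and combining them shrinks the noise.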

The court offered several other objections to the study, one of which was problematic: that “the body of violent video game
literature is not sufficiently large to conduct a detailed meta-analysis of a
specific feature.” Entertainment Software,
slip op. at 6. What does this mean? A specific feature of what? Based on the
court’s discussion, it’s not at all clear. You have to look at Anderson’s study to figure it out -- and then
it’s clear the court’s objection is misplaced. Anderson identified nine methodological
weaknesses in the existing research and then divided the work into two
categories, “best practices” and “not best practices.” Research in the best
practices category avoided these weaknesses. See Anderson
at 116. The number of studies is not large enough to determine the effect of each of these nine weaknesses individually, but because Anderson broke down his results by research exhibiting best practices and not best practices, it's not clear why the court considered this issue important -- especially since the best practices research showed larger effects from exposure to violent video games.

Despite my concerns about how the empirical evidence was used
in this case, I still think the court's result was correct. Minnesota apparently offered little video-game-specific evidence beyond Anderson's study. While
his study is likely of significant value to those familiar with the literature
on video game violence, it does not provide enough information for those of us
unfamiliar with the literature to fully evaluate the findings. Anderson speaks of five outcome variables in the literature, including “aggressive
behavior (defined as behavior intended to harm another person),” Anderson at 115, and he
speaks of an effect size on aggressive behavior of .26, id.
at 120, but what exactly does this mean? How do these studies operationalize
aggressive behavior or “behavior intended to harm another person”? I don’t see how the state could meet its burden unless it explained how these studies defined the outcome variables in more
specific terms.
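
For readers similarly outside the literature, there are at least conventional (if crude) ways to get a feel for the magnitude, assuming -- my assumption, not a detail I take from the opinion -- that the .26 figure is a correlation-type effect size (r), as is common in this literature. A quick sketch in Python:

    # Back-of-the-envelope readings of a correlation-type effect size,
    # assuming the .26 reported is an r
    r = 0.26

    # share of variance in the outcome associated with exposure
    print("Variance explained (r squared): %.1f%%" % (100 * r * r))  # about 6.8%

    # Rosenthal and Rubin's binomial effect size display: an r of .26 is
    # equivalent to moving a dichotomized outcome from about 37% to 63%
    print("BESD rates: %.0f%% vs. %.0f%%" % (100 * (0.5 - r / 2), 100 * (0.5 + r / 2)))

Neither reading, however, answers the operationalization question: without knowing how "aggressive behavior" was actually measured, even a well-benchmarked effect size is hard to map onto the harms the statute targets.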

The court also said it could not tell “whether violent video
games cause violence, or whether violent individuals are attracted to violent
video games.” Entertainment Software,
slip op. at 7. Fair point. We need to know more about the experimental studies
in the literature. For example, did the researchers recruit participants
without regard to their pre-existing interest in video games? Notably, the
state conceded it could not show any sort of causal link between violent video games and harm, broadly defined. Id. at 7. But there are some relevant
experimental studies -- Anderson broke down his analysis by experimental and correlational categories -- so it
would be very helpful to know more about them.

Based on the court’s discussion, it looks like the state
failed to provide some important information. Of course, even if the state
had provided all of the necessary information to fully understand what the
empirical evidence shows, a very tough issue remains: how much evidence of how
much of an effect is sufficient? Understanding the empirical evidence does not
answer this question.

*There was also an issue in this case about an improper delegation
of authority to the ESRB, which provided an independent ground to strike down the Minnesota law.

Comments

Rob,

My question at the end doesn't really have any kind of clear answer, but your question about how often a court would feel pressured to accept the empirical evidence sounds testable.

While it would be tough with judges (the ideal), rooms full of law students are easy to come by. Find out their prior views, provide the controlling legal standard, and then provide various amounts of evidence. Maybe have three different versions of the same quasi-fictional study, one with strong findings, one with weak findings, and one in the middle. See whose conclusions change.

It would be interesting to see how many people can be dislodged from their initial views in a case like Entertainment Software Association and how much evidence it takes to do so. (There must already be some studies of this sort?)

This doesn't really answer your question, but my sense is that courts are in the habit of creating standards for empirical evidence in constitutional cases that are so slanted in one direction or another that the result is almost foreordained. I'm thinking right now of the secondary effects cases, where despite some really terrible studies, the Court was quite willing to accept the findings. On more traditional First Amendment questions, though, I think the complexity of the relationships, the number of variables involved, and the difficulty in showing causation will always give unwilling judges enough cover to say that the state has not met its burden.

This case also makes me think about David Faigman's piece, "'Normative Constitutional Fact-Finding': Exploring the Empirical Component of Constitutional Interpretation," 139 U. Pa. L. Rev. 541 (1991), where he makes a good case that the Supreme Court, when faced with pretty solid empirical evidence in _McCleskey v. Kemp_ that murderers in Georgia who killed whites were far more likely to get the death penalty, changed the standard of what evidence was needed to show discrimination.

My distilled point (which applies to Daubert as well): how often would a court ever feel pressured to accept empirical evidence it does not want to accept? While the initial question--what evidence should be good enough--is an interesting one, I think this question is the more urgent one.