March 2018

03/21/2018

Here's a second video in a series by Pew Research. This 5-minute clip describes some of the issues in writing good questions for an opinion poll. It basically summarizes the first part of Chapter 6 and provides some new, concrete examples.

03/20/2018

Maybe you shouldn't get into the car with this guy. Credit: Shutterstock.com

You probably know drivers who honk, tailgate, and shake their fists; you know others who give drivers space and respect. Now researchers have identified a trait the aggressive drivers might share: Narcissism.

The participants answered questions from the Narcissistic Personality Inventory, a set of questions used since 1988 to measure narcissism. This questionnaire had participants rate how strongly they agreed with items such as: “I like to be the center of attention,” or “I am an extraordinary person” on a 1 to 5 scale. They then addressed similar items about aggressive driving behavior: “I often swear when driving a car,” or “When driving my car, I easily get angry about other drivers.” ...The researchers report that the more narcissistic drivers are, the more angry and aggressive they reported becoming on the road.

a) Let's talk measurement first. How was narcissism measured? Did they use a self-report, an observational measure, or a physiological measure?

b) Now for the second variable, aggressive driving: How was this measured? Was it a self-report, an observational measure, or a physiological measure?

c) Was this a correlational or experimental study? How do you know?

d) Sketch a graph (with well-labeled axes) of the results of the study.

Next, the researchers conducted a lab-based study with university students. They measured narcissism just as they'd done before, but they measured aggressive driving differently. Here's how they measured aggressive driving in the lab:

...participants sat in the driver’s seat of a 2010 Honda Accord, surrounded on three sides by a curved projection screen. In a 15- to 25-minute driving exercise, the participants saw other computer-generated cars and were told that some of them were being operated by other study participants. (In fact, the experimenters were controlling the other vehicles.)

During the exercise, the participants encountered:

a car pulling suddenly in front of them;

a traffic jam with two 10-second full traffic stops, one after another;

a construction zone with one lane closed and the other slowed down;

a second car mimicking the human driver’s behavior; and

a traffic light that was red for 60 seconds and green for just 5 seconds.

The researchers found that the participants who scored high on narcissism measures were more likely to tailgate, speed, drive off-road, cross the center line into oncoming traffic, drive on the shoulder, honk their horn, or use “verbal aggression” or “aggressive gestures,” in the experimenters’ chaste wording.

e) In this study, how was aggressive driving measured? Did they use a self-report, an observational measure, or a physiological measure?

f) Was this a correlational or experimental study? How do you know?

g) Sketch a graph of the results of the study. Label your axes mindfully.

h) Can you think of moderators of this basic relationship? For example, might there be situations or settings for which narcissism is especially strongly linked to aggressive driving? (As you answer, consider this: Past work on narcissism has established that narcissists aren't always aggressive; they are mainly aggressive when others reject them or when they are provoked.) Create a moderator table like those seen in Chapter 8 (e.g., Figure 8.19 or Table 8.5).

i) Can you think of a mediator that explains the relationship between narcissism and aggressive driving? If so, sketch a mediator diagram like those seen in Chapter 9 (e.g., Figure 9.11 or Figure 9.13).

As you can see, these journalists (or their editors) attached extremely strong titles to their science articles! An actual scientist wouldn't describe the results of a study with such strong terms as "prove" or "They work." That's because research in science is a steady accumulation of evidence--each study teaches us a little bit more, but no study can "prove" a theory or a claim.

The "study" mentioned in the three headlines above was actually a meta-analysis of 522 clinical trials (that is, randomized controlled studies) of antidepressants. Here's a summary and interpretation according to the Neuroskeptic blog:

...the authors, Andrea Cipriani et al., conducted a meta-analysis of 522 clinical trials looking at 21 antidepressants in adults. They conclude that “all antidepressants were more effective than placebo”, but the benefits compared to placebo were “mostly modest”. Using the Standardized Mean Difference (SMD) measure of effect size, Cipriani et al. found an effect of 0.30, on a scale where 0.2 is considered ‘small’ and 0.5 ‘medium’.

a) Review: What does a meta-analysis do? Why might we value a meta-analysis over a single study?
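As you think about question (a), it may help to see the core arithmetic of a meta-analysis. A common approach is to average the studies' effect sizes, weighting each study by the inverse of its variance so that larger, more precise studies count more. Here's a minimal sketch with made-up numbers (the effect sizes and variances below are hypothetical, not from Cipriani et al.):

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic average: weight each study's
    effect size by the inverse of its sampling variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical trials; the larger trials (smaller variance)
# pull the pooled estimate toward their own effect sizes.
effects = [0.25, 0.40, 0.30]
variances = [0.01, 0.04, 0.02]
print(round(pooled_effect(effects, variances), 3))  # 0.286
```

Notice that the pooled estimate lands closest to the most precise study; that's the sense in which a meta-analysis lets the best-powered evidence speak loudest.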

b) When the journalist describes the Standardized Mean Difference (SMD), they are referring to a statistic very much like Cohen's d. As you can see, the conventions for SMD are the same as for Cohen's d. Do you agree that the effect size of 0.30 could be considered "modest" according to these conventions?
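To make the SMD concrete for question (b): Cohen's d divides the difference between two group means by their pooled standard deviation. Here's a sketch using invented numbers (the means, SDs, and sample sizes are hypothetical, chosen only so the result lands near the meta-analysis's estimate):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: (treatment mean - control mean)
    divided by the pooled standard deviation of the two groups."""
    pooled_var = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) /
                  (n_t + n_c - 2))
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical trial: drug group improves 6.0 points on a depression
# scale, placebo group improves 4.2 points, both with SD = 6.0.
d = cohens_d(6.0, 4.2, 6.0, 6.0, 100, 100)
print(round(d, 2))  # 0.3 -- "modest" by the usual small/medium/large conventions
```

In other words, an SMD of 0.30 means the groups differ by about three-tenths of a standard deviation, which is why it sits between the conventional "small" (0.2) and "medium" (0.5) benchmarks.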

c) I wrote above that "no study can 'prove' a theory or a claim." But what about a meta-analysis--do you think meta-analyses are more likely to be able to prove a theory? Are they definitive? (Why or why not?).

The Neuroskeptic criticized the media's coverage of this meta-analysis on a couple of grounds. First, they pointed out how the results of the new study are almost exactly the same as several old studies, suggesting that the new study is not particularly groundbreaking:

The thing is, “effective but only modestly” has been the established view on antidepressants for at least 10 years. Just to mention one prior study, the Turner et al. (2008) meta-analysis found the overall effect size of antidepressants to be a modest SMD=0.31 – almost exactly the same as the new estimate.

Second, the Neuroskeptic cleverly points out that, a few years ago, the media assigned the opposite headline to virtually the same result:

Cipriani et al.’s estimate of the benefit of antidepressants is also very similar to the estimate found in the notorious Kirsch et al. (2008) “antidepressants don’t work” paper! Almost exactly a decade ago, Irving Kirsch et al. found the effect of antidepressants over placebo to be SMD=0.32, a finding which was, inaccurately, greeted by headlines such as “Anti-depressants ‘no better than dummy pills’”.

d) What is a placebo, and why might it be important to use one in a study of antidepressants?

e) Why do you think the media wrote such different headlines about similar meta-analytic results?

Finally, here are some important additional comments from the Neuroskeptic article:

I’m not criticizing Cipriani et al.’s study, which is a huge achievement. It’s the largest antidepressant meta-analysis to date, including an unparalleled number of difficult-to-find unpublished studies (although both Turner et al. and Kirsch et al. did include some.) It includes a broader range of drugs than previous work, although it’s not quite comprehensive: there are no MAOIs, for instance, and in general older drugs are under-represented.

Even so, Cipriani et al. meta-analyzed the evidence on all of the most commonly prescribed drugs, and they were able to produce a comparative ranking of the different medications in terms of effectiveness and side-effects, which is likely to be useful.

f) Explain why Neuroskeptic praises Cipriani et al.'s study for its use of "difficult-to-find unpublished studies." Why is this important in meta-analysis?

If you’re a research methods instructor or student and would like us to consider your guest post for everydayresearchmethods.com, please contact Dr. Morling. If, as an instructor, you write your own critical thinking questions to accompany the entry, we will credit you as a guest blogger.