...research suggests that Tetris can ease us through periods of anxiety by getting us to a blissfully engrossed mental state that psychologists call "flow."

"The state of flow is one where you're completely absorbed or engaged in some kind of activity," Sweeny explains. "You lose your self-awareness, and time is just flying by."

Here's more detail:

Sweeny and her collaborators gathered a group of more than 300 college students and told them their peers would be evaluating how attractive they were. "I know, it's kind of cruel, but we found it's a really effective way to get people stressed out," Sweeny says. While the participants awaited their attractiveness scores, the researchers had them play Tetris.

Some played a painfully slow, easy version of the game — which bored them. Some played an extremely challenging, fast version — which frustrated them. And everyone else played the classic version, which adapts to each player's individual skill level and gets them into that state of flow. [People were randomly assigned to the three groups.]

In the end, everyone experienced a degree of worry. But the third group reported slightly higher levels of positive emotions (on average, about a quarter of a point higher on a five-point scale) and slightly lower levels of negative emotions (half a point lower on a five-point scale).

"It wasn't a huge difference, but we think it's noticeable," Sweeny says. "And over time, it can add up."

Questions:

a) In this study, they decided to manipulate the conceptual variable, "degree of flow." How did they operationalize this variable?

b) What were the dependent variables in this study? (there seem to be two DVs here)

c) What was the independent variable? What were its levels?

d) Does this seem to be an experiment or a correlational study? How do you know?

e) Sketch a graph of the results.

f) The journalist mentions details about the results (e.g., "about a quarter of a point higher on a five-point scale" and "half a point lower on a five-point scale"). Which aspect of statistical validity is being discussed here?

g) What questions would you ask to decide if this study was internally valid? Which of the internal validity threats in Table 11.1 could you rule out? Which could you ask about?

h) What about the external validity of this study? How might you see if this effect might generalize to other flow-related activities (other than Tetris)?

09/10/2018

College graduates were more likely than those who'd not been to college to report they are "smarter than average." Is their perception overconfident, or not? Photo: PeopleImages/Getty Images

It seems to be conventional wisdom that people are overconfident in their own abilities. People tend to think they are nicer, smarter, and better looking than most other people. But what's the evidence? The scientist-authors of this Wall Street Journal summary explain,

The claim that "most people think they are smarter than average" is a cliche of popular psychology, but the scientific evidence for it is surprisingly thin. Most research in this area has been conducted using small samples of individuals or only with high school or college students. The most recent study that polled a representative sample of American adults on the topic was published way back in 1965.

The authors, Patrick Heck and Christopher Chabris, worked with a third colleague.

...[W]e conducted two surveys: one using traditional telephone-polling methods, the other using internet research volunteers. Altogether we asked a combined representative sample of 2,821 Americans whether they agreed or disagreed with the simple statement "I am more intelligent than the average person."

Here are some of the results:

We found that more than 50% of every subgroup of people -- young and old, white and nonwhite, male and female -- agreed that they are smarter than average. Perhaps unsurprisingly, more men exhibited overconfidence (71% said they were smarter than average) than women (only 59% agreed).

Perhaps "overconfidence" is really accuracy? Consider this pattern of results:

In our study, confidence increased with education: 73% of people with a graduate degree agreed that they are smarter than average, compared with 71% of college graduates, 62% of people with "some college" experience and just 52% of people who never attended college.

The accessible Wall Street Journal summary is paywalled, but the original empirical publication is open-access in PLOS ONE.

Questions

a) What kind of study was this? Survey/poll? Correlational? Experimental? What are its key variables?

b) The authors found that more than 50% of every subgroup of people considered themselves smarter than average. Why is this result a sign of overconfidence?

c) The authors of this piece state that their combined sample was "representative". Re-read the section on how they got their sample and then make your own assessment--is the sample representative? (i.e., how is its external validity?). What population of interest do they intend to represent?

d) Sketch a graph of this result:

73% of people with a graduate degree agreed that they are smarter than average, compared with 71% of college graduates, 62% of people with "some college" experience and just 52% of people who never attended college.
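If you'd like to rough out the graph before drawing it, here's a minimal plain-Python sketch of those four percentages (one bar per education group; the values are the ones quoted above, and proper axis labels are left to you):

```python
# Rough text sketch of the confidence-by-education result quoted above.
# Percentages are the ones reported in the article.
groups = [
    ("Graduate degree", 73),
    ("College graduate", 71),
    ("Some college", 62),
    ("No college", 52),
]

for label, pct in groups:
    bar = "#" * (pct // 2)  # one '#' per 2 percentage points
    print(f"{label:16} {bar} {pct}%")
```

This is just a scaffold for the sketch; a hand-drawn bar graph with labeled axes (education level on the x-axis, percent agreeing on the y-axis) is the real answer to question d).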

e) In concluding their article, the authors wrote, "Our study shows that many people think they are smarter than they really are, but they may not be stupid to think so." What do you think? To what extent does this study's results support this conclusion?

f) Ask a question about this study's construct, internal, external, and statistical validity.

b) What foods might be associated with your own cultural identity (or identities)?

Here are some elements of the journalist's story. NPR reported about...

...a recent study in the Journal of Experimental Social Psychology, authored by Jay Van Bavel, social psychologist at New York University and his colleagues. The researchers found that the stronger your sense of social identity, the more you are likely to enjoy the food associated with that identity. The subjects of this study were Southerners and Canadians, two groups with proud food traditions.

The first experiment, containing 103 people, found that the more strongly someone self-identifies as Southern, the more they would expect Southern food to taste good, food like fried catfish or black-eyed peas.

c) In the study above, what are the two variables? Do they seem to be manipulated or measured?

d) Given your answer to question c) is this study really an "experiment"?

e) Can this study (above) support the causal claim that "identity impacts the food you like"? What are some alternative explanations? Hint: Think about temporal precedence and third variable explanations.

Here's the description of a second study:

In a second experiment, containing 151 people, researchers also found that when Southerners were reminded of their Southernness — primed, in psychology speak — their perception of the tastiness of Southern food was even higher. That is, the more Southern a person was feeling at that moment, the better the food tasted [compared to a group who was not primed].

f) What are the two variables in the study above? Were the variables manipulated or measured?

g) Given your answer to question f), is this study really an "experiment"?

h) Can this study support the claim that "identity impacts the food you like"?

They found a similar result when taste-testing with Canadians: Canadian test subjects preferred the taste of maple syrup over honey only in trials when they were first reminded of their Canadian identity.

i) You know the drill: For the study above, what kind of study was it? What are its variables?

j) Challenge question: Can you tell whether the independent variable in the Canadian study was manipulated between groups or within groups?

In sum, it appears that two of the three studies reviewed in this NPR article were experimental, so they're more likely to support the causal claim about "identity impacting the food you like." The journalist calls attention to this manipulation of identity in this description:

The relationship between identity and food preference is not new. However, the use of priming to induce identity makes this study different from its predecessors.

"Priming is like opening a filing drawer and bringing to your attention all the things that are in the drawer," says Paul Rozin, food psychologist at University of Pennsylvania, who was not involved in the study. "You can't really change peoples' identities in a 15-minute setting, but you can make one of their identities more salient, and that's what they've done in this study."

k) What other ways might you manipulate cultural identity in an experimental design?

Good news! The empirical journal article is open-access here. When you read it, you'll see that the journalist simplified the design of the studies for her article in NPR.

05/10/2018

I'm standing at my desk as I compose this post... could that make my writing go better? Yes, according to an editorial entitled, "Standing up at your desk could make you smarter." The editorial leads with a strong causal claim and then describes three studies, each with a different design. Here's one of the studies:

A study published last week...showed that sedentary behavior is associated with reduced thickness of the medial temporal lobe, which contains the hippocampus, a brain region that is critical to learning and memory.

The researchers asked a group of 35 healthy people, ages 45 to 70, about their activity levels and the average number of hours each day spent sitting and then scanned their brains with M.R.I. They found that the thickness of their medial temporal lobe was inversely correlated with how sedentary they were; the subjects who reported sitting for longer periods had the thinnest medial temporal lobes.

a) What were the two variables in this study? Were they manipulated or measured? Was this a correlational or experimental study?

b) The author writes that the study "showed that sedentary behavior is associated with reduced thickness of the medial temporal lobe." Did he use the correct verb? Why or why not?

Here's a second study described in the editorial:

Intriguingly, you don’t even have to move much to enhance cognition; just standing will do the trick. For example, two groups of subjects were asked to complete a test while either sitting or standing [randomly assigned]. The test — called Stroop — measures selective attention. Participants are presented with conflicting stimuli, like the word “green” printed in blue ink, and asked to name the color. Subjects thinking on their feet beat those who sat by a 32-millisecond margin.

c) What are the two variables in this study? Were they manipulated or measured? Was this a correlational or experimental study?

d) Does this study support the author's claim that "you don't have to move much to enhance cognition; just standing will do the trick"? Why or why not?

e) Bonus: What kind of experiment was being described here? (Posttest only, pretest/posttest, repeated measures, or concurrent measures?) Comment, as well, on the effect size.

It’s also yet another good argument for getting rid of sitting desks in favor of standing desks for most people. For example, one study assigned a group of 34 high school freshmen to a standing desk for 27 weeks. The researchers found significant improvement in executive function and working memory by the end of the study.

f) What are the variables in this study? Were they manipulated or measured?

g) Do you think this study can support a causal claim about standing desks improving executive function and working memory?

The author added the following statement to the third study on high school freshmen:

True, there was no control group of students using a seated desk, but it’s unlikely that this change was a result of brain maturation, given the short study period.

h) What threat to internal validity has the author identified in this statement?

i) What do you think of his evaluation of this threat?

j) Of the three studies presented, which provides the strongest evidence for the claim that "standing up at your desk could make you smarter"? What do you think? On the basis of this evidence, should I keep standing here?

04/20/2018

The study found an estimated 12% higher rate of fatal accidents after 4:20pm on April 20. Credit: Lars Hagberg/AFP/Getty Images

Here's a study that took advantage of "4-20", an unofficial holiday which people celebrate by holding pot-smoking parties starting at 4:20pm. Here's how the quasi-experiment was described in a New York Times story:

Researchers used 25 years of data on car crashes in the United States in which at least one person died. They compared the number of fatal accidents between 4:20 p.m. and midnight on April 20 each year with accidents during the same hours one week before and one week after that date.

a) What are the "independent" and dependent variables in this study? (And why did I put independent variable in quotes?)

Here's how the journalist described the results:

Before 4:20 p.m. there was no difference between the number of fatalities on April 20 and the number on the nearby dates. But from 4:20 p.m. to midnight, there was a 12 percent increased risk of a fatal car crash on April 20 compared with the control dates.

b) Of the four quasi-experimental designs, which seems to be the best fit: Non-equivalent control group posttest-only design? Non-equivalent control group pretest/posttest design? Interrupted time-series design? Non-equivalent control group interrupted time-series design?

c) Sketch a graph of the results described.

d) The Times reported that "The increased risk was particularly large in drivers 20 and younger." Why might the researchers have included this detail?

e) The Times's headline read, "Marijuana Use Tied to Fatal Car Crashes". What kind of claim is this? (Frequency, Association, or Cause?)

f) To what extent can these results support a causal claim about marijuana causing crashes? Apply the three causal criteria to this design and these results.

04/10/2017

Does giving a child a sip change his or her long-term drinking habits? Photo: Tang Ming Tung/Getty Images

It's a strong causal claim: Giving kids sips of beer turns them into teenage drunks. Did the journalist get it right? Here are some quotes from the story, posted in the food website Munchies:

Those innocent tastes of Chianti at the Thanksgiving dinner table could morph your child from a sweet, sober cherub into a bleary-eyed teenage booze-guzzling ne'er-do-well.

New research in the Journal of Studies on Alcohol and Drugs has found that children who sip alcohol as youngsters have an increased likelihood of becoming drinkers by the time they reach high school. In a long-term study by Brown University of 561 students in Rhode Island, researchers found that those who had tried even small sips were a whopping five times more likely to have tried a whole beer or cocktail by the time they reached ninth grade, and four times more likely to have gotten rip-roaring drunk.

a) What keywords in this quote indicate that the journalist is making a causal claim?

b) What were the two variables studied by the researchers? Explain whether you think each one was measured or manipulated.

c) What kind of study is this claim apparently based upon--correlational or experimental?

d) Given the study's design, is the causal claim appropriate? Apply the three causal criteria.

In an interview with Munchies, the lead researcher, Kristina Jackson, mentions several possible third variables for the association:

But Jackson also believes that other factors correlate with these numbers, in addition to the "early sipper" factor. Parents' drinking habits, a family history of alcoholism, and general personality and behavioral characteristics also have strong impacts on the boozy worldviews of children and teenagers.

e) Chapter 9 readers: Do you see any evidence that the researchers controlled for these potential internal validity problems in their analyses? You might have to hunt down the original journal article to find out.

f) The journalist made a dramatic point about the statistic about kids who'd sipped beer "being four times more likely to have gotten rip-roaring drunk." Which of the four big validities is this statement about?

Even though the journalist's causal claim is probably not justified, adolescent substance use is a serious issue. The journalist supplemented the story with several frequency claims. You might be interested in some of these statistics.

Roughly 30 percent of the students said that they had tasted alcohol when in sixth grade..., mostly due to exposure from their parents while at a party, on vacation, or in other special circumstances. Of that group (the "early sippers"), 26 percent reported having consumed a full alcoholic drink by ninth grade, while only 6 percent of non-early-sippers had experienced the pleasures of an ice-cold Natural Ice or homemade Screwdriver. And at that same age (roughly 14-15 years old), 9 percent of early sippers had gotten totally trashed, while only 2 percent of those with less-loose parents had.
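As a rough check on those "five times" and "four times" multipliers, you can compute raw risk ratios from the percentages in the quote above. (These back-of-the-envelope ratios won't exactly match the article's figures, which presumably come from the study's odds ratios or adjusted estimates rather than simple proportions.)

```python
# Back-of-the-envelope risk ratios from the percentages quoted above.
# Note: the article's "five times" and "four times" figures likely reflect
# the study's (possibly adjusted) odds ratios, not these raw risk ratios.
full_drink_sippers, full_drink_nonsippers = 0.26, 0.06
drunk_sippers, drunk_nonsippers = 0.09, 0.02

rr_full_drink = full_drink_sippers / full_drink_nonsippers  # roughly 4.3
rr_drunk = drunk_sippers / drunk_nonsippers                 # roughly 4.5

print(f"Risk ratio, full drink by 9th grade: {rr_full_drink:.1f}")
print(f"Risk ratio, having been drunk:       {rr_drunk:.1f}")
```

Working the arithmetic yourself is a good habit when a journalist reports a dramatic multiplier: it tells you whether the headline number is at least in the right ballpark given the raw frequencies.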

The year 2016 brought many references to implicit and explicit racial biases, especially in politics. So you might be wondering: What does it mean to hold "implicit biases"? Why are people biased against some ethnic groups, and what can we do about it?

It turns out there is a strong research tradition concerned with measuring and correcting implicit bias. There's a series of short videos grouped under the title, What, me biased? Each presents a real-world situation relevant to racial bias and discusses a research study.

a) In the opening minute, TV host Heather McGhee poses a theory about how to reduce racism to the caller. What is the theory? How did the researchers use data to test the theory?

b) What was the independent (manipulated) variable in the study? What was the dependent variable?

c) How do you know the study was an experiment? Was it an independent groups or within groups design?

d) Sketch a graph of the result, labelling your axes mindfully.

e) Work through the theory-data cycle: Did the data support Ms. McGhee's theory, or not?

More Resources for Instructors:

There are seven videos in this series, providing other opportunities to practice research methods concepts. For example, students can practice an individualized version of the theory-data cycle (Chapter 1), where Dr. Dolly Chugh discusses the idea of an "audit" in the video, Check Our Bias to Wreck Our Bias.

[H]azard perception...involves visually scanning the road ahead for clues that a dangerous situation may be developing, such as a pedestrian getting ready to cross the street or cars up ahead starting to brake. This sounds simple enough, but research suggests that a knack for this kind of visual scanning actually takes years – even decades – to learn.

Here's a research finding quoted in the article:

[N]ovice drivers, particularly teens, are so much more accident prone compared to older, more experienced drivers. Eye-tracking studies have shown that less experienced drivers tend to look at the road right in front of them, while more experienced drivers tend to automatically look far ahead, scanning all around the road for signs of trouble.

a) Is the finding above from a correlational or experimental study? What are the two main variables in the result? If it's an experimental study, what is its design?

Here is a second research finding quoted in the article:

...research has also demonstrated that even very short interventions can lead to major improvements in driving safety.

In one California study, drivers who had just passed an on-road driving test were randomly assigned to either receive a 17-minute hazard perception training or to receive no additional training. Over the course of the following year, male drivers who received the training had an accident rate that was nearly 25% lower than that of the untrained males. However, there was no such drop in accidents for female drivers who had received the training.

b) Is the finding above from a correlational or experimental study? What are the two main variables in the result? If it's an experimental study, what is its design?

Here's a final research result:

However, unlike other driving skills, hazard perception has been empirically linked to crash risk.

c) Is the finding above from a correlational or experimental study? What are the two main variables in the result? If it's an experimental study, what is its design?

a) This is a correlational study, and the two measured variables are driver experience (or driver age) and how far ahead drivers train their eyes while driving.

b) This is an experimental study. It appears to be a posttest-only design. The independent variable is whether drivers received the 17-minute training or no training. The dependent variable is accident rate. This study also had a participant variable, gender: the training affected males but not females. Therefore, you could also consider this a factorial (IV × PV) design with an interaction.

c) This is a correlational study, and the two measured variables are skill at hazard perception and crash risk.

12/10/2016

The researchers asked students whether this element was probably linked to real news or fake news. What's the clue that tells you this story is probably not "real news"?

Fake news is in the (real) news lately. Whether you're looking at Facebook, Buzzfeed, or your online newspaper, companies may try to clickbait you into reading a story that's false. Companies may want you to read the story so that you'll be exposed to their advertising. Or a political group may want to persuade you of an extreme opinion. In some recent cases, people have read fake news stories, believed them, and then acted according to what they thought was true (here's an example).

How often do people mistake fake news for real news?

A team at Stanford University recently attempted to measure the problem in a large sample of high school students. The results of their study were summarized by the Wall Street Journal. The journalist from the WSJ reported the following:

...82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website, according to a Stanford University study of 7,804 students from middle school through college. The study, set for release Tuesday, is the biggest so far on how teens evaluate information they find online.

The study apparently showed students several examples, asking them for each one if it was a real story or fake news. You'll see an example of one of their study's stimuli in the photo to the left. You can see the other samples in the full report from Stanford's website (scroll to p. 9).

Here are some more results, reported by the WSJ:

More than two out of three middle-schoolers couldn’t see any valid reason to mistrust a post written by a bank executive arguing that young adults need more financial-planning help. And nearly four in 10 high-school students believed, based on the headline, that a photo of deformed daisies on a photo-sharing site provided strong evidence of toxic conditions near the Fukushima Daiichi nuclear plant in Japan, even though no source or location was given for the photo.

a) What kind of claim is it to say that "82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story"? (Frequency, association, or cause?) What is (are) the variable(s) in the claim?

b) In order to claim that "82% of middle schoolers" do something, you'd probably need to be sure that the study included a generalizable sample of middle schoolers. What are some ways the researchers could have obtained an externally valid sample?

c) For a frequency claim like this one, construct validity is also important. The construct validity of the Stanford study seems excellent, because the researchers asked students questions about realistic-looking mockups of online content. Reading back through the green quotes above, you'll see three different ways they measured the variable, "knowing when news is fake." What are the three ways?

I can't help but point out that in your research methods class, you will learn several media literacy skills. You're learning that journalists might not always get the details of a scientific study right--they might not even read the original article! Journalists might slap a causal claim on a correlational study. Or they might write a sensational story about a single study without reviewing the entire literature on a topic. Being a good consumer of information means you'll be able to critically evaluate media stories about science (and other topics, too).

Boys and girls send about the same number of texts every day, but girls are more likely to become compulsive texters.

Teenage girls who compulsively text see a steeper decline in their grades than their compulsive male counterparts.

a) How do you think the researchers decided who was a "compulsive texter"? If you were conducting this research, how would you conceptually define this variable? How would you operationally define this variable?

b) Sketch two simple graphs, one of each result: "boys and girls send about the same number of texts every day, but girls are more likely to become compulsive texters."

c) Now, sketch a small moderator table, similar to Table 8.6, that depicts: "Teenage girls who compulsively text see a steeper decline in their grades than their compulsive male counterparts." What is the bivariate relationship they are focusing on? What is the moderator variable?

d) Write a sentence of this form: ____ moderates the relationship between ______ and ______.

e) Why do you think gender moderates the relationship between texting and grades? Note--you might be tempted to say that gender is a moderator because girls text more or because girls get worse grades. But a moderator is not about the absolute level of texting (or the absolute level of grades); a moderator changes the relationship between the two. You should be thinking about what makes girls' grades more vulnerable to texting interruptions. What are your theories? (Click on the story to find out the researchers' theory)

Now, here's another moderator, this time with the researchers' explanation:

Compulsive texting also appears to affect girls' mental health more than boys', perhaps because girls are more prone to text about negative feelings and to ruminate on those feelings.

f) Sketch another small moderator table, similar to Table 8.6, that depicts this relationship. Write a sentence of this form: ____ moderates the relationship between ______ and ______.

g) Does the researchers' explanation for the moderator make sense to you? Why or why not?

If you’re a research methods instructor or student and would like us to consider your guest post for everydayresearchmethods.com, please contact Dr. Morling. If, as an instructor, you write your own critical thinking questions to accompany the entry, we will credit you as a guest blogger.