...research suggests that Tetris can ease us through periods of anxiety by getting us to a blissfully engrossed mental state that psychologists call "flow."

"The state of flow is one where you're completely absorbed or engaged in some kind of activity," Sweeny explains. "You lose your self-awareness, and time is just flying by."

Here's more detail:

Sweeny and her collaborators gathered a group of more than 300 college students and told them their peers would be evaluating how attractive they were. "I know, it's kind of cruel, but we found it's a really effective way to get people stressed out," Sweeny says. While the participants awaited their attractiveness scores, the researchers had them play Tetris.

Some played a painfully slow, easy version of the game — which bored them. Some played an extremely challenging, fast version — which frustrated them. And everyone else played the classic version, which adapts to each player's individual skill level and gets them into that state of flow. [People were randomly assigned to the three groups.]

In the end, everyone experienced a degree of worry. But the third group reported slightly higher levels of positive emotions (on average, about a quarter of a point higher on a five-point scale) and slightly lower levels of negative emotions (half a point lower on a five-point scale).

"It wasn't a huge difference, but we think it's noticeable," Sweeny says. "And over time, it can add up."

Questions:

a) In this study, they decided to manipulate the conceptual variable, "degree of flow." How did they operationalize this variable?

b) What were the dependent variables in this study? (there seem to be two DVs here)

c) What was the independent variable? What were its levels?

d) Does this seem to be an experiment or a correlational study? How do you know?

e) Sketch a graph of the results.

f) The journalist mentions details about the results (e.g., "about a quarter of a point higher on a five-point scale" and "half a point lower on a five-point scale"). Which aspect of statistical validity is being discussed here?

g) What questions would you ask to decide if this study was internally valid? Which of the internal validity threats in Table 11.1 could you rule out? Which could you ask about?

h) What about the external validity of this study? How might you test whether this effect generalizes to other flow-related activities (other than Tetris)?

“We know that in humans there’s a strong correlation between cognitive health and social connections, but we don’t know if it’s having a group of friends that’s protecting people or if it’s that people with declining brain health withdraw from their human connections,” [Study researcher] Kirby said.

[The n]ew research ...found that mice housed in groups had better memories and healthier brains than animals that lived in pairs.

a) Before reading on, reflect: Why would a researcher probably need an animal model to test this question experimentally?

Here's some more detail about the experiment:

Some mice lived in pairs, which Kirby refers to as the “old-couple model.” Others were housed for three months with six other roommates, a scenario that allows for “pretty complex interactions.”

The mice were 15 months to 18 months old during the experiment – a time of significant natural memory decline in the rodent lifespan.

In tests of memory, the group-housed mice fared better.

One test challenged the mice to recognize that a toy, such as a plastic car, had moved to a new location. ...“With the pair-housed mice, they had no idea that the object had moved. The group-housed mice were much better at remembering what they’d seen before and went to the toy in a new location, ignoring another toy that had not moved,” Kirby said.

In another common maze-based memory test, mice are placed on a well-lit round table with holes, some of which lead to escape hatches. Their natural tendency is to look for the dark, unexposed and “safe” escape routes.

The “couples” mice didn’t get faster at the test when it was repeated over the course of a day. “But over the course of many days, they developed a serial-searching strategy where they checked every hole as quickly as possible. It’d be like walking as quickly as possible through each row of a parking lot to look for your car rather than trying to remember where your car actually is and walk to that spot,” Kirby said.

The group-housed mice improved with each trial, though. “They seemed to try to memorize where the escape hatches are and walk to them directly, which is the behavior we see in healthy young mice,” Kirby said. “And that tells us that they’re using the hippocampus, an area of the brain that is really important for good memory function.”

b) What was the independent variable in this study? How was it operationalized?

c) What was the dependent variable? What were the two ways it was operationalized?

d) How does this experiment help us decide which comes first--social life or better memory? (note: This is temporal precedence!)

e) Do you think the journalist is justified in generalizing this study's results from mice to older adult humans? Why or why not?

f) Chapter 3 explains how internal validity and external validity are often in a trade-off. Describe how this study with mice illustrates that trade-off.

[Researcher Eugenia South] and her colleagues wanted to see if the simple task of cleaning and greening these empty lots could have an impact on residents' mental health and well-being. So, they randomly selected 541 vacant lots [in the city of Philadelphia] and divided them into three groups.

They collaborated with the Pennsylvania Horticultural Society for the cleanup work.

The lots in one group were left untouched — this was the control group. The Pennsylvania Horticultural Society cleaned up the lots in a second group, removing the trash. And for a third group, they cleaned up the trash and existing vegetation, and planted new grass and trees. The researchers called this third set the "vacant lot greening" intervention.

Here's more:

The team surveyed residents living near the lots before and after their trial to assess their mental health and wellbeing. "We used a psychological distress scale that asked people how often they felt nervous, hopeless, depressed, restless, worthless and that everything was an effort," explains South.

The scale alone doesn't diagnose people with mental illness, but a score of 13 or higher suggests a higher prevalence of mental illness in the community, she says.

People living near the newly greened lots felt better. "We found a significant reduction in the amount of people who were feeling depressed," says South.

As one commentator noted:

Previous research has shown that green spaces are associated with better mental health, but this study is "innovative," says Rachel Morello-Frosch, a professor at the department of environmental science, policy and management at the University of California, Berkeley, who wasn't involved in the research.

"To my knowledge, this is the first intervention to test — like you would in a drug trial — by randomly allocating a treatment to see what you see," adds Morello-Frosch.

Questions

a) How do we know that this is an experiment and not a correlational study?

b) What were the Independent and Dependent variables?

c) What was the design: Posttest only? Pretest-posttest? Repeated measures? Or concurrent measures?

d) Sketch a graph of the results described above.

e) Ask at least one question each about this study's construct, internal, external, and statistical validities.

f) Because this article was published in the open-access journal JAMA Network Open, anyone can read the paper. Take a look carefully at the tables in the paper (especially Table 2). How strong do the results seem to you? Are the differences between the conditions large? Do you see improvements on all the measured variables, or just on a few of them?

g) Why does the design of this study help support the causal claim that "Replacing vacant lots with green spaces can ease depression...."? (Apply the three causal criteria.)

07/10/2018

Her school achievement later in life can be predicted from her ability to wait for a treat (or by her family's SES). Photo: Manley099/Getty Images

There's a new replication study about the famous "marshmallow study", and it's all over the popular press. You've probably heard of the original research: Kids are asked to sit alone in a room with a single marshmallow (or some other treat they like, such as pretzels). If the child can wait for up to 15 minutes until the experimenter comes back, they receive two marshmallows. But if they eat the first one early, they don't. As part of the original study, kids were tracked over several years. One of the key findings was that the longer children were able to wait at age 4, the better they were doing in school as teenagers. Psychologists have often used this study as an illustration of how self-control is related to important life outcomes.

The press coverage of this year's replication study illustrates at least two things. First, it's a nice example of multiple regression. Second, it's an example of how different media outlets assign catchy--but sometimes erroneous--headlines to the same study.

First, let's talk about the multiple regression piece. Regression analyses often try to understand a core bivariate relationship more fully. In this case, the core relationship they start with is between the two variables, "length of time kids waited at age 4" and "test performance at age 15." Here's how it was described by Payne and Sheeran in the online magazine Behavioral Scientist:

The result? Kids who resisted temptation longer on the marshmallow test had higher achievement later in life. The correlation was in the same direction as in Mischel’s early study. It was statistically significant, like the original study. The correlation was somewhat smaller, and this smaller association is probably the more accurate estimate, because the sample size in the new study was larger than the original. Still, this finding says that observing a child for seven minutes with candy can tell you something remarkable about how well the child is likely to do in high school.

a) Sketch a well-labelled scatterplot of the relationship described above. What direction will the dots slope? Will they be fairly tight to a straight line, or spread out?

b) The writers (Payne and Sheeran) suggest that a larger sample size leads to a more accurate estimate of a correlation. Can you explain why a large sample size might give a more accurate statistical estimate? (Hint: Chapter 8 talks about outliers and sample size--see Figures 8.10 and 8.11.)

Now here's more about the study:

The researchers next added a series of “control variables” using regression analysis. This statistical technique removes whatever factors the control variables and the marshmallow test have in common. These controls included measures of the child’s socioeconomic status, intelligence, personality, and behavior problems. As more and more factors were controlled for, the association between marshmallow waiting and academic achievement as a teenager became nonsignificant.

c) What's proposed above is that social class is a third variable ("C") that might be associated with both waiting time ("A") and school achievement ("B"). Using Figure 8.15, draw this proposal. Think about it, too: Why does it make sense that lower SES might go with lower waiting time (A)? Why might lower SES go with lower school achievement (B)?

d) Now create a mockup regression table that might fit the pattern of results being described above. Put the DV at the top (what is the DV?), then list the predictor variables underneath, starting with Waiting time at Age 4, and including things like Child's Socioeconomic Status and Intelligence. Which betas should be significant? Which should not?

Basically, here we have a core bivariate relationship (between wait time and later achievement), and then a critic suggests a possible third variable (SES). They used regression to see if the core relationship was still there when the third variable was controlled for. The core relationship went away, suggesting that SES was a third variable that can help explain why kids who wait longer do better in school later on.
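The control-variable logic can be made concrete with a toy simulation. In the hypothetical Python sketch below (made-up data, not the replication study's), SES is the only real cause: it drives both waiting time and later achievement, and waiting has no direct effect of its own. The raw correlation between waiting and achievement comes out clearly positive, but the partial correlation, which holds SES constant, falls to about zero.

```python
import math
import random
import statistics

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

rng = random.Random(1)
n = 2000
# Hypothetical world: SES ("C") feeds into both waiting time ("A")
# and teenage achievement ("B"); A has no direct effect on B here.
ses = [rng.gauss(0, 1) for _ in range(n)]
wait = [0.6 * c + rng.gauss(0, 1) for c in ses]
achieve = [0.6 * c + rng.gauss(0, 1) for c in ses]

r_ab = corr(wait, achieve)  # the core bivariate relationship
r_ac = corr(wait, ses)
r_bc = corr(achieve, ses)

# Partial correlation of A and B, controlling for C:
r_ab_c = (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac**2) * (1 - r_bc**2))

print(round(r_ab, 2))    # clearly positive
print(round(r_ab_c, 2))  # near zero once SES is controlled
```

A regression with SES as a control variable tells the same story: the beta for waiting time shrinks toward zero once SES is in the model.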

Next let's talk about some of the hype around this replication study. The Behavioral Scientist piece (quoted above) is one of the more balanced descriptions. Its headline was, "Try to Resist Misinterpreting the Marshmallow Test." It emphasized that the core relationship was replicated. It also explained in some detail why SES is related to self-control, and how the two probably cannot be meaningfully separated--it's a nuanced report. But other press coverage had a doomsday feel:

One person on Twitter even wrote, "The marshmallow/delayed gratification study always felt 'wrong' to me - this year it was reported to be hopelessly flawed."

Are these headlines and comments fair? Probably not. As Payne and Sheeran write in Behavioral Scientist,

The problem is that scholars have known for decades that affluence and poverty shape the ability to delay gratification. Writing in 1974, Mischel observed that waiting for the larger reward was not only a trait of the individual but also depended on people’s expectancies and experience. If researchers were unreliable in their promise to return with two marshmallows, anyone would soon learn to seize the moment and eat the treat. He illustrated this with an example of lower-class black residents in Trinidad who fared poorly on the test when it was administered by white people, who had a history of breaking their promises. Following this logic, multiple studies over the years have confirmed that people living in poverty or who experience chaotic futures tend to prefer the sure thing now over waiting for a larger reward that might never come. But if this has been known for years, where is the replication crisis?

b) What foods might be associated with your own cultural identity (or identities)?

Here are some elements of the journalist's story. NPR reported about...

...a recent study in the Journal of Experimental Social Psychology, authored by Jay Van Bavel, a social psychologist at New York University, and his colleagues. The researchers found that the stronger your sense of social identity, the more you are likely to enjoy the food associated with that identity. The subjects of this study were Southerners and Canadians, two groups with proud food traditions.

The first experiment, containing 103 people, found that the more strongly someone self-identifies as Southern, the more they would expect Southern food to taste good, food like fried catfish or black-eyed peas.

c) In the study above, what are the two variables? Do they seem to be manipulated or measured?

d) Given your answer to question c), is this study really an "experiment"?

e) Can this study (above) support the causal claim that "identity impacts the food you like"? What are some alternative explanations? Hint: Think about temporal precedence and third variable explanations.

Here's the description of a second study:

In a second experiment, containing 151 people, researchers also found that when Southerners were reminded of their Southernness — primed, in psychology speak — their perception of the tastiness of Southern food was even higher. That is, the more Southern a person was feeling at that moment, the better the food tasted [compared to a group who was not primed].

e) What are the two variables in the study above? Were the variables manipulated or measured?

f) Given your answer to question e), is this study really an "experiment"?

g) Can this study support the claim that "identity impacts the food you like"?

They found a similar result when taste-testing with Canadians, finding that Canadian test subjects only preferred the taste of maple syrup over honey in trials when they were first reminded of their Canadian identity.

h) You know the drill: For the study above, what kind of study was it? What are its variables?

i) Challenge question: Can you tell if the independent variable in the Canadian study was manipulated as between groups or within groups?

In sum, it appears that two out of the three studies reviewed by this NPR article were experimental, so they're more likely to support the causal claim about "identity impacting the food you like." The journalist calls attention to this manipulation of identity in this description:

The relationship between identity and food preference is not new. However, the use of priming to induce identity makes this study different from its predecessors.

"Priming is like opening a filing drawer and bringing to your attention all the things that are in the drawer," says Paul Rozin, food psychologist at University of Pennsylvania, who was not involved in the study. "You can't really change people's identities in a 15-minute setting, but you can make one of their identities more salient, and that's what they've done in this study."

j) What other ways might you manipulate cultural identity in an experimental design?

Good news! The empirical journal article is open-access here. When you read it, you'll see that the journalist simplified the design of the studies for her article in NPR.

05/10/2018

I'm standing at my desk as I compose this post....could that make my writing go better? Yes, according to an editorial entitled, "Standing up at your desk could make you smarter." The editorial leads with a strong causal claim and then describes three studies, each with a different design. Here's one of the studies:

A study published last week...showed that sedentary behavior is associated with reduced thickness of the medial temporal lobe, which contains the hippocampus, a brain region that is critical to learning and memory.

The researchers asked a group of 35 healthy people, ages 45 to 70, about their activity levels and the average number of hours each day spent sitting and then scanned their brains with M.R.I. They found that the thickness of their medial temporal lobe was inversely correlated with how sedentary they were; the subjects who reported sitting for longer periods had the thinnest medial temporal lobes.

a) What were the two variables in this study? Were they manipulated or measured? Was this a correlational or experimental study?

b) The author writes that the study "showed that sedentary behavior is associated with reduced thickness of the medial temporal lobe." Did he use the correct verb? Why or why not?

Here's a second study described in the editorial:

Intriguingly, you don’t even have to move much to enhance cognition; just standing will do the trick. For example, two groups of subjects were asked to complete a test while either sitting or standing [randomly assigned]. The test — called Stroop — measures selective attention. Participants are presented with conflicting stimuli, like the word “green” printed in blue ink, and asked to name the color. Subjects thinking on their feet beat those who sat by a 32-millisecond margin.

c) What are the two variables in this study? Were they manipulated or measured? Was this a correlational or experimental study?

d) Does this study support the author's claim that "you don't have to move much to enhance cognition; just standing will do the trick"? Why or why not?

e) Bonus: What kind of experiment was being described here? (Posttest only, pretest/posttest, repeated measures, or concurrent measures?) Comment, as well, on the effect size.

It’s also yet another good argument for getting rid of sitting desks in favor of standing desks for most people. For example, one study assigned a group of 34 high school freshmen to a standing desk for 27 weeks. The researchers found significant improvement in executive function and working memory by the end of the study.

f) What are the variables in this study? Were they manipulated or measured?

g) Do you think this study can support a causal claim about standing desks improving executive function and working memory?

The author added the following statement to the third study on high school freshmen:

True, there was no control group of students using a seated desk, but it’s unlikely that this change was a result of brain maturation, given the short study period.

h) What threat to internal validity has the author identified in this statement?

i) What do you think of his evaluation of this threat?

j) Of the three studies presented, which provides the strongest evidence for the claim that "standing up at your desk could make you smarter"? What do you think? On the basis of this evidence, should I keep standing here?

04/20/2018

The study found an estimated 12% higher rate of fatal accidents after 4:20pm on April 20. Credit: Lars Hagberg/AFP/Getty Images

Here's a study that took advantage of "4-20", an unofficial holiday that people celebrate by holding pot-smoking parties starting at 4:20 p.m. Here's how the quasi-experiment was described in a New York Times story:

Researchers used 25 years of data on car crashes in the United States in which at least one person died. They compared the number of fatal accidents between 4:20 p.m. and midnight on April 20 each year with accidents during the same hours one week before and one week after that date.

a) What are the "independent" and dependent variables in this study? (And why did I put independent variable in quotes?)

Here's how the journalist described the results:

Before 4:20 p.m. there was no difference between the number of fatalities on April 20 and the number on the nearby dates. But from 4:20 p.m. to midnight, there was a 12 percent increased risk of a fatal car crash on April 20 compared with the control dates.

b) Of the four quasi-experimental designs, which seems to be the best fit: Non-equivalent control group posttest-only design? Non-equivalent control group pretest-posttest design? Interrupted time-series design? Non-equivalent control group interrupted time-series design?

c) Sketch a graph of the results described.

d) The Times reported that "The increased risk was particularly large in drivers 20 and younger." Why might the researchers have included this detail?

e) The Times's headline read, "Marijuana Use Tied to Fatal Car Crashes". What kind of claim is this? (Frequency, Association, or Cause?)

f) To what extent can these results support a causal claim about marijuana causing crashes? Apply the three causal criteria to this design and results.

02/10/2018

People in studies who spend more time socializing, rather than time on their phones, are generally happier. Photo: Maria Taglienti-Molinari/Getty Images

Are smartphones making young people lonely, anxious and depressed? Are teens spending time on phones instead of dating, driving, or drinking? That's the argument of a new data-based book by psychologist Jean Twenge.

Several graphs presented in the CNN interview depict patterns of teenage behavior over time, from nationally representative surveys of youth. The graphs show how various healthy activities such as "hanging out with friends" dropped starting in 2012. Twenge's explanation is that 2012 is the first year that more than 50% of Americans owned smartphones. Twenge argues that instead of going out with friends, driving, or being independent, today's teenagers are staying home and connecting with friends only via Snapchat and Instagram.

a) The graph in the CNN story (see minute 1:50 to 2:05; and also here) shows several trends over time. These figures come from a study that is quasi-experimental. Which type of quasi-experiment does this seem to be?

b) Take a look at the "More likely to feel lonely" figure, at minute 2:02. (again, you can also see it here, by scrolling down) The change in loneliness after 2007 has been described as "dramatic" and "precipitous". What do you think? Specifically, look at the y-axis of the graph. How dramatic would the data look if the axis ranged from, say, 0 to 100?

c) Twenge's argument is that the advent of smartphones in 2012 is responsible for decreased social contact and increased loneliness, anxiety, and depression of youth. What might be some plausible alternative explanations for the pattern, other than smartphones? (That is, what are some internal validity threats?)

The data above are longitudinal data, collected over time. Twenge's research has also included correlational studies collected in one group of teenagers at a single point in time. This report in the Washington Post presents the results of an empirical study that found, among other things, that teens who spent more time on the Internet, texting, playing computer games, or using social media were lower in happiness.

d) Scroll down to the graph created by the Washington Post, which is titled "What makes teens happy?" You'll see that each bar represents a correlation between one use of time and teen happiness. What does a gray bar, or negative correlation, mean? (For example, what does the -0.11 correlation mean for Internet?) Sketch a little scatterplot of this correlation.

e) In the same graph, what does a blue bar, or positive correlation, mean? (For example, what does the 0.14 correlation mean for Sports or exercise?) Sketch a little scatterplot of this correlation.

f) Select the correlation mentioned in d. Does this correlation, on its own, allow us to conclude that time on the Internet causes lower levels of happiness? What third variables might be responsible for this negative correlation? What about temporal precedence of the two variables?

g) How might you describe the effect size of these correlations--are they weak, moderate or strong? How do you know?

h) Consider both the longitudinal data (in Questions a, b, and c) and the correlational data (in questions d-g). Both types of studies support the same conclusion, which makes them an example of "Pattern and Parsimony." Explain why this is the case, in your own words.
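To build intuition for how weak an r of -0.11 or 0.14 really is, you can generate fake data with a chosen correlation. The Python sketch below (simulated data, not the Post's) produces pairs whose population correlation is -0.11; a scatterplot of them would look like a nearly shapeless cloud with only a slight downward tilt.

```python
import math
import random
import statistics

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def simulate_pairs(r, n=5000, seed=7):
    """Generate n (x, y) pairs whose population correlation is r."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [r * xi + math.sqrt(1 - r * r) * rng.gauss(0, 1) for xi in x]
    return x, y

x, y = simulate_pairs(-0.11)  # e.g., hours on the Internet vs. happiness
print(round(corr(x, y), 2))   # recovers a weak negative correlation
```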

Instructors: You and your students might also be interested in a curvilinear relationship described in the same Post piece:

The report’s findings were not all dire: Teenagers who get a small amount of exposure to screen time, between one and five hours a week, are happier than those who get none at all. The least happy ones were those who used screens for 20 or more hours a week.

01/10/2018

When the general public critiques research, I often hear them say that the samples are "too small." It's true that sample sizes (N) in psychology research should be large. One of the outcomes of the so-called "replication crisis" is that large samples are more and more important in psychology. But why?

A common misconception--held by both students and the general public--is that large samples are important because they ensure external validity. This misconception is incorrect. External validity (that is, the ability to generalize from a sample to a population of interest) is about how a sample has been recruited, not how many people are in it (see Chapters 7 and 14). For example, say you recruited a sample of 1000 fans attending the national championship college football game. You'd have a pretty large sample, but you couldn't generalize from that sample to college students in the U.S. (for example). In fact, unless the 1000 fans were selected at random from the 70,000 fans at the game, you couldn't even generalize from this sample to "people attending the national championship football game."

If not external validity, why are large samples important? It's about accuracy of our statistical estimates. When estimating values in the population such as means or differences between means, large samples are less likely to be influenced by chance variability. For example, imagine you're estimating the mean height of kindergarteners in your local school. Now imagine that you select 5 kindergarteners at random, one of whom, by chance, turns out to be extremely tall for her age. That tall kindergartener is going to "pull" the mean estimate upwards when combined with only 4 other kids. But what if you select 25 kindergarteners instead? Now the tall kindergartener is going to be balanced out by 24 other scores, and her height will have less influence on the mean estimate.

Below is a pair of animations that illustrate this principle. They come from the data science blog R Explorations. The animations used the program R to run a simulation study over and over and over. First, they created a very large population of scores whose mean was known to be 10.0 and whose standard deviation was known to be 1.0. Then they asked the computer to draw a random sample of size 10, compute the mean of the 10 scores, and plot them. You can watch the samples appear in real time on the animation below. Here, xbar is the sample's mean and s is the sample's standard deviation. The red line represents the mean for each sample as it is drawn:
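If you don't have R handy, the same idea is easy to sketch in Python (a minimal stand-in for the blog's animations, assuming the same Normal population with mean 10.0 and standard deviation 1.0): draw many samples, record each sample's mean, and compare how much that mean wanders at N = 10 versus N = 1000.

```python
import random
import statistics

def sample_means(n, reps, mu=10.0, sigma=1.0, seed=42):
    """Draw `reps` random samples of size n from a Normal(mu, sigma)
    population and return each sample's mean (the "red line")."""
    rng = random.Random(seed)
    return [statistics.mean(rng.gauss(mu, sigma) for _ in range(n))
            for _ in range(reps)]

small = sample_means(n=10, reps=200)    # like the top animation
large = sample_means(n=1000, reps=200)  # like the bottom animation

# With N = 10 the sample means bounce around; with N = 1000 they
# cluster tightly around the true population mean of 10.0.
print(round(statistics.stdev(small), 3))
print(round(statistics.stdev(large), 3))
```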

Questions

a) First, watch the top animation, where N = 10. What do you notice about the movement of the vertical red line representing the mean in the top animation? What is it doing, and what does that represent?

b) Now watch the bottom animation, where N = 1000. What do you notice about the movement of the vertical red line representing the mean in this second animation? What is it doing, and what does that represent?

c) What do you notice about the s values of the two animations? Which animation has a steadier estimate of s?

d) Answer this one only if you've had a statistics course: Which of the two animations will have a smaller standard error? How is the standard error represented in the two animations?

e) Given the behavior of the two animations, explain why a large sample is important for research.

f) Which validity does sample size best address, if not external validity?

g) Let's tie this concept back to the "replication crisis" (or, as some are now calling it, "credibility revolution"*). When a finding in psychology has not replicated in a direct replication study, one reason might be that the original study used a small sample. Another reason might be that the replication study used a small sample. Why might the sample size of a study be linked to its replicability? Explain in your own words.

How do we know that dressing up as Batman works? Let's learn more about the study behind the catchy headline. I'll be quoting from this British Psychological Society summary of it, as well as from the original journal article in the scientific journal Child Development (paywall--only available through university libraries).

The study was conducted to test a theory about self-regulation. All of us--children or adults--have to exercise self-control to make ourselves stick to important (but sometimes boring) tasks. One strategy researchers are examining is "self-distancing," in which people view a situation from a third-person perspective--one more distant and objective--rather than a self-immersed perspective, which can be more emotional and impulsive. The research tests the hypothesis that seeing oneself as "Batman" will engage kids in this self-distanced perspective.

Now for the design of the study. The team of scientists...

recruited 180 kids aged 4 to 6 years and ...asked them to complete a boring, slow but supposedly important ten-minute computer task that involved pressing the space bar whenever they saw a picture of cheese or not pressing anything when the screen showed a cat. The children were encouraged to stay on task, but they were told they could take a break whenever they wanted and go play a game on a nearby iPad.

Some of the children were assigned to a “self-immersed condition”, akin to a control group, and before and during the task were told to reflect on how they were doing, asking themselves “Am I working hard?”. Other children were asked to reflect from a third-person perspective, asking themselves “Is James [insert child’s actual name] working hard?” Finally, the rest of the kids were in the Batman condition, in which they were asked to imagine they were either Batman, Bob The Builder, Rapunzel or Dora the Explorer and to ask themselves “Is Batman [or whichever character they were] working hard?”. Children in this last condition were given a relevant prop to help, such as Batman’s cape.

Here are the results (I've focused on the 4-year-olds here):

...those in the Batman condition spent the most time on task (...about 32 per cent...). The children in the self-immersed condition spent the least time on task (...just over 20 per cent...) and those in the third-person condition performed in between.

a) In this study, what is the independent variable? How many levels were in this IV, and what were the levels? Was the IV independent groups or within groups?

b) What was the dependent variable?

c) Sketch a well-labeled line or bar graph of the results.

d) Why do you think the researchers included the condition in which kids were asked to think about themselves in the third person?

e) Notice that almost all of the headlines and twitter comments about this study have focused on Batman. Even the researchers call it "The Batman Effect." Is that accurate?

f) Finally, think about the fact that in the Batman condition, kids not only got to pretend to be a character. They also got to make an important choice about their participation in the study (the choice among the four different options of Batman, Rapunzel, Bob the Builder, and Dora). The kids in the self-immersed and third-person conditions did not make any choices. What kind of problem might this be in the study? (Which one of the four big validities does it address?)

g) Can the study really support the claim that "Pretending to be Batman helps kids stay on task"? Apply the three causal criteria, paying special attention to the point raised in question f), above.

Note to Instructors: If you include the results for the 6-year-olds, you can also teach this as a 2x3 IVxPV design, using age (4- vs. 6-year-olds) as the participant variable. Here are the full results:

The six-year-olds spent more time on task than the four-year-olds (half the time versus about a quarter of the time). No surprise there. But across age groups, and apparently unrelated to their personal scores on mental control, memory, or empathy, those in the Batman condition spent the most time on task (about 55 per cent for the six-year-olds; about 32 per cent for the four-year-olds). The children in the self-immersed condition spent the least time on task (about 35 per cent of the time for the six-year-olds; just over 20 per cent for the four-year-olds) and those in the third-person condition performed in between.

If you’re a research methods instructor or student and would like us to consider your guest post for everydayresearchmethods.com, please contact Dr. Morling. If, as an instructor, you write your own critical thinking questions to accompany the entry, we will credit you as a guest blogger.