QRock 100.7 was one of several news outlets that had fun describing this study for its readers.

Let's find out what kind of study was conducted to test the claim. The Q100.7 journalist wrote:

A new study found loud music makes us more likely to order unhealthy food when we’re dining out. A new study in Sweden found loud music in restaurants makes us more likely to choose unhealthy menu options. And we’re more likely to go with something healthy like a salad when the music ISN’T so loud.

Researchers went to a café and played music at different decibel levels to see how it affected what people ordered. Either 55 decibels, which is like background chatter or the hum from a refrigerator . . . or 70 decibels, which is closer to a vacuum cleaner.

And when they cranked it up to 70, people were 20% more likely to order something unhealthy, like a burger and fries.

They did it over the course of several days and kept getting the same results. So the study seems pretty legit.

a) OK, go: What seems to be the independent variable in this study? What were its levels? How was it operationalized?

b) What seems to be the dependent variable? How was it operationalized? Think specifically about how they might have operationalized the concept "unhealthy."

c) Do you think this study counts as an experiment or a quasi-experiment? Explain your answer.

d) This study can be called a "field study" or perhaps a "field experiment". Why?

e) To what extent can this study support the claim that loud music makes you eat bad food? Apply covariance, temporal precedence, and internal validity to your response.

f) If you were manipulating the loudness of the music for a study like this, how might you do so in order to ensure that it was the music, and not other restaurant factors, that was responsible for the increase in ordering "unhealthy" food?

g) The Q100.7 journalist argues that the study seems "pretty legit." What do you think the journalist meant by this phrase?

h) The study on food and music volume is summarized in an open-access conference abstract, published here. You might be surprised to read, contrary to the journalist's report, that the field study was conducted on only two days--with one day at 50 dB and the other at 70 dB. How does this change your thoughts about the study?

i) Conference presentations are not quite the same as peer-reviewed journal publications. Take a moment (and use your PsycINFO skills) to decide whether the authors, Biswas, Lund, and Szocs, have published this work yet in a peer-reviewed journal. Why might journalists choose to cover a story that has only been presented at a conference instead of waiting for peer review? Is this a good practice in general?

04/20/2018

The study found an estimated 12% higher rate of fatal accidents after 4:20pm on April 20. Credit: Lars Hagberg/AFP/Getty Images

Here's a study that took advantage of "4-20," an unofficial holiday that people celebrate by holding pot-smoking parties starting at 4:20pm. Here's how the quasi-experiment was described in a New York Times story:

Researchers used 25 years of data on car crashes in the United States in which at least one person died. They compared the number of fatal accidents between 4:20 p.m. and midnight on April 20 each year with accidents during the same hours one week before and one week after that date.

a) What are the "independent" and dependent variables in this study? (And why did I put independent variable in quotes?)

Here's how the journalist described the results:

Before 4:20 p.m. there was no difference between the number of fatalities on April 20 and the number on the nearby dates. But from 4:20 p.m. to midnight, there was a 12 percent increased risk of a fatal car crash on April 20 compared with the control dates.

b) Of the four quasi-experimental designs, which seems to be the best fit: Non-equivalent control group posttest only? Non-equivalent control group pretest-posttest? Interrupted time series? Non-equivalent control group interrupted time series?

c) Sketch a graph of the results described.

d) The Times reported that "The increased risk was particularly large in drivers 20 and younger." Why might the researchers have included this detail?

e) The Times's headline read, "Marijuana Use Tied to Fatal Car Crashes". What kind of claim is this? (Frequency, Association, or Cause?)

f) To what extent can these results support a causal claim about marijuana causing crashes? Apply the three causal criteria to this design and results.

04/10/2018

Legalizing marijuana is associated with lower rates of opioid prescriptions in those U.S. states. Photo: Gina Kelly/Alamy Stock Photo

Opioid addiction is a major health crisis in the United States. Deaths from overdose increased dramatically in the last 5 years. Opioid addiction sometimes starts when a person in pain is prescribed legal opioid drugs by a physician. Opioid prescriptions can also be sold illegally. For these reasons, opioid prescription rates are an indicator of opioid abuse in a particular region.

Some public health researchers have investigated whether legalizing marijuana can reduce rates of opioid use and abuse. Marijuana is an alternative for controlling chronic pain that, according to many experts, has a lower addiction risk. Recently, researchers published two studies, both with quasi-experimental designs, that tested whether legalized marijuana could lower the rates of opioid prescriptions. As in many quasi-experiments, the researchers took advantage of a real-world situation: some U.S. states have legalized marijuana and others have not.

One looked at trends in opioid prescribing under Medicaid, which covers low-income adults, between 2011 and 2016. It compared the states where [medical] marijuana laws took effect versus states without such laws....

Results showed that laws that let people use marijuana to treat specific medical conditions were associated with about a 6 percent lower rate [over the years studied] of opioid prescribing for pain. That's about 39 fewer prescriptions per 1,000 people using Medicaid.

And when states with such a law went on to also allow recreational marijuana use by adults, there was an additional drop averaging about 6 percent.

Questions:

a) What is the "independent" variable in this quasi-experiment? What is the dependent variable? Was the independent variable independent groups or within groups?

b) What makes this a quasi-independent variable?

c) Of the four quasi-experimental designs, which seems to be the best fit: Non-equivalent control group posttest only? Non-equivalent control group pretest-posttest? Interrupted time series? Non-equivalent control group interrupted time series?

d) How might you graph the results described above?

e) To what extent can these data support the causal claim that "legalizing marijuana, either for medical use or recreational use, can lower the rates of opioid prescriptions in the Medicaid system"?

a) The independent variable was whether a state had legalized marijuana or not. It was independent groups (states either had, or had not, legalized the drug). The dependent variable was the rate of opioid prescriptions through Medicaid. Another variable, somewhat difficult to discern from the journalist's description, was year of study (from 2011 to 2016).

b) This IV was not manipulated/controlled by the experimenter. The researcher did not decide which states could legalize marijuana or not.

c) This is probably best characterized as a non-equivalent control group, pretest-posttest design. There were two types of states (legalized and not) and one main outcome variable: opioid prescriptions. The prescription rate was compared over time (from 2011 to 2016), making it pretest-posttest.

d) Your y-axis should have "opioid prescriptions" and the x-axis should include the years 2011 to 2016. You could then have "States with legalization" and "States without legalization" as two different colored lines.

e) The results of the study show covariance (states with legalized marijuana had lower opioid prescription rates). The fact that they compared opioid prescriptions over time (2011 to 2016) suggests that the design is able to establish temporal precedence. Presumably (although this is not clear from the articles), 2011 represents a year before many of the marijuana laws took effect, and the 2016 data were collected after the laws had been active. As for internal validity, it's possible that states that legalize are different in systematic ways from states that do not. For example, states that legalize marijuana are more likely to be in the North and West, have lower poverty rates, and so on. However, the pretest-posttest design, in which they studied the "drop in opioid prescriptions over time" rather than the "overall rate of opioid prescriptions," helps minimize some of these concerns. As with most quasi-experiments, causation is not a slam-dunk, because the experimenter does not have full control over the independent variable.
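One way to sketch the graph described in answer (d) is as two lines over the study years. This is a minimal Python sketch, assuming matplotlib is available; the prescription rates below are invented placeholder values, chosen only to illustrate the pattern of a relative decline in legalization states, and the filename is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

years = [2011, 2012, 2013, 2014, 2015, 2016]
# Hypothetical opioid prescription rates (per 1,000 Medicaid enrollees).
# The real study reported only relative differences, not these values.
no_law = [600, 605, 610, 612, 615, 618]  # states without legalization
law = [600, 598, 590, 580, 572, 565]     # states with legalization

plt.plot(years, no_law, label="States without legalization")
plt.plot(years, law, label="States with legalization")
plt.xlabel("Year")
plt.ylabel("Opioid prescriptions per 1,000 Medicaid enrollees")
plt.legend()
plt.savefig("opioid_quasiexperiment.png")
```

Because the two lines start together in 2011 and diverge afterward, the figure also makes the pretest-posttest logic of the design visible at a glance.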

02/10/2018

People in studies who spend more time socializing, rather than time on their phones, are generally happier. Photo: Maria Taglienti-Molinari/Getty Images

Are smartphones making young people lonely, anxious and depressed? Are teens spending time on phones instead of dating, driving, or drinking? That's the argument of a new data-based book by psychologist Jean Twenge.

Several graphs presented in the CNN interview depict patterns of teenage behavior over time, from nationally representative surveys of youth. The graphs show how various healthy activities such as "hanging out with friends" dropped starting in 2012. Twenge's explanation is that 2012 is the first year that more than 50% of Americans owned smartphones. Twenge argues that instead of going out with friends, driving, or being independent, today's teenagers are staying home and connecting with friends only via Snapchat and Instagram.

a) The graph in the CNN story (see minute 1:50 to 2:05; and also here) shows several trends over time. These figures come from a study that is quasi-experimental. Which type of quasi-experiment does this seem to be?

b) Take a look at the "More likely to feel lonely" figure, at minute 2:02. (again, you can also see it here, by scrolling down) The change in loneliness after 2007 has been described as "dramatic" and "precipitous". What do you think? Specifically, look at the y-axis of the graph. How dramatic would the data look if the axis ranged from, say, 0 to 100?

c) Twenge's argument is that the rise of smartphones around 2012 is responsible for decreased social contact and increased loneliness, anxiety, and depression of youth. What might be some plausible alternative explanations for the pattern, other than smartphones? (That is, what are some internal validity threats?)

The data above are longitudinal data, collected over time. Twenge's research has also included correlational studies collected in one group of teenagers at a single point in time. This report in the Washington Post presents the results of an empirical study that found, among other things, that teens who spent more time on the Internet, texting, playing computer games, or using social media were lower in happiness.

d) Scroll down to the graph created by the Washington Post, which is titled "What makes teens happy?" You'll see that each bar represents a correlation between one use of time and teen happiness. What does a gray bar, or negative correlation, mean? (For example, what does the -0.11 correlation mean for Internet?) Sketch a little scatterplot of this correlation.

e) In the same graph, what does a blue bar, or positive correlation, mean? (For example, what does the 0.14 correlation mean for Sports or exercise?) Sketch a little scatterplot of this correlation.

f) Consider the correlation mentioned in (d). Does this correlation, on its own, allow us to conclude that time on the Internet causes lower levels of happiness? What third variables might be responsible for this negative correlation? What about temporal precedence of the two variables?

g) How might you describe the effect size of these correlations--are they weak, moderate, or strong? How do you know?

h) Consider both the longitudinal data (in Questions a, b, and c) and the correlational data (in Questions d-g). Both types of studies support the same conclusion, which makes them an example of "Pattern and Parsimony." Explain why this is the case, in your own words.
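For the scatterplot questions (d) and (e), one quick way to see what correlations of -0.11 and 0.14 actually look like is to simulate data at those values. This is a minimal Python sketch, assuming numpy and matplotlib are available; the points are simulated, not the study's actual data, and the filenames are arbitrary:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def simulate(r, n=500):
    """Draw n (x, y) pairs from a bivariate normal with correlation r."""
    cov = [[1.0, r], [r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return x, y

# Hypothetical data matching the correlations reported in the Post graph
x_net, y_net = simulate(-0.11)     # Internet time vs. happiness
x_sport, y_sport = simulate(0.14)  # Sports/exercise vs. happiness

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].scatter(x_net, y_net, s=8)
axes[0].set(title="r = -0.11 (Internet)", xlabel="Time online",
            ylabel="Happiness")
axes[1].scatter(x_sport, y_sport, s=8)
axes[1].set(title="r = 0.14 (Sports or exercise)",
            xlabel="Time exercising", ylabel="Happiness")
fig.savefig("teen_happiness_scatterplots.png")
```

Both clouds look nearly shapeless to the naked eye, which is one way to see why correlations of this size are conventionally called weak: they sit near Cohen's small-effect benchmark of r = .10.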

Instructors: You and your students might also be interested in a curvilinear relationship described in the same Post piece:

The report’s findings were not all dire: Teenagers who get a small amount of exposure to screen time, between one and five hours a week, are happier than those who get none at all. The least happy ones were those who used screens for 20 or more hours a week.

11/20/2017

The sun sets in Amarillo, TX an hour later than it does in Huntsville, AL, though they are in the same time zone. Amarillo residents get less sleep and earn less money: Is there a causal connection? Photo: Creativeedits/Wikimedia Commons

Sleep is an essential human function, and getting more sleep is associated with improved mood, cognitive performance, and physical performance. Therefore, it might make sense that sleep would improve people's productivity and ability to earn money. That's the topic of a Freakonomics episode on the "Economics of Sleep." You can read the transcript or listen to the 45-minute episode here. (The section I focus on starts around minute 10.)

Freakonomics' hosts interviewed a set of economists (including Matthew Gibson, Jeff Shrader, Dan Hamermesh, and Jeff Biddle) about their research on sleep, work hours, and income. The economists mentioned that, in order to establish a causal link between sleep and income:

What we need is something like an experiment for sleep. Almost as though we go out in the United States and force people to sleep different amounts and then watch what the outcome is on their wages.

While it is theoretically possible to conduct such an experiment, it is practically difficult to assign people to different sleep conditions for a long enough period of time to notice an impact on their wages. So the economists took an alternative path and used quasi-experimental data. In a creative twist, they compared wages at two ends of a single American time zone. The example they gave is Huntsville, AL and Amarillo, TX. Here's why. Gibson stated:

It turns out that ever since we’ve put time zones into place, we’ve basically been running just that sort of giant experiment on everyone in America.

The story continued. You'll see the transcript version quoted below:

Consider two places like Huntsville, Alabama — which is near the eastern edge of the Central Time Zone — and Amarillo, Texas, near the western edge of the Central zone. [...]

...even though Amarillo and Huntsville share a time zone, the sun sets about an hour later in Amarillo, according to the clock, and since the two cities are at roughly the same latitude as well, they get roughly the same amount of daylight too.

So you’ve got two cities on either end of a time zone, roughly the same size — just under 200,000 people each — where, according to the clock time, sunset is an hour apart. Now, what good is that to a pair of economists interested in sleep research?

GIBSON: It turns out that the human body, our sleep cycle responds more strongly to the sun than it does to the clock. People who live in Huntsville and experience this earlier sunset go to bed earlier.

GIBSON: If we plot the average bedtime for people as a function of how far east they are within a time zone, we see this very nice, clean straight line with earlier bedtime for people at the more eastern location.

But since Huntsville and Amarillo are in the same time zone, people start work at roughly the same time, which means alarm clocks go off at roughly the same time.

GIBSON: That means if you go to bed earlier in Huntsville, you sleep longer.

The economists didn't use only Huntsville and Amarillo--they also compared multiple pairs of cities around the U.S. that similarly sat at opposite ends of a single time zone. Using "city of residence" as their quasi-experimental operationalization of "amount of sleep", the economists were ready to report the results for wages:

So now Gibson and Shrader plugged in wage data for Huntsville vs. Amarillo and other pairs of cities that had a similar sleep gap.

GIBSON: We find that permanently increasing sleep by an hour per week for everybody in a city, increases the wages in that location by about 4.5 percent.

Four and a half percent — that’s a pretty good payout for just one extra hour of sleep per week. If you get an extra hour per night, Gibson and Shrader discovered — here, let me quote you their paper: “Our main result is that sleeping one extra hour per night on average increases wages by 16%, highlighting the importance of restedness to human productivity.”

Questions:

a) What is the independent variable in this time zone and wages study? What is the dependent variable?

b) Is the IV independent groups or within groups?

c) Which of the four quasi-experimental designs is this? Non-equivalent control group posttest only, Non-equivalent control group pretest-posttest, Interrupted time series, or Non-equivalent control group interrupted time series?

d) The economists asserted, "sleeping one extra hour per night on average increases wages by 16%" (italics added). What do you think? Can their study support this claim? Apply the three causal rules, especially taking note of internal validity issues that this study might have.

e) If you consider only one pair of cities, there are multiple alternative explanations, besides sleep, that can account for wage differences. Name two or three such threats (considering Huntsville and Amarillo as an example). Now consider, how might many of these internal validity threats be reduced by conducting the same analysis over many other city pairs?

f) This Freakonomics episode was aired in 2015, but the study (about time zones) they reviewed is not yet published. What do you think about that?

Answers to selected questions

a) The IV is "Hours of sleep" (but you could also call it "location on the time zone: East or West") and the DV is "Wages".

b) The IV is independent-groups.

c) Non-equivalent control group posttest only.

d & e) The results of the study support covariance: People in cities in the eastern portion of time zones get more sleep and have higher wages than people in the western portions. Temporal precedence is unclear, I think: Because the data were collected at the same time, it's not clear if the time zone came first, leading to more sleep and higher wages, or if people began to earn higher wages first, and then systematically moved eastward. (However, the second direction certainly seems less plausible than the first.)

As for internal validity, if we consider only the city pair of Huntsville and Amarillo, we could come up with several alternative explanations. The two cities have different historical trajectories and different ethnic diversities; they are in two different states that have different fiscal policies and industry bases. Perhaps Amarillo has poorer wages in general and people are losing out on sleep there because they are working more than one job. However, these internal validity threats become less of an issue when you consider multiple pairs of cities. It is less plausible that internal validity threats that apply to one city pair would also, coincidentally, apply to all the other city pairs that are at opposite ends of a time zone.

Even though the method is fairly strong, psychologists would be unlikely to make a strong causal claim simply from quasi-experimental data like these, because the independent variable is not truly manipulated. Nevertheless, the method and results of this quasi-experiment are certainly consistent with the argument that getting more sleep may be a factor in earning higher wages.

08/10/2017

To what extent does the evidence support a causal influence of vacations on happiness and stress? Photo: Syda Productions/Shutterstock

Here are some quasi-experimental and correlational studies on vacations, just in time for the end of summer. The APS website describes a few studies consistent with the argument that vacations can be good for your mental health. Here's one study by researchers Sabine Sonnentag and Jana Kühnel:

The researchers surveyed 131 teachers before and after a two-week break from school.

First, they had the teachers complete a measure of exhaustion—how emotionally drained and burned out they felt the day before heading out for vacation. The teachers then completed weekly surveys on how engaged with their work, how relaxed, and how stressed they felt for four weeks after returning from vacation.

As predicted, the results indicated that vacationing had a beneficial effect. Not only did the teachers report feeling less tired and emotionally burned out, they also reported feeling more engaged and positive about their work.

a) This is a quasi-experiment. What is the study's "independent" variable? What is/are its dependent variable(s)?

b) Would you call the design a non-equivalent groups posttest only? non-equivalent groups pretest/posttest? Interrupted time series? Or non-equivalent groups interrupted time series?

c) Consider the 12 internal validity threats in Table 11.1. Which threats can this study rule out? Which threats might still apply?

d) Sketch a graph of the results of the study, incorporating this (more negative) message:

But, these benefits were fairly short-lived, particularly for those teachers who came back to especially difficult students and heavy workloads. Within four weeks, the vacation’s positive benefits had faded and teachers were back to their initial levels of stress and emotional exhaustion.

The article also suggests that when it comes to spending money, money spent on vacations is associated with more happiness than money spent on material goods.

...psychological scientists Amit Kumar and Thomas Gilovich of Cornell University and Matthew Killingsworth of University of California, San Francisco tracked moment-to-moment data from 2,266 adults as part of a large-scale experience-sampling project. Participants received notifications from the researchers on their iPhones at random times throughout the day.

Comparing data from individual participants across different times, Gilovich and colleagues found that people were happier at times when they were thinking about a future experiential purchase, like a ski trip, than they were at times when they weren’t thinking about a purchase at all. There was no relative increase or decrease in happiness when people were thinking about a future material purchase.

e) The above study is a correlational one, with a twist. The researchers computed a correlation for each individual person, using "experience" as the unit of analysis. Given the results described above, what might a bar graph depict for a typical person in this study? (what would be on each axis, and what would the results pattern depict?)

10/20/2016

This Ferguson, MO police officer is wearing a body camera. How would we know if these cameras change behavior? St. Louis Post-Dispatch/Getty Images

This example of a quasi-experiment in the news gives us a lot to think about. The headline reads, "New research shows one big change when cops wear cameras." The story introduced research on seven police precincts in the US and the UK. Here's how the journalist introduced the work:

Cameras worn on police uniforms have been lauded as a possible solution to many of the problems facing officers in the line of duty, from violence against law enforcement to the unnecessary use of force.

The report continues:

Researchers used complaints against police as a proxy for the effect of the cameras, hypothesizing that one major reason for complaints is that cops behaved in a negative, avoidable way. (There are other reasons for complaints, the researchers acknowledge, given the emotionally charged nature of many interactions with police.)

Compared to the previous year when cameras were not worn, complaints across the seven regions fell by 98% over the 12 months of the experiment. The study encompassed nearly 1.5 million officer hours across more than 4,000 shifts.

The news article contained a bar graph, depicting that in the year before the cameras were used, the total number of police complaints was almost 1600. In the year after the cameras were used, the total complaints was fewer than 200.

a) This research used a quasi experimental design. What is the IV? What is the DV? Is the IV an independent groups variable or a within-groups variable?

b) What kind of design is this--nonequivalent control group posttest only, nonequivalent control group pretest-posttest, interrupted time series, or nonequivalent control group interrupted time series?

c) The authors of the report seem convinced that body cameras are the reason for the drop in police complaints seen from one year to the next. But what other possible explanations could there be? What threats to internal validity might apply here, and which could you rule out? (Consult Table 11.1 or Chapter 13.)

d) In a quasi-experiment, we use both the design and the results to see how close we can get to supporting a causal statement. What changes to this study's design might help you be more convinced that the study can support the claim that "body cameras reduce police complaints"?

08/10/2016

In this podium photo of the men's 400M Freestyle, does the smile of the silver medalist, China's Sun Yang, look happier or less happy than that of the bronze medalist, Italy's Gabriele Detti? Note: The happy gold medalist is Australia's Mack Horton. Photo Credit: Matt Slocum AP Photo

In honor of the 2016 Rio Olympics that are happening right now, I decided to bring up a classic study on the emotional reactions of Olympic medalists. The study was covered by Scientific American a few years ago. The study showed a counterintuitive result:

In athletic competitions there are clear winners and losers. In the Olympics, the gold medalist won the competition; the silver medalist has a slightly lower achievement, and the bronze medalist a lower achievement still. One might expect that their happiness with their performance would mirror this order, with the gold medalist being happiest, followed by the silver medalists, and then the bronze.

Psychologists Victoria Medvec and Thomas Gilovich of Cornell University, and Scott Madey of the University of Toledo think that this phenomenon can be explained by counterfactual thinking. This means that people compare their objective achievements to what “might have been.”

The most obvious counterfactual thought for the silver medalist might be to focus on almost winning gold. She would focus on the difference between coming in first place, and any other outcome. The bronze medalist, however, might focus their counterfactual thoughts downward towards fourth place. She would focus on almost not winning a medal at all.

It is because of this incongruous comparison that the bronze medalist, who is objectively worse off, would be more pleased with herself, and happier with her achievement, than the silver medalist.

The study behind this story was a quasi-experiment. As you read the journalist's study description, decide which quasi-experimental design the researchers used:

To scientifically investigate this question, the researchers took video footage of the 1992 summer Olympics in Barcelona, Spain. Specifically, they recorded the medal ceremonies and showed them to undergraduate students, as well as footage from the athletic competitions immediately following announcements of the winners. They asked them to rate the happiness displayed by each of the medalists on a 10-point scale, with 1 being “agony” and 10 being “ecstasy.”

On average, the silver medalists scored a 4.8, and the bronze medalists scored a 7.1 immediately following the announcement. Later in the day, at the medal ceremony, the silver medalists scored a 4.3 on the happiness scale, while the bronze medalists scored 5.7. Statistical analyses proved that both immediately after winning, as well as later at the medal ceremony, bronze medalists were visibly happier than the silver medalists.

Here are some questions about the study:

a) What is the independent variable in this design? What is the dependent variable? (Hint: They operationalized the dependent variable in two ways).

b) Is the independent variable a between-groups or within-groups IV? Why is the IV considered a quasi-experimental independent variable?

c) Which quasi-experimental design appears to be conducted here? Your choices are: non-equivalent control group posttest only, non-equivalent control group pretest-posttest, interrupted time-series, or non-equivalent control group interrupted time-series.

d) The researchers provide enough information for you to create a graph of the results. Try it! (Which DV will you pick to graph?) Why do you think the journalist didn't provide the happiness values for the gold medalists?

e) Quasi-experimental studies take advantage of real-world situations, but they cannot establish full experimental control. The researchers are unable to randomly assign people to win silver or bronze medals. Therefore, what confounds might be present in this design? How might the researchers have controlled for such confounds in their study?
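For question (d), the graph can be built directly from the means reported in the article (silver 4.8 and 4.3; bronze 7.1 and 5.7, on the 1-10 scale). This is one way to sketch it, assuming Python with matplotlib; the filename is arbitrary:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Mean happiness ratings from the article (1 = "agony", 10 = "ecstasy")
moments = ["Immediately after event", "Medal ceremony"]
silver = [4.8, 4.3]
bronze = [7.1, 5.7]

x = np.arange(len(moments))
width = 0.35
plt.bar(x - width / 2, silver, width, label="Silver medalists")
plt.bar(x + width / 2, bronze, width, label="Bronze medalists")
plt.xticks(x, moments)
plt.ylabel("Rated happiness (1-10)")
plt.ylim(0, 10)
plt.legend()
plt.savefig("medalist_happiness.png")
```

Fixing the y-axis at the full 1-10 range is a deliberate choice here: it keeps the bronze-over-silver gap from being visually exaggerated, the same axis issue raised in the Twenge loneliness graph above.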

09/10/2015

Why might it be difficult to study the long-term impact of quality preschool programs? Photo: Shutterstock

An NPR feature story recently covered Tulsa, Oklahoma's free preschool program, which began about 10 years ago. Like some other cities around the U.S., Tulsa introduced a free pre-K program, with the goal of helping children from all economic backgrounds get ready to succeed in school. Developmental psychologist Deborah Phillips of Georgetown University has tracked children who participated in the Tulsa program, and is especially interested in how the kids are doing as they start high school. According to the story:

"These children did show huge gains in early math and early literacy skills," says Deborah Phillips, a developmental psychologist at Georgetown University who has been overseeing the study. "They were more likely to be engaged in school, less timid in the classroom and more attentive."

Phillips says preschool gave them a good, strong boost into elementary school. Today, as eighth-graders, says Phillips, most of these kids are still doing really well.

Phillips didn't just look at grades and test scores. Her team looked at student mobility and whether kids were in advanced or special education classes. They examined retention rates and absenteeism, and they even surveyed students' attitudes about school.

Researchers then compared these eighth-graders to a large sample of Tulsa eighth- and seventh-graders who did not attend preschool. They found that those students were not doing nearly as well.

a) What kind of study is this? What are its independent and dependent variables? This is one of those studies that has multiple dependent variables. Make sure you include all of them in your answer!

Although Dr. Phillips' conclusion is that Tulsa's preschool improved kids' abilities to do well in school, the NPR piece interviewed some folks who objected. Here's one example:

....Russ Whitehurst, senior fellow with the Center on Children and Families at the Brookings Institution, a Washington, D.C., think tank....says he's looked closely at the Tulsa study and takes issue with the way researchers compared kids who were in the program with those who were not.

"What Dr. Phillips and her colleagues have done is scrounge up a bunch of kids who for whatever reason — and they don't know that reason — did not attend pre-K at all," he says.

"We don't know if they were similar to the kids who went to pre-K," Whitehurst adds. "That's why the design [raises] question marks about the ability to conclude that pre-K had the [effects] attributed by Dr. Phillips."

b) What kind of criticism is Dr. Whitehurst raising about this study? (What terms from the text can you apply here?) What, if anything, could have been done to prevent the problem? What are the practical and ethical limitations to such research?

Here's another example of a person who is skeptical of the study's results:

...[high school] principal Nanette Coleman ... says she doesn't know how many of her ninth-graders this year attended Tulsa's preschool program, and has not seen the research. But she has a hard time believing that preschool, no matter how good the program, is going to have an impact on a student 10 years later.

"They're going to struggle when they get to me because there are so many outliers that can have a student not be successful," she says. "Let me be clear," she adds, "I've never made a direct linkage between a pre-K program and their high school success."

c) On what source of information (from Chapter 2) is Principal Coleman basing her beliefs? What are some problems with that source of information?

07/10/2015

There's a fun interactive datagraphic on gallup.com's website. It's called "State of the States." You can select a polling variable, such as "overall well-being," "support for Obama," or "religiosity," and it will show you how each U.S. state scores on that variable.

Feel free to take a minute to play with the interactive right now. (I'll wait.)

I've pasted a screen shot from the "well-being" results below. Take a look at it, and consider the questions that follow.

a) In the figure above, the variable I selected was "Well being." The thermometer below indicates that darker states are higher in well-being than lighter states. Using that rule, which states are the highest in well-being? Which are the lowest?

b) You might notice that South Dakota is higher in well-being than North Dakota--their shades of green are noticeably different. In fact, you might even imagine a news story in which a reporter suggests that South Dakotans are "happier." But I want you to consider the effect size of the difference. About how much happier are South Dakotans, according to the scale?

Now consider the next screen map (below). This one shows religiosity, indicating the percentage of state residents who consider themselves "Very religious":

c) As before, the thermometer below indicates that darker states are higher in saying they are "very religious" compared to lighter states. Using that rule, what states are the highest in religiosity? Which are the lowest?

d) Take a look at the scale for this variable--what do you notice about the range for Religiosity compared to the range for well-being?

e) On the map, the states of Utah and Idaho are about the same shades of green as South and North Dakota were on the well-being variable. Indeed, the shades of green for Utah and Idaho are noticeably different. In fact, you might now imagine a news story in which a reporter suggests that Utahans are "more religious." Once again, I want you to consider the effect size of the difference. How much more religious are Utahans, according to the scale?

f) What do you think? How is Gallup using these shades of green in this interactive data map? Is their use misleading? If so, what might be better?

If you’re a research methods instructor or student and would like us to consider your guest post for everydayresearchmethods.com, please contact Dr. Morling. If, as an instructor, you write your own critical thinking questions to accompany the entry, we will credit you as a guest blogger.