Activities

degree of difficulty: easy, medium, hard, very hard

requires math

requires coding

data collection

my favorites

[, ] In the chapter, I was very positive about post-stratification. However, this does not always improve the quality of estimates. Construct a situation where post-stratification can decrease the quality of estimates. (For a hint, see Thomsen (1973).)
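Before constructing your own example, it may help to see the mechanism numerically. The sketch below (all numbers made up) simulates the kind of case Thomsen (1973) points to: when the post-stratification variable is unrelated to the outcome and the stratum sample sizes are small and random, post-stratification adds variance without removing any bias, so the mean squared error of the estimate goes up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (all numbers made up): two strata with known,
# equal population shares, but the stratifying variable is unrelated
# to the outcome, so post-stratification has no bias to remove.
pop_share = np.array([0.5, 0.5])
true_mean = 10.0
n = 10  # small samples make the stratum counts unstable

raw_sq_err, ps_sq_err = [], []
for _ in range(20_000):
    stratum = rng.integers(0, 2, size=n)
    y = rng.normal(true_mean, 3.0, size=n)  # outcome ignores stratum
    raw_sq_err.append((y.mean() - true_mean) ** 2)
    # Post-stratified estimate: weight each stratum mean by its share
    # (fall back to the overall mean if a stratum happens to be empty)
    stratum_means = [y[stratum == s].mean() if (stratum == s).any()
                     else y.mean() for s in (0, 1)]
    ps = float(pop_share @ np.array(stratum_means))
    ps_sq_err.append((ps - true_mean) ** 2)

print("MSE, raw mean:        ", np.mean(raw_sq_err))
print("MSE, post-stratified: ", np.mean(ps_sq_err))
```

Here the post-stratified estimator shows the larger MSE because it pays a variance cost for the random, sometimes tiny stratum counts while gaining nothing in bias.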

[, , ] Design and conduct a non-probability survey on Amazon Mechanical Turk to ask about gun ownership and attitudes toward gun control. So that you can compare your estimates to those derived from a probability sample, please copy the question text and response options directly from a high-quality survey such as those run by the Pew Research Center.

How long does your survey take? How much does it cost? How do the demographics of your sample compare with the demographics of the US population?

What is the raw estimate of gun ownership using your sample?

Correct for the nonrepresentativeness of your sample using post-stratification or some other technique. Now what is the estimate of gun ownership?

How do your estimates compare with the latest estimate from a probability-based sample? What do you think explains the discrepancies, if there are any?
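As a concrete starting point for the adjustment step, here is a minimal post-stratification sketch; the demographic cells, population shares, and responses below are all hypothetical stand-ins for your MTurk data and the corresponding Census figures.

```python
# Hypothetical sample of (demographic cell, owns_gun); all numbers made up
sample = [
    ("18-29", 1), ("18-29", 0), ("18-29", 0),
    ("30-64", 1), ("30-64", 1), ("30-64", 0), ("30-64", 0),
    ("65+",   1),
]
pop_share = {"18-29": 0.20, "30-64": 0.60, "65+": 0.20}  # assumed Census shares

# Raw estimate: the unweighted sample mean
raw = sum(owns for _, owns in sample) / len(sample)

# Post-stratified estimate: within-cell means weighted by population shares
cells = {c: [owns for g, owns in sample if g == c] for c in pop_share}
post = sum(pop_share[c] * sum(ys) / len(ys) for c, ys in cells.items())

print(f"raw estimate: {raw:.3f}, post-stratified estimate: {post:.3f}")
```

In practice you would define cells from several demographics jointly (age by gender by education, say), which is exactly when cells get sparse and model-based approaches become attractive.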

[, , ] Goel and colleagues (2016) administered 49 multiple-choice attitudinal questions drawn from the General Social Survey (GSS) and select surveys by the Pew Research Center to a non-probability sample of respondents drawn from Amazon Mechanical Turk. They then adjusted for the non-representativeness of the data using model-based post-stratification and compared their adjusted estimates with those from the probability-based GSS and Pew surveys. Conduct the same survey on Amazon Mechanical Turk and try to replicate figure 2a and figure 2b by comparing your adjusted estimates with the estimates from the most recent rounds of the GSS and Pew surveys. (See appendix table A2 for the list of 49 questions.)

Compare and contrast your results with those from Pew and GSS.

Compare and contrast your results with those from the Mechanical Turk survey in Goel, Obeng, and Rothschild (2016).

[, , ] Many studies use self-reported measures of mobile phone use. This is an interesting setting in which researchers can compare self-reported behavior with logged behavior (see, e.g., Boase and Ling (2013)). Two common behaviors to ask about are calling and texting, and two common time frames are “yesterday” and “in the past week.”

Before collecting any data, which of the self-report measures do you think is more accurate? Why?

Recruit five of your friends to be in your survey. Please briefly summarize how these five friends were sampled. Might this sampling procedure induce specific biases in your estimates?

Ask them the following microsurvey questions:

“How many times did you use your mobile phone to call others yesterday?”

“How many text messages did you send yesterday?”

“How many times did you use your mobile phone to call others in the last seven days?”

“How many times did you use your mobile phone to send or receive text messages/SMS in the last seven days?”

Once this microsurvey has been completed, ask to check their usage data as logged by their phone or service provider. How does self-reported usage compare with the logged data? Which self-report measure is most accurate, and which is least accurate?

Now combine the data that you have collected with the data from other people in your class (if you are doing this activity for a class). With this larger dataset, repeat part (d).
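One simple way to compare self-reports with logs is to compute, for each measure, the signed bias (are people systematically under- or over-reporting?) and the mean absolute error (how far off are they, regardless of direction?). The numbers below are illustrative placeholders, not collected data.

```python
# Hypothetical data for five friends; replace with your own measurements.
# Each list is one value per friend, in the same order.
self_report = {"calls_yday": [3, 5, 0, 2, 4],
               "texts_yday": [10, 25, 5, 8, 30]}
logged      = {"calls_yday": [4, 5, 1, 2, 6],
               "texts_yday": [12, 31, 4, 11, 42]}

for measure in self_report:
    errs = [s - l for s, l in zip(self_report[measure], logged[measure])]
    bias = sum(errs) / len(errs)                  # signed: under/over-reporting
    mae = sum(abs(e) for e in errs) / len(errs)   # magnitude of error
    print(f"{measure}: bias = {bias:+.1f}, MAE = {mae:.1f}")
```

The same two summaries work unchanged on the pooled class dataset, and comparing them across the four question wordings speaks directly to part (a).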

[, ] Schuman and Presser (1996) argue that question order can matter for two types of questions: part-part questions, where two questions are at the same level of specificity (e.g., ratings of two presidential candidates), and part-whole questions, where a general question follows a more specific question (e.g., asking “How satisfied are you with your work?” followed by “How satisfied are you with your life?”).

They further characterize two types of question order effect: consistency effects occur when responses to a later question are brought closer (than they would otherwise be) to those given to an earlier question; contrast effects occur when there are greater differences between responses to two questions.

Create a pair of part-part questions that you think will have a large question order effect; a pair of part-whole questions that you think will have a large order effect; and a pair of questions whose order you think would not matter. Run a survey experiment on Amazon Mechanical Turk to test your questions.

How large a part-part effect were you able to create? Was it a consistency or contrast effect?

How large a part-whole effect were you able to create? Was it a consistency or contrast effect?

Was there a question order effect in your pair where you did not think the order would matter?
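To quantify an order effect once your data are in, compare responses to the same question across the two order conditions. The ratings below are made-up placeholders for a 1–5 scale item.

```python
# Hypothetical responses to the SAME question under two order conditions;
# replace with your MTurk data. Ratings are on a 1-5 scale.
import statistics

asked_first  = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]  # question shown first
asked_second = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]  # question shown second

effect = statistics.mean(asked_first) - statistics.mean(asked_second)
print(f"Order effect: {effect:+.2f} scale points")
```

Doing this for both questions in a pair also tells you the direction: if the gap between the two questions shrinks when they are asked together, that is a consistency effect; if it grows, a contrast effect.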

[, ] Building on the work of Schuman and Presser, Moore (2002) describes a separate dimension of question order effect: additive and subtractive effects. While contrast and consistency effects are produced as a consequence of respondents’ evaluations of the two items in relation to each other, additive and subtractive effects are produced when respondents are made more sensitive to the larger framework within which the questions are posed. Read Moore (2002), then design and run a survey experiment on MTurk to demonstrate additive or subtractive effects.

[, ] Christopher Antoun and colleagues (2015) conducted a study comparing the convenience samples obtained from four different online recruiting sources: MTurk, Craigslist, Google AdWords, and Facebook. Design a simple survey and recruit participants through at least two different online recruiting sources (these sources can be different from the four used in Antoun et al. (2015)).

Compare the cost per recruit—in terms of money and time—between different sources.

Compare the composition of the samples obtained from different sources.

Compare the quality of data between the samples. For ideas about how to measure data quality from respondents, see Schober et al. (2015).

What is your preferred source? Why?

[] In an effort to predict the results of the 2016 EU Referendum (i.e., Brexit), YouGov—an Internet-based market research firm—conducted online polls of a panel of about 800,000 respondents in the United Kingdom.

A detailed description of YouGov’s statistical model can be found at https://yougov.co.uk/news/2016/06/21/yougov-referendum-model/. Roughly speaking, YouGov partitioned voters into types based on 2015 general election vote choice, age, qualifications, gender, and date of interview, as well as the constituency in which they lived. First, they used data collected from YouGov panelists to estimate, among those who voted, the proportion of people of each voter type who intended to vote Leave. Second, they estimated the turnout of each voter type using the 2015 British Election Study (BES), a post-election face-to-face survey that validated turnout against the electoral rolls. Finally, they estimated how many people of each voter type there were in the electorate, based on the latest census and the Annual Population Survey (with some additional information from other data sources).
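To make the structure of that three-step estimator concrete, here is a toy version with entirely made-up voter types and numbers (the real model used far more types and covariates):

```python
# Toy version of the three-step estimator described above.
# All types, counts, turnout rates, and Leave shares are hypothetical.
types = {
    # type: (electorate count, est. turnout, est. P(Leave | voted))
    "young_urban": (1_000_000, 0.55, 0.30),
    "older_rural": (1_200_000, 0.75, 0.62),
    "middle_town": (1_500_000, 0.65, 0.52),
}

leave_votes = sum(n * turnout * p_leave
                  for n, turnout, p_leave in types.values())
total_votes = sum(n * turnout for n, turnout, _ in types.values())
print(f"Predicted Leave share: {leave_votes / total_votes:.1%}")
```

Note how the prediction depends multiplicatively on the turnout estimates: nudging the turnout of a Leave-leaning type up by a few points shifts the final share, which is worth keeping in mind for part (b) below.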

Three days before the vote, YouGov showed a two-point lead for Leave. On the eve of voting, the poll indicated that the result was too close to call (49/51 Remain). The final on-the-day study predicted 48/52 in favor of Remain (https://yougov.co.uk/news/2016/06/23/yougov-day-poll/). In fact, this estimate missed the final result (52/48 Leave) by four percentage points.

Use the total survey error framework discussed in this chapter to assess what could have gone wrong.

YouGov’s response after the election (https://yougov.co.uk/news/2016/06/24/brexit-follows-close-run-campaign/) explained: “This seems in a large part due to turnout—something that we have said all along would be crucial to the outcome of such a finely balanced race. Our turnout model was based, in part, on whether respondents had voted at the last general election and a turnout level above that of general elections upset the model, particularly in the North.” Does this change your answer to part (a)?

[, ] Write a simulation to illustrate each of the representation errors in figure 3.2.

Create a situation where these errors actually cancel out.

Create a situation where the errors compound each other.
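As a starting point, the sketch below simulates a hypothetical population in which the outcome is correlated with internet access, so that coverage error (only internet users are in the frame), sampling error (a random sample from the frame), and nonresponse error (response propensity rising with the outcome) each push the estimate in a visible direction. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: the outcome y is higher for internet users,
# and internet use drives both frame coverage and response propensity.
N = 100_000
internet = rng.random(N) < 0.8
y = rng.normal(50, 10, N) + 5 * internet

# Coverage error: the sampling frame includes only internet users
frame = np.flatnonzero(internet)

# Sampling error: a simple random sample from the frame
sample = rng.choice(frame, size=500, replace=False)

# Nonresponse error: people with larger y are more likely to respond
respond = rng.random(sample.size) < 1 / (1 + np.exp(-(y[sample] - 55) / 5))
respondents = sample[respond]

print("target population mean:", y.mean())
print("frame mean:            ", y[frame].mean())
print("sample mean:           ", y[sample].mean())
print("respondent mean:       ", y[respondents].mean())
```

For the follow-up parts, flipping the sign of one relationship (e.g., making nonresponse favor low-y people while coverage favors high-y people) makes the errors cancel, while keeping them aligned, as here, makes them compound.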

[, ] Blumenstock and colleagues (2015) built a machine learning model that could use digital trace data to predict survey responses. Now you are going to try the same thing with a different dataset. Kosinski, Stillwell, and Graepel (2013) found that Facebook likes can predict individual traits and attributes. Surprisingly, these predictions can be even more accurate than those made by friends and colleagues (Youyou, Kosinski, and Stillwell 2015).

Read Kosinski, Stillwell, and Graepel (2013), and replicate figure 2. Their data are available at http://mypersonality.org/

Now, replicate figure 3.

Finally, try their model on your own Facebook data: http://applymagicsauce.com/. How well does it work for you?
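The modeling pipeline in Kosinski, Stillwell, and Graepel (2013) reduces a sparse user-by-like matrix with a truncated singular value decomposition and then regresses each trait on the components. The sketch below illustrates that pipeline on synthetic data (the trait is deliberately tied to overall liking activity so the components can recover it); it is a stand-in for the restricted myPersonality data, and it uses plain least squares where the paper used logistic/linear regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_likes, k = 500, 200, 10

# Synthetic sparse 0/1 user-by-like matrix; the binary trait is tied
# to overall liking activity so the leading component can recover it.
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)
activity = likes.sum(axis=1)
trait = (activity > activity.mean()).astype(int)

# Step 1: reduce the likes matrix to k components via truncated SVD
U, S, Vt = np.linalg.svd(likes, full_matrices=False)
X = U[:, :k] * S[:k]  # per-user component scores

# Step 2: regress the trait on the components (least squares here,
# to keep the sketch dependency-free)
design = np.c_[np.ones(n_users), X]
beta, *_ = np.linalg.lstsq(design, trait, rcond=None)
acc = ((design @ beta > 0.5) == trait).mean()
print(f"In-sample accuracy: {acc:.0%}")
```

For the replication itself you would hold out users when estimating accuracy, as the paper does; in-sample accuracy, as computed here, overstates predictive performance.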