
Given pervasive gridlock at the national level, state legislatures are increasingly the place where notable policy change occurs. Investigating such change is difficult because it is often hard to characterise policy change and use observable data to evaluate theoretical predictions; it is consequently unclear whether lawmaking explanations focusing on the US Congress also apply to state legislatures. We use several measures of state policy outcomes to examine lawmaking in state legislatures across nearly two decades, and we argue for using simulation studies to connect theoretical predictions to empirical specifications and to help interpret the theoretical relevance of estimated correlations. Doing so reveals that the observed lawmaking outcomes we study are most consistent with lawmaking models emphasising the importance of the chamber median and the powers of the governor rather than those that focus on the preferences of the majority party.

Whether public policy affects electoral politics is an enduring question with an elusive answer. We identify the impact of the highly contested Patient Protection and Affordable Care Act (ACA) of 2010 by exploiting cross-state variation created by the 2012 Supreme Court decision in National Federation of Independent Business v. Sebelius. We compare changes in registration and turnout following the expansion of Medicaid in January 2014 to show that counties in expansion states experience higher political participation than similar counties in nonexpansion states. Importantly, the increases we identify are concentrated in counties with the largest percentage of eligible beneficiaries. The effect on voter registration persists through the 2016 election, but an impact on voter turnout is only evident in 2014. Despite the partisan politics surrounding the ACA, a political environment that differs markedly from the social programs that produced policy feedbacks in the past, our evidence is broadly consistent with claims that social policy programs can produce some political impacts, at least in the short term.
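The county-level comparison described above follows a standard difference-in-differences logic: the change in participation in expansion-state counties is benchmarked against the change in similar nonexpansion counties. A minimal sketch of that estimator, with made-up turnout numbers rather than the paper's data, looks like:

```python
# Difference-in-differences sketch: subtract the control group's
# over-time change from the treated group's over-time change.
# All numbers below are illustrative, not from the study.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Return the difference-in-differences estimate of the treatment effect."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean turnout rates (fractions of eligible voters).
effect = did_estimate(treated_pre=0.40, treated_post=0.46,
                      control_pre=0.41, control_post=0.44)
print(round(effect, 2))  # 0.03: a three-point relative increase
```

The design assumes the two groups of counties would have trended in parallel absent the Medicaid expansion, which is why the comparison is restricted to otherwise similar counties.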

Scholars of legislative studies typically use ideal point estimates from scaling procedures to test theories of legislative politics. We contend that theory and methods may be better integrated by directly incorporating maintained and to-be-tested hypotheses in the statistical model used to estimate legislator preferences. In this view of theory and estimation, formal modeling (1) provides auxiliary assumptions that serve as constraints in the estimation process, and (2) generates testable predictions. The estimation and hypothesis testing procedure uses roll call data to evaluate the validity of theoretically derived, to-be-tested hypotheses in a world where maintained hypotheses are presumed true. We articulate the approach using the language of statistical inference (both frequentist and Bayesian). The approach is demonstrated in analyses of the well-studied Powell amendment to the federal aid-to-education bill in the 84th House and the Compromise of 1790 in the 1st House.

Theories of political accountability assume citizens use information about the performance of government to hold public officials accountable, but whether citizens actually use such information is difficult to examine directly. We take advantage of the importance of citizen-driven, performance-based accountability for education policy in Tennessee to conduct a survey experiment with 1,500 Tennesseans that identifies the effect of new information, mistaken beliefs, and differing considerations on the evaluation of public officials and policy reforms. Despite an emphasis on reporting outcomes for school accountability policies in the state, mistaken beliefs are prevalent and produce overly optimistic assessments of the institutions responsible for statewide education policy. Moreover, individuals update their assessments of these institutions in an unbiased way when provided with objective performance data about overall student performance. Providing additional information about race-related performance differences does not alter this relationship, however. Finally, support for specific policies that are intended to improve student performance is unchanged by either type of performance information; opinions about policy reforms are instead most related to race and existing partisan commitments.

After the 2012 Republican New Hampshire primary, 159 poll results were released prior to the subsequent nomination contests in the Republican presidential primary. More than two-thirds of these polls relied on interactive voice response (IVR) software to conduct the interviews. We evaluate the ability of polls to predict the vote share for the Republican candidates Romney, Santorum, and Gingrich. We find no overall difference in the average accuracy of IVR and traditional human polls, but IVR polls conducted prior to human polls are significantly poorer predictors of election outcomes than traditional human polls, even after controlling for characteristics of the states, polls, and electoral environment. These findings provide suggestive, but not conclusive, evidence that pollsters may take cues from one another given the stakes involved. If so, reported polls should not be assumed to be independent of one another, and so-called polls-of-polls will be misleadingly precise.

In “The Strategic Logic of Suicide Terrorism,” Robert Pape (2003) presents an analysis of his suicide terrorism data. He uses the data to draw inferences about how territorial occupation and religious extremism affect the decision of terrorist groups to use suicide tactics. We show that the data are incapable of supporting Pape's conclusions because he “samples on the dependent variable”: the data contain only cases in which suicide terror is used. We construct bounds (Manski, 1995) on the quantities relevant to Pape's hypotheses and show exactly how little can be learned about the relevant statistical associations from the data produced by Pape's research design.
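The worst-case bounds referenced above can be illustrated for a single probability. When the outcome is observed for only a fraction of cases, the unobserved cases could in principle all be zeros or all be ones, and the bounds simply span those two extremes. The function and numbers below are an illustrative sketch of that general Manski-style logic, not a reconstruction of the article's actual calculations:

```python
def manski_bounds(p_observed_rate, frac_observed):
    """Worst-case bounds on P(Y=1) when the outcome is seen for only a
    fraction of cases.  p_observed_rate: P(Y=1) among observed cases;
    frac_observed: share of all cases whose outcome is observed.
    Lower bound: every unobserved case is 0; upper: every one is 1."""
    lower = p_observed_rate * frac_observed
    upper = lower + (1.0 - frac_observed)
    return lower, upper

# If every group we observe uses suicide tactics (by construction of a
# dataset that samples on the dependent variable), but the data cover
# only half of the relevant groups, the probability is bounded between
# 0.5 and 1.0 -- the width of the bounds equals the missing share.
lo, hi = manski_bounds(1.0, 0.5)
print(lo, hi)  # 0.5 1.0
```

The width of the interval, `1 - frac_observed`, is exactly why sampling on the dependent variable is so damaging: the less of the population the data cover, the wider (and less informative) the bounds become.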

The study of bureaucracies and their relationship to political actors is central to understanding the policy process in the United States. Studying this aspect of American politics is difficult because theories of agency behavior, effectiveness, and control often require measures of administrative agencies' policy preferences, and appropriate measures are hard to find for a broad spectrum of agencies. We propose a method for measuring agency preferences based upon an expert survey of 82 executive agencies in existence between 1988 and 2005. We use a multirater item response model to provide a principled structure for combining subjective ratings based on scholarly and journalistic expertise with objective data on agency characteristics. We compare the resulting agency preference estimates and standard errors to existing alternative measures, discussing both the advantages and limitations of the method.

Scoring lawmakers based upon the votes they cast while serving in Congress is both commonplace and politically consequential. However, scoring legislators' voting records is not without its problems—even when performed by organizations without a specific policy agenda. A telling illustration of the impact that these scores have on political debate recently arose in the 2004 Democratic presidential primaries.

Existing preference estimation procedures do not incorporate the full structure of the spatial model of voting, as they fail to use the sequential nature of the agenda. In the maximum likelihood framework, the consequences of this omission may be far-reaching. First, information useful for the identification of the model is neglected. Specifically, information that identifies the proposal locations is ignored. Second, the dimensionality of the policy space may be incorrectly estimated. Third, preference and proposal location estimates are incorrect and difficult to interpret in terms of the spatial model. We also show that the Bayesian simulation approach to ideal point estimation (Clinton et al. 2000; Jackman 2000) may be improved through the use of information about the legislative agenda. This point is illustrated by comparing several preference estimators of the first U.S. House (1789–1791).
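The spatial model of voting underlying these estimators can be sketched in one dimension: a legislator with quadratic utility prefers whichever alternative, proposal or status quo, lies closer to her ideal point, and a probit link converts the utility difference into a vote probability (as in the Bayesian ideal point framework of Clinton et al. and Jackman cited above). The parameter values below are illustrative only:

```python
from math import erf, sqrt

def p_yea(ideal, proposal, status_quo, sigma=1.0):
    """Probability that a legislator with quadratic spatial utility
    votes yea.  Utility difference:
        -(ideal - proposal)**2 + (ideal - status_quo)**2,
    with normally distributed noise of scale sigma (probit link)."""
    diff = (ideal - status_quo) ** 2 - (ideal - proposal) ** 2
    # Standard normal CDF computed via the error function.
    return 0.5 * (1.0 + erf(diff / (sigma * sqrt(2.0))))

# A legislator at 0 facing a proposal at 0 and a status quo at 1 is
# likely to vote yea; equidistant alternatives give probability 0.5.
print(round(p_yea(0.0, 0.0, 1.0), 3))  # 0.841
print(p_yea(0.0, 0.5, -0.5))           # 0.5 (indifference)
```

The article's point is that the proposal and status quo locations in this expression are not free-floating: the legislative agenda links them sequentially, and ignoring that linkage discards information that helps identify the model.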
