# generated by /homes/nber/adrepec/bin/nbrred running on mysql0
Template-Type: ReDIF-Paper 1.0
Title: Anchoring Effects in the HRS: Experimental and Nonexperimental Evidence
Classification-JEL: I10; J26
Author-Name: Michael D. Hurd
Author-Person: phu137
Number: 0219
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0219
File-URL: http://www.nber.org/papers/t0219.pdf
File-Format: application/pdf
Abstract: The Health and Retirement Study (HRS) and a number of other major household surveys use unfolding brackets to reduce item nonresponse. However, the initial entry point into a bracketing sequence is likely to act as an anchor or point of reference to the respondent: The distribution of responses among those bracketed would be influenced by the entry point. For example, when the initial entry point is high, the distribution will be shifted to the right, leading one to believe that holdings of the particular asset are greater than they truly are. This paper has two goals. The first is to analyze some experimental data on housing value from HRS wave 3 for anchoring effects. The second is to compare the distributions of assets in HRS waves 1 and 2 for evidence about any anchoring effects that may have been caused by changes in the entry points between the waves. Both the experimental data on housing values and the nonexperimental data from HRS waves 1 and 2 on assets show anchoring effects. The conclusion is that to estimate wealth change accurately in panel data sets, we need a method of correcting for anchoring effects such as random entry into the bracketing sequence.
Handle: RePEc:nbr:nberte:0219
Template-Type: ReDIF-Paper 1.0
Title: An Analysis of Sample Attrition in Panel Data: The Michigan Panel Study of Income Dynamics
Classification-JEL: C2; C3
Author-Name: John Fitzgerald
Author-Name: Peter Gottschalk
Author-Name: Robert Moffitt
Author-Person: pmo48
Number: 0220
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0220
File-URL: http://www.nber.org/papers/t0220.pdf
File-Format: application/pdf
Publication-Status: published as Fitzgerald, John, Peter Gottschalk and Robert Moffitt. "The Impact Of Attrition In The Panel Study Of Income Dynamics On Intergenerational Analysis," Journal of Human Resources, 1998, v33(2,Spring), 300-344.
Abstract: By 1989 the Michigan Panel Study of Income Dynamics (PSID) had experienced approximately 50 percent sample loss from cumulative attrition from its initial 1968 membership. We study the effect of this attrition on the unconditional distributions of several socioeconomic variables and on the estimates of several sets of regression coefficients. We provide a statistical framework for conducting tests for attrition bias that draws a sharp distinction between selection on unobservables and on observables and that shows that weighted least squares can generate consistent parameter estimates when selection is based on observables, even when they are endogenous. Our empirical analysis shows that attrition is highly selective and is concentrated among lower socioeconomic status individuals. We also show that attrition is concentrated among those with more unstable earnings, marriage, and migration histories. Nevertheless, we find that these variables explain very little of the attrition in the sample, and that the selection that occurs is moderated by regression-to-the-mean effects from selection on transitory components that fade over time. Consequently, despite the large amount of attrition, we find no strong evidence that attrition has seriously distorted the representativeness of the PSID through 1989, and considerable evidence that its cross-sectional representativeness has remained roughly intact.
Handle: RePEc:nbr:nberte:0220
Template-Type: ReDIF-Paper 1.0
Title: A Research Assistant's Guide to Random Coefficients Discrete Choice Models of Demand
Author-Name: Aviv Nevo
Author-Person: pne133
Number: 0221
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0221
File-URL: http://www.nber.org/papers/t0221.pdf
File-Format: application/pdf
Abstract: The study of differentiated-products markets is a central part of empirical industrial organization. Questions regarding market power, mergers, innovation, and valuation of new brands are addressed using cutting-edge econometric methods and relying on economic theory. Unfortunately, difficulty of use and computational costs have limited the scope of application of recent developments in one of the main methods for estimating demand for differentiated products: random coefficients discrete choice models. As our understanding of these models of demand has increased, both the difficulty and costs have been greatly reduced. This paper carefully discusses the latest innovations in these methods with the hope of (1) increasing the understanding, and therefore the trust, among researchers who have never used these methods, and (2) reducing the difficulty of use, and therefore aiding in realizing the full potential of these methods.
Handle: RePEc:nbr:nberte:0221
Template-Type: ReDIF-Paper 1.0
Title: Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approach
Classification-JEL: G12; C13
Author-Name: Yacine Ait-Sahalia
Author-Person: pai23
Number: 0222
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0222
File-URL: http://www.nber.org/papers/t0222.pdf
File-Format: application/pdf
Publication-Status: published as "Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approach", Econometrica, Vol. 70, pp. 223-262, (2002).
Abstract: When a continuous-time diffusion is observed only at discrete dates, not necessarily close together, the likelihood function of the observations is in most cases not explicitly computable. Researchers have relied on simulations of sample paths between the observation points, or numerical solutions of partial differential equations, to obtain estimates of the function to be maximized. By contrast, we construct a sequence of fully explicit functions which we show converge under very general conditions, including non-ergodicity, to the true (but unknown) likelihood function of the discretely-sampled diffusion. We document that the rate of convergence of the sequence is extremely fast for a number of examples relevant in finance. We then show that maximizing the sequence instead of the true function results in an estimator which converges to the true maximum-likelihood estimator and shares its asymptotic properties of consistency, asymptotic normality and efficiency. Applications to the valuation of derivative securities are also discussed.
Handle: RePEc:nbr:nberte:0222
Template-Type: ReDIF-Paper 1.0
Title: Overidentification Tests with Grouped Data
Author-Name: Caroline Hoxby
Author-Person: pho46
Author-Name: M. Daniele Paserman
Author-Person: ppa129
Number: 0223
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0223
File-URL: http://www.nber.org/papers/t0223.pdf
File-Format: application/pdf
Abstract: This paper examines the validity of overidentification tests and exogeneity tests in the presence of grouped data. We find that even a small intra-group correlation, when instruments do not vary within groups, may generate a substantial bias in the standard overidentification tests described in textbooks.
Handle: RePEc:nbr:nberte:0223
Template-Type: ReDIF-Paper 1.0
Title: Monotone Instrumental Variables with an Application to the Returns to Schooling
Author-Name: Charles F. Manski
Author-Person: pma111
Author-Name: John V. Pepper
Number: 0224
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0224
File-URL: http://www.nber.org/papers/t0224.pdf
File-Format: application/pdf
Publication-Status: published as Manski, Charles F. and John V. Pepper. "Monotone Instrumental Variables With An Application To The Returns To Schooling," Econometrica, 2000, v68(4,Jul), 997-1010.
Abstract: Econometric analyses of treatment response commonly use instrumental variable (IV) assumptions to identify treatment effects. Yet the credibility of IV assumptions is often a matter of considerable disagreement, with much debate about whether some covariate is or is not a "valid instrument" in an application of interest. There is therefore good reason to consider weaker but more credible assumptions. To this end, we introduce monotone instrumental variable (MIV) assumptions. A particularly interesting special case of an MIV assumption is monotone treatment selection (MTS). IV and MIV assumptions may be imposed alone or in combination with other assumptions. We study the identifying power of MIV assumptions in three informational settings: MIV alone; MIV combined with the classical linear response assumption; MIV combined with the monotone treatment response (MTR) assumption. We apply the results to the problem of inference on the returns to schooling. We analyze wage data reported by white male respondents to the National Longitudinal Survey of Youth (NLSY) and use the respondent's AFQT score as an MIV. We find that this MIV assumption has little identifying power when imposed alone. However, combining the MIV assumption with the MTR and MTS assumptions yields fairly tight bounds on two distinct measures of the returns to schooling.
Handle: RePEc:nbr:nberte:0224
Template-Type: ReDIF-Paper 1.0
Title: Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients
Classification-JEL: C6; C63
Author-Name: Lawrence J. Christiano
Author-Person: pch45
Number: 0225
Creation-Date: 1998-02
Order-URL: http://www.nber.org/papers/t0225
File-URL: http://www.nber.org/papers/t0225.pdf
File-Format: application/pdf
Abstract: I present an undetermined coefficients method for obtaining a linear approximation to the solution of a dynamic, rational expectations model. I also show how that solution can be used to compute the model's implications for impulse response functions and for second moments.
Handle: RePEc:nbr:nberte:0225
Template-Type: ReDIF-Paper 1.0
Title: Regression-Based Tests of Predictive Ability
Classification-JEL: C52; C53
Author-Name: Kenneth D. West
Author-Person: pwe16
Author-Name: Michael W. McCracken
Number: 0226
Creation-Date: 1998-03
Order-URL: http://www.nber.org/papers/t0226
File-URL: http://www.nber.org/papers/t0226.pdf
File-Format: application/pdf
Publication-Status: published as International Economic Review, Vol. 39 (1998): 817-840.
Abstract: We develop regression-based tests of hypotheses about out of sample prediction errors. Representative tests include ones for zero mean and zero correlation between a prediction error and a vector of predictors. The relevant environments are ones in which predictions depend on estimated parameters. We show that standard regression statistics generally fail to account for error introduced by estimation of these parameters. We propose computationally convenient test statistics that properly account for such error. Simulations indicate that the procedures can work well in samples of size typically available, although there sometimes are substantial size distortions.
Handle: RePEc:nbr:nberte:0226
Template-Type: ReDIF-Paper 1.0
Title: Net Health Benefits: A New Framework for the Analysis of Uncertainty in Cost-Effectiveness Analysis
Classification-JEL: I1
Author-Name: Aaron A. Stinnett
Author-Name: John Mullahy
Number: 0227
Creation-Date: 1998-03
Order-URL: http://www.nber.org/papers/t0227
File-URL: http://www.nber.org/papers/t0227.pdf
File-Format: application/pdf
Abstract: In recent years, considerable attention has been devoted to the development of statistical methods for the analysis of uncertainty in cost-effectiveness analysis, with a focus on situations in which the analyst has patient-level data on the costs and health effects of alternative interventions. To date, discussions have focused almost exclusively on addressing the practical challenges involved in estimating confidence intervals for CE ratios. However, the general approach of using confidence intervals to convey information about uncertainty around CE ratio estimates suffers from theoretical limitations that render it inappropriate in many situations. We present an alternative framework for analyzing uncertainty in the economic evaluation of health interventions (termed the 'net health benefits' approach) that is more broadly applicable and that avoids some problems of prior methods. This approach offers several practical and theoretical advantages over the analysis of CE ratios, is straightforward to apply, and highlights some important principles in the theoretical underpinnings of CEA.
Handle: RePEc:nbr:nberte:0227
Template-Type: ReDIF-Paper 1.0
Title: Much Ado About Two: Reconsidering Retransformation and the Two-Part Model in Health Economics
Classification-JEL: I1; C2
Author-Name: John Mullahy
Number: 0228
Creation-Date: 1998-03
Order-URL: http://www.nber.org/papers/t0228
File-URL: http://www.nber.org/papers/t0228.pdf
File-Format: application/pdf
Publication-Status: published as Mullahy, John. "Much Ado About Two: Reconsidering Retransformation And The Two-Part Model In Health Econometrics," Journal of Health Economics, 1998, v17(3,Jun), 247-281.
Abstract: In health economics applications involving outcomes (y) and covariates (x), it is often the case that the central inferential problems of interest involve E[y|x] and its associated partial effects or elasticities. Many such outcomes have two fundamental statistical properties: y≥0; and the outcome y=0 is observed with sufficient frequency that the zeros cannot be ignored econometrically. Common approaches to estimation in such instances include Tobit, selection, and two-part models. This paper (1) describes circumstances where the standard two-part model with homoskedastic retransformation will fail to provide consistent inferences about important policy parameters; and (2) demonstrates some alternative approaches that are likely to prove helpful in applications.
Handle: RePEc:nbr:nberte:0228
Template-Type: ReDIF-Paper 1.0
Title: Instrumental Variables Estimation of Quantile Treatment Effects
Classification-JEL: C13; C14
Author-Name: Alberto Abadie
Author-Person: pab7
Author-Name: Joshua D. Angrist
Author-Person: pan29
Author-Name: Guido W. Imbens
Number: 0229
Creation-Date: 1998-03
Order-URL: http://www.nber.org/papers/t0229
File-URL: http://www.nber.org/papers/t0229.pdf
File-Format: application/pdf
Abstract: This paper introduces an instrumental variables estimator for the effect of a binary treatment on the quantiles of potential outcomes. The quantile treatment effects (QTE) estimator accommodates exogenous covariates and reduces to quantile regression as a special case when treatment status is exogenous. Asymptotic distribution theory and computational methods are derived. QTE minimizes a piecewise linear objective function for which a local minimum can be obtained using a modified Barrodale-Roberts algorithm. The QTE estimator is illustrated by estimating the effect of childbearing on the distribution of family income.
Handle: RePEc:nbr:nberte:0229
Template-Type: ReDIF-Paper 1.0
Title: Combining Panel Data Sets with Attrition and Refreshment Samples
Author-Name: Keisuke Hirano
Author-Name: Guido W. Imbens
Author-Person: pim4
Author-Name: Geert Ridder
Author-Name: Donald B. Rubin
Number: 0230
Creation-Date: 1998-04
Order-URL: http://www.nber.org/papers/t0230
File-URL: http://www.nber.org/papers/t0230.pdf
File-Format: application/pdf
Publication-Status: published as Hirano, Keisuke, Guido W. Imbens, Geert Ridder and Donald B. Rubin. "Combining Panel Data Sets With Attrition And Refreshment Samples," Econometrica, 2001, v69(6,Dec), 1645-1659.
Abstract: In many fields researchers wish to consider statistical models that allow for more complex relationships than can be inferred using only cross-sectional data. Panel or longitudinal data where the same units are observed repeatedly at different points in time can often provide the richer data needed for such models. Although such data allows researchers to identify more complex models than cross-sectional data, missing data problems can be more severe in panels. In particular, even units who respond in initial waves of the panel may drop out in subsequent waves, so that the subsample with complete data for all waves of the panel can be less representative of the population than the original sample. Sometimes, in the hope of mitigating the effects of attrition without losing the advantages of panel data over cross-sections, panel data sets are augmented by replacing units who have dropped out with new units randomly sampled from the original population. Following Ridder (1992), who used these replacement units to test some models for attrition, we call such additional samples refreshment samples. We explore the benefits of these samples for estimating models of attrition. We describe the manner in which the presence of refreshment samples allows the researcher to test various models for attrition in panel data, including models based on the assumption that missing data are missing at random (MAR, Rubin, 1976; Little and Rubin, 1987). The main result in the paper makes precise the extent to which refreshment samples are informative about the attrition process; a class of non-ignorable missing data models can be identified without making strong distributional or functional form assumptions if refreshment samples are available.
Handle: RePEc:nbr:nberte:0230
Template-Type: ReDIF-Paper 1.0
Title: Efficient Intertemporal Allocations with Recursive Utility
Classification-JEL: D81; D61
Author-Name: Bernard Dumas
Author-Name: Raman Uppal
Author-Name: Tan Wang
Number: 0231
Creation-Date: 1998-04
Order-URL: http://www.nber.org/papers/t0231
File-URL: http://www.nber.org/papers/t0231.pdf
File-Format: application/pdf
Publication-Status: published as Dumas, Bernard, Raman Uppal and Tan Wang. "Efficient Intertemporal Allocations With Recursive Utility," Journal of Economic Theory, 2000, v93(2,Aug), 240-259.
Abstract: In this article, our objective is to determine efficient allocations in economies with multiple agents having recursive utility functions. Our main result is to show that in a multiagent economy, the problem of determining efficient allocations can be characterized in terms of a single value function (that of a social planner), rather than multiple functions (one for each investor), as has been proposed thus far (Duffie, Geoffard and Skiadas (1994)). We then show how the single value function can be identified using the familiar technique of stochastic dynamic programming. We achieve these goals by first extending to a stochastic environment Geoffard's (1996) concept of variational utility and his result that variational utility is equivalent to recursive utility, and then using these results to characterize allocations in a multiagent setting.
Handle: RePEc:nbr:nberte:0231
Template-Type: ReDIF-Paper 1.0
Title: Solutions to Linear Rational Expectations Models: A Compact Exposition
Classification-JEL: C32; C63
Author-Name: Bennett T. McCallum
Number: 0232
Creation-Date: 1998-04
Order-URL: http://www.nber.org/papers/t0232
File-URL: http://www.nber.org/papers/t0232.pdf
File-Format: application/pdf
Publication-Status: published as McCallum, Bennett T. "Solutions To Linear Rational Expectations Models: A Compact Exposition," Economics Letters, 1998, v61(2,Nov), 143-147.
Abstract: An elementary exposition is presented of a convenient and practical solution procedure for a broad class of linear rational expectations models. The undetermined-coefficient approach utilized keeps the mathematics very simple and permits consideration of alternative solution criteria.
Handle: RePEc:nbr:nberte:0232
Template-Type: ReDIF-Paper 1.0
Title: An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy: Expanded Version
Classification-JEL: E37
Author-Name: Julio J. Rotemberg
Author-Person: pro30
Author-Name: Michael Woodford
Author-Person: pwo3
Number: 0233
Creation-Date: 1998-05
Order-URL: http://www.nber.org/papers/t0233
File-URL: http://www.nber.org/papers/t0233.pdf
File-Format: application/pdf
Abstract: This paper considers a simple quantitative model of output, interest rate and inflation determination in the United States, and uses it to evaluate alternative rules by which the Fed may set interest rates. The model is derived from optimizing behavior under rational expectations, on the part of both the purchasers of goods and the sellers. The model matches the estimated responses to a monetary policy shock quite well and, once due account is taken of other disturbances, can account for our data nearly as well as an unrestricted VAR. The monetary policy rule that most reduces inflation variability (and is best on this account) requires very variable interest rates, which in turn is possible only in the case of a high average inflation rate. But even in the case of a constrained-optimal policy, that takes into account some of the costs of average inflation and constrains the variability of interest rates so as to keep average inflation low, inflation would be stabilized considerably more and output stabilized considerably less than under our estimates of current policy. Moreover, this constrained-optimal policy also allows average inflation to be much smaller. This version contains additional details of our derivations and calculations, including three technical appendices, not included in the version published in NBER Macroeconomics Annual 1997.
Handle: RePEc:nbr:nberte:0233
Template-Type: ReDIF-Paper 1.0
Title: A Simple Framework for Nonparametric Specification Testing
Classification-JEL: C14; C12
Author-Name: Glenn Ellison
Author-Person: pel10
Author-Name: Sara Fisher Ellison
Number: 0234
Creation-Date: 1998-09
Order-URL: http://www.nber.org/papers/t0234
File-URL: http://www.nber.org/papers/t0234.pdf
File-Format: application/pdf
Publication-Status: published as Journal of Econometrics, Vol. 96, no. 1 (2000): 1-23.
Abstract: This paper presents a simple framework for testing the specification of parametric conditional means. The test statistics are based on quadratic forms in the residuals of the null model. Under general assumptions the test statistics are asymptotically normal under the null. With an appropriate choice of the weight matrix, the tests are shown to be consistent and to have good local power. Specific implementations involving matrices of bin and kernel weights are discussed. Finite sample properties are explored in simulations and an application to some parametric models of gasoline demand is presented.
Handle: RePEc:nbr:nberte:0234
Template-Type: ReDIF-Paper 1.0
Title: Sorting Out Sorts
Classification-JEL: G12; C22
Author-Name: Jonathan B. Berk
Number: 0235
Creation-Date: 1998-09
Order-URL: http://www.nber.org/papers/t0235
File-URL: http://www.nber.org/papers/t0235.pdf
File-Format: application/pdf
Publication-Status: published as Journal of Finance, Vol. 55 (2000): 407-427.
Abstract: In this paper we analyze the theoretical implications of sorting data into groups and then running asset pricing tests within each group. We show that the way this procedure is implemented introduces a severe bias in favor of rejecting the model under consideration. By simply picking enough groups to sort into, even the true asset pricing model can be shown to have no explanatory power within each group.
Handle: RePEc:nbr:nberte:0235