NBER Research Associate Robert F. Engle of New York University's Stern School of Business will share the 2003 Nobel Prize in Economics with Clive W. J. Granger.

Engle has been affiliated with the NBER since 1987 and is a member of the Program on Asset Pricing. He and Granger were awarded the prize for their development of statistical techniques used to measure investment risk and to track economic trends.

He now joins a long list of NBER researchers who have received the Prize, including: George Akerlof, Michael Spence, and Joseph E. Stiglitz in 2001; James J. Heckman and Daniel L. McFadden, 2000; Robert C. Merton and Myron S. Scholes, 1997; Robert E. Lucas, Jr., 1995; and Robert W. Fogel, 1993. Other NBER researchers who have won the Nobel Prize in Economics are Simon S. Kuznets, Milton Friedman, Theodore W. Schultz, George J. Stigler, and Gary S. Becker.

Augustin Landier, University of Chicago, and David Thesmar, INSEE, "Financial Contracting with Optimistic Entrepreneurs: Theory and Evidence"
Discussant: Jeremy C. Stein, NBER and Harvard University

There has been comparatively little analysis to date of the effects of changes in marginal tax rates on entrepreneurial entry. Yet the elasticity of entrepreneurial decisions with respect to tax changes is likely to be greater than with respect to decisions about hours worked, and recent research has linked entrepreneurship, mobility, and household wealth accumulation. Both the level and the progressivity of tax rates can affect decisions about risky activities. The tax system offers insurance for taking risk because taxes depend on outcomes; however, asymmetric taxes on different outcomes, such as progressive rates, may discourage risk taking. Using the Panel Study of Income Dynamics for 1979-93, Gentry and Hubbard incorporate both of these effects of the tax system in estimating the probability that people enter self-employment. While the level of the marginal tax rate does not affect entry into self-employment in a consistent manner across specifications, the progressivity of marginal tax rates does discourage entry into self-employment and into business ownership. The estimates of the effects of the convexity of the tax schedule on entrepreneurial entry are rather large. For example, Gentry and Hubbard estimate that the Omnibus Budget Reconciliation Act of 1993, which raised the top marginal tax rate, lowered the probability of entry into self-employment for upper-middle-income households by as much as 20 percent. These estimated effects are robust to controlling for differences in family structure, spousal income, and measures of transitory income.

Dynastic management is the inter-generational transmission of control over assets typical of family-owned firms. It is pervasive around the world, but especially in developing countries. Caselli and Gennaioli argue that dynastic management is a potential source of inefficiency: if the heir to the family firm has no talent for managerial decisionmaking, then meritocracy fails. They present a simple model of the macroeconomic causes and consequences of this phenomenon. In their model, the incidence of dynastic management depends on the severity of asset-market imperfections, on the economy's saving rate, and on the degree of inheritability of talent across generations. The authors introduce novel channels through which financial-market failures and saving rates affect aggregate total factor productivity. Their simulations suggest that dynastic management may be a substantial contributor to observed cross-country differences in productivity.

Recent theoretical literature in development economics has shown that non-convex production technologies can result in low-growth poverty traps. McKenzie and Woodruff use detailed microenterprise surveys in Mexico to examine the empirical evidence for these non-convexities at low levels of capital stock. While the theory emphasizes non-divisible start-up costs that exceed the wealth of many potential entrepreneurs, the authors show start-up costs to be very low in some industries. Much higher returns are found at low levels of capital stock than at higher levels, and this remains true after controlling for firm characteristics and measures of entrepreneurial ability. Overall, the authors find little evidence of production non-convexities at low levels of capital. The absence of non-convexities is significant, because it suggests that access to startup capital does not determine the ultimate size of the enterprise.

Landier and Thesmar look at the effects of entrepreneurial optimism on financial contracting and corporate performance. Optimism may increase effort, but is bad for adaptation decisions, because the entrepreneur underweights negative information. The first-best contract with an optimist uses contingencies for two distinct purposes: 1) "bridging the gap in beliefs" by letting the entrepreneur take a bet on his project's success; and 2) imposing adaptation decisions in bad states. When the contract space is restricted to debt, there may be a separating equilibrium in which optimists self-select into short-term debt and realists into long-term debt. The authors apply their theory to a large dataset of entrepreneurs. First, they find that differences in beliefs may be (partly) explained by the usual determinants put forward in the psychology and management literature. Second, in line with the two main predictions of their model, they find that optimists tend to borrow more short term and that those who borrow more short term perform better. Finally, firms run by optimists tend to grow less, die sooner, and be less profitable. The authors view this as confirmation that their measure of optimism does not proxy for high-risk, high-return projects.

The NBER's Program on International Finance and Macroeconomics met in Cambridge on October 10. Charles M. Engel, NBER and University of Wisconsin, and Linda Tesar, NBER and University of Michigan, organized this program:

Paul R. Bergin, NBER and University of California, Davis, and Reuven Glick, Federal Reserve Bank of San Francisco, "Endogenous Nontradability and Macroeconomic Implications"
Discussant: Paolo Pesenti, Federal Reserve Bank of New York

Robert P. Flood, International Monetary Fund, and Andrew K. Rose, NBER and University of California, Berkeley, "Equity Integration in Times of Crisis"
Discussant: Maria Vassalou, Columbia University

Enrique G. Mendoza, NBER and University of Maryland, and Katherine A. Smith, U.S. Naval Academy, "Margin Calls, Trading Costs, and Asset Prices in Emerging Markets: The Financial Mechanics of the 'Sudden Stop' Phenomenon"
Discussant: Fabrizio Perri, New York University

Giancarlo Corsetti, University of Rome; Bernardo Guimaraes, Yale University; and Nouriel Roubini, NBER and New York University, "International Lending of Last Resort and Moral Hazard: A Model of IMF's Catalytic Finance"
Discussant: Olivier Jeanne, International Monetary Fund

Bergin and Glick propose a new way of thinking about nontraded goods in an open-economy macro model. They develop a simple method for analyzing a continuum of goods with heterogeneous trade costs, and explore how these costs determine the endogenous decision by a seller of whether to trade a good internationally. This way of thinking is appealing in that it provides a natural explanation for a prominent puzzle in international macroeconomics: that the relative price of nontraded goods tends to be much less volatile than the real exchange rate. Because nontradedness is an endogenous decision, the good on the margin forms a linkage between the prices of traded and nontraded goods, preventing the two price indexes from wandering too far apart. Bergin and Glick find that this mechanism has implications for other macroeconomic issues that rely on the presence of nontraded goods.

Using a new dataset of 369 manufacturing firms in developing countries, Chari and Henry present the first firm-level analysis of capital account liberalization and investment. In the three-year period following liberalizations, the growth rate of the typical firm's capital stock exceeds its pre-liberalization mean by an average of 4.1 to 5.4 percentage points per year. The authors use a simple model of Tobin's q to decompose the firms' post-liberalization changes in investment into: 1) the country-specific change in the risk-free rate; 2) firm-specific changes in equity premiums; and 3) firm-specific changes in expected future earnings. Panel data estimations show that an increase in expected future earnings of 1 percentage point predicts a 2.9 to 4.1 percentage point per-year increase in capital stock growth. The country-specific shock to firms' costs of capital predicts a 2.3 percentage point per-year increase in investment, but firm-specific changes in risk premiums are not significant. These results stand in contrast to the view that investment and fundamentals are unrelated during liberalization episodes.

Flood and Rose apply a simple new test for asset integration to two episodes of crisis in financial markets. Their technique is based tightly on a general intertemporal asset-pricing model, and relies on estimating and comparing expected risk-free rates across assets. Expected risk-free rates are allowed to vary freely over time, constrained only by the fact that they are equal across (risk-adjusted) assets. Assets are allowed to have general risk characteristics, and are constrained only by a factor model of covariances over short time periods. The technique is undemanding in terms of both data and estimation. The authors find that expected risk-free rates vary dramatically over time, unlike short interest rates. The S&P 500 market seems to be generally well integrated, but the level of integration falls temporarily during the Long-Term Capital Management crisis of October 1998. By way of contrast, the Korean stock market generally remains internally integrated through the Asian crisis of 1997. The level of equity integration between Japan and Korea is low and falls further during late 1997.

"Sudden Stops" experienced during emerging markets crises are characterized by large reversals of capital inflows and the current account, deep recessions, and collapses in asset prices. Mendoza and Smith propose an open-economy equilibrium asset-pricing model in which financial frictions cause Sudden Stops. Margin requirements impose a collateral constraint on foreign borrowing by domestic agents and trading costs distort asset trading by foreign securities firms. At equilibrium, margin constraints may or may not bind depending on portfolio decisions and equilibrium asset prices. If margin constraints do not bind, then productivity shocks cause a moderate fall in consumption and a widening current account deficit. If debt is high relative to asset holdings, then the same productivity shocks trigger margin calls, forcing domestic agents to fire-sell equity to foreign traders. This sets off a Fisherian asset-price deflation and subsequent rounds of margin calls. A current account reversal and a collapse in consumption occur when equity sales cannot prevent a sharp rise in net foreign assets.
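The margin-call spiral described here can be illustrated with a toy iteration. The linear price-impact rule, the collateral coefficient `kappa`, and all numbers below are illustrative assumptions, not the Mendoza-Smith model:

```python
def fire_sale_rounds(debt, equity, price, kappa=0.5, impact=0.002, max_rounds=50):
    """Toy Fisherian deflation: each round, sell just enough equity to restore
    the margin constraint debt <= kappa * price * equity at the current price;
    the fire sale depresses the price, which re-triggers the margin call.
    All parameters are illustrative assumptions."""
    rounds = 0
    while debt > kappa * price * equity + 1e-9 and rounds < max_rounds:
        # equity sale s that restores the constraint at the current price:
        #   debt - s*price = kappa * price * (equity - s)
        s = (debt - kappa * price * equity) / (price * (1.0 - kappa))
        s = min(s, equity)
        equity -= s
        debt -= s * price              # sale proceeds retire debt
        price *= (1.0 - impact * s)    # fire sale depresses the price
        rounds += 1
    return rounds, debt, equity, price
```

With a small price impact the spiral converges after a few rounds; raising `impact` or the initial debt deepens the deflation, mirroring the paper's point that the same shock is amplified when debt is high relative to asset holdings.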

Corsetti, Guimaraes, and Roubini present an analytical framework for studying how an international institution that provides liquidity can help to stabilize financial markets by coordinating agents' expectations, and how it influences the incentives faced by policymakers to undertake efficiency-enhancing reform. They show that the influence of such an institution increases with the size of its interventions and the precision of its information. More liquidity support and better information make agents more willing to roll over their debt and thus reduce the probability of a crisis. In contrast to the conventional view stressing debtor moral hazard, here liquidity provision and good policies can be strategic complements: the domestic government would not undertake costly policies/reforms unless contingent liquidity assistance was provided.

Lahiri and Singh revisit the issue of the optimal exchange rate regime in a flexible price environment. Their key innovation is analyzing this question in the context of environments where only a fraction of agents participate in asset market transactions (that is, asset markets are segmented). They show that flexible exchange rates are optimal under monetary shocks and fixed exchange rates are optimal under real shocks. These findings are the exact opposite of the standard Mundellian prescription derived via the sticky price paradigm wherein fixed exchange rates are optimal if monetary shocks dominate while flexible rates are optimal if shocks are mostly real. These results suggest that the optimal exchange rate regime should depend not only on the type of shock (monetary versus real) but also on the type of friction (goods market friction versus financial market friction).

The NBER's Program on Economic Fluctuations and Growth held its fall research meeting on October 17 in Chicago. Mark Gertler, NBER and New York University, and Patrick Kehoe, Federal Reserve Bank of Minneapolis, organized this program:

Mikhail Golosov, University of Minnesota, and Robert E. Lucas, Jr., NBER and University of Chicago, "Menu Costs and Phillips Curves"
Discussant: Ricardo Caballero, NBER and MIT

Harold L. Cole, University of California, Los Angeles; Ron Leung, University of Minnesota; and Lee E. Ohanian, NBER and University of California, Los Angeles, "Deflation, Real Wages, and the International Great Depression: A Productivity Puzzle"
Discussant: Lawrence Christiano, NBER and Northwestern University

Richard Rogerson, NBER and Arizona State University, "Structural Transformation and the Deterioration of European Labor Market Outcomes"
Discussant: Daron Acemoglu, NBER and MIT

Martin Lettau, Sydney C. Ludvigson, and Jessica A. Wachter, NBER and New York University, "The Declining Equity Premium: What Role Does Macroeconomic Risk Play?"
Discussant: John Cochrane, NBER and University of Chicago

Golosov and Lucas develop a model of a monetary economy in which individual firms are subject to idiosyncratic productivity shocks as well as general inflation. Sellers can change price only by incurring a real "menu cost." The authors calibrate this cost and the variance and autocorrelation of the idiosyncratic shock using a new U.S. dataset of individual prices from Klenow and Kryvtsov. The prediction of the calibrated model for the effects of high inflation on the frequency of price changes accords well with the Israeli evidence obtained by Lach and Tsiddon. The model also is used to conduct numerical experiments on the economy's response to credible and incredible disinflations and to other shocks. In none of the simulations conducted did monetary shocks induce large or persistent real responses.

The high real wage story is one of the leading hypotheses for how deflation caused the International Great Depression. The story is that world-wide deflation, combined with incomplete nominal wage adjustment, raised real wages in a number of countries, and that these higher real wages reduced employment as firms moved up their labor demand curves. Cole, Ohanian, and Leung study the high real wage hypothesis in an international cross section of 17 countries during 1930-33 using dynamic, general equilibrium monetary models. They find that the high real wage story by itself does not account for output changes in the international cross section. The models make large errors predicting output in the international cross section, largely because the correlation between real wages and output in the models is -1, while this correlation is positive in the data. This means that the worldwide Depression was not just firms moving up their labor demand curves in response to high real wages. Instead, accounting for the Depression requires a shock that shifts labor demand curves differentially across countries. The authors add productivity shocks to the model as a candidate labor demand shifter. They find that the productivity shocks in the model are very similar to productivity changes in the data. They also find that productivity shocks account for about two-thirds of output changes, while monetary shocks account for about one-third of output changes.

Giannoni and Woodford characterize optimal monetary policy for a range of alternative economic models, applying the general theory developed in their 2002 paper. The rules computed here have the advantage of being optimal regardless of the assumed character of exogenous additive disturbances, although other aspects of the model specification do affect the form of the optimal rule. In each case, optimal policy can be implemented through a flexible inflation targeting rule, under which the central bank is committed to adjusting its interest-rate instrument so as to ensure that projections of inflation and other variables satisfy a target criterion. For any given parameterization of the structural equations, the authors show which additional variables, beyond the inflation projection, should be taken into account, and to what degree. They also explain what relative weights should be placed on projections for different horizons in the target criterion, and the manner and degree to which the target criterion should be history-dependent. They then assess the likely quantitative significance of the various factors considered in the general discussion by estimating a small, structural model of the U.S. monetary transmission with explicit optimizing foundations. An optimal policy rule is computed for the estimated model, and it corresponds to a multi-stage inflation-forecast targeting procedure. Finally, they consider the degree to which actual U.S. policy over the past two decades has conformed to the optimal target criteria.

Bergoeing and Kehoe quantitatively test the "new trade theory" based on product differentiation, increasing returns, and imperfect competition. They use a model that allows both changes in the shares of income among industrialized countries, emphasized by Helpman and Krugman (1985), and nonhomothetic preferences, emphasized by Markusen (1986), to affect trade volumes and directions. In addition, they generalize the model to allow changes in relative prices to have large effects. The authors test the model by calibrating it to 1990 data and then "backcasting" to 1961 to see what changes in crucial variables between 1961 and 1990 are predicted by the theory. The results show that, although the model is capable of explaining much of the increased concentration of trade among industrialized countries, it is not capable of explaining the enormous increase in the ratio of trade to income.

Rogerson makes three key points in his paper. First, he argues that much of the literature on the European labor market problem has misdiagnosed it by focusing on relative unemployment rather than relative employment levels. Specifically, the European labor market problem seems to date back to the mid-1950s. Second, the key to understanding the source of the European labor market problem is an understanding of why Europe has not developed a market service sector more similar to that of the United States as it has closed the gap with the United States in terms of output per hour. Third, Rogerson shows that a story in which productivity differences and/or taxes are central can potentially go a long way toward accounting for the relative deterioration of European labor market outcomes. To be sure, the model analyzed here is very simple and it will be important to see the extent to which the quantitative conclusions are affected by adding various features.

Aggregate stock prices, relative to virtually any sensible indicator of fundamental value, soared to unprecedented levels in the 1990s. Even today, after the market declines since 2000, they remain well above historical norms. Lettau, Ludvigson, and Wachter consider one particular explanation for this: a fall in macroeconomic risk, or the volatility of the aggregate economy. The authors estimate a two-state regime-switching model for the volatility and the mean of consumption growth, and find evidence of a shift to substantially lower consumption volatility at the beginning of the 1990s. They then show that there is a strong and statistically robust correlation between low macroeconomic volatility and high asset prices: the estimated posterior probability of being in a low volatility state explains 30 to 60 percent of the post-war variation in the log price-dividend ratio, depending on the measure of consumption analyzed. Next, the authors study a rational asset pricing model with regime switches in both the mean and standard deviation of consumption growth, where the probabilities of a regime change are calibrated to match estimates from post-war data. Plausible parameterizations of the model account for almost all of the run-up in asset valuation ratios observed in the late 1990s.

The NBER's Program on Public Economics met in Cambridge on October 30-31. Program Director James M. Poterba of MIT organized this agenda:

Maria Cancian, University of Wisconsin, Madison, and Arik Levinson, NBER and Georgetown University, "Labor Supply and Participation Effects of the Earned Income Tax Credit: Evidence from the National Survey of America's Families and Wisconsin's Supplemental Benefit for Families with Three Children"
Discussant: John Karl Scholz, NBER and University of Wisconsin

Zoran Ivkovic, University of Illinois; James Poterba; and Scott Weisbenner, NBER and University of Illinois, "Tax-Motivated Trading By Individual Investors"
Discussant: Aleh Tsyvinski, NBER and University of California, Los Angeles

Christopher House, University of Michigan, and Matthew D. Shapiro, NBER and University of Michigan, "Phased-In Tax Cuts and Economic Activity"
Discussant: Alan J. Auerbach, NBER and University of California, Berkeley

Henrik J. Kleven and Claus T. Kreiner, University of Copenhagen; Herwig Immervoll, University of Cambridge; and Emmanuel Saez, NBER and University of California, Berkeley, "Welfare Reform in European Countries: A Micro-Simulated Analysis"
Discussant: Jorn-Steffen Pischke, NBER and London School of Economics

Cancian and Levinson examine the labor market consequences of the Earned Income Tax Credit (EITC), comparing labor market behavior of eligible parents in Wisconsin, which supplements the federal EITC for families with three children, to that of similar parents in states that do not supplement the federal EITC. Most previous studies have relied on changes in the EITC over time, or on EITC eligibility differences for families with and without children, or have extrapolated from measured labor supply responses to other tax and benefit programs. In contrast, this cross-state comparison examines a larger difference in EITC benefits among families with two or three children.

Bernheim and Rangel construct a new, simple model of savings in which individuals can make mistakes. They use the model to study the impact on savings and welfare of changes in the environment, institutions, and policy. The authors show that this alternative formulation leads to conclusions that are at odds with some of the pre-suppositions of the previous literature. In particular, even though individuals make mistakes involving overconsumption, the authors show that one cannot presuppose that there is under-saving. Paradoxically, in this model individuals aware of their self-control problem can end up over-saving. Further, one cannot presuppose that welfare-improving policies increase savings. In fact, for plausible ranges of parameters, welfare-increasing changes in the environment, institutions, and policies can decrease savings.

Ivkovic, Poterba, and Weisbenner use a large database, containing nearly 100,000 large individual stock purchases, to study the factors that affect the realization of capital gains and losses. These factors include the holding period, the calendar month, and the accrued gain or loss since the time of purchase. A particularly appealing feature of the dataset is the ability to compare investors' realizations in their taxable and tax-deferred accounts. The authors reach four conclusions. First, for large stock purchases, there is a strong lock-in effect for capital gains in taxable accounts after the stock has been held for a few months. Second, there is evidence of trading behavior that is consistent with year-end tax-loss selling. In taxable accounts in December, and especially in the last week of December, investors are more likely to sell losers than winners. The pattern for other months is the opposite. The December selling effect is particularly strong for stocks that qualify for short-term loss treatment. Further, tax-loss selling is greater for investors who have realized gains during the year and when the overall market has risen during the about-to-end calendar year. The demand for loss offsets is likely to be high in these settings. Third, the authors find that wash sale rules affect trading decisions in December, but they do not find similar evidence for other months. The probability that a stock will be repurchased within 30 days, if sold at a loss in December, is substantially lower than the probability of such a repurchase following sales in other months. This is consistent with wash-sale rules affecting tax-motivated trading. There is no evidence that wash sale rules affect trading behavior in months other than December, or that they distort trading decisions in taxable versus tax-deferred accounts. 
Finally, using a simulation to test whether following simple tax-avoidance strategies would have significantly boosted investors' after-tax returns, the authors find that simple rules that accelerate the realization of tax losses could substantially improve after-tax returns for many investors.
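The wash-sale pattern the authors test for can be sketched as a simple screen over a trade log. The tuple schema below is an illustrative assumption, not the authors' dataset; the 30-day window is the U.S. wash-sale rule:

```python
from datetime import date

def flag_wash_sales(trades, window_days=30):
    """Flag sales realized at a loss that are followed by a repurchase of the
    same ticker within `window_days`. `trades` is a date-sorted list of
    (trade_date, ticker, side, price, cost_basis) tuples -- an illustrative
    schema for this sketch."""
    flagged = []
    for i, (d, ticker, side, price, basis) in enumerate(trades):
        if side == "sell" and price < basis:               # loss sale
            for d2, t2, s2, _, _ in trades[i + 1:]:
                if t2 == ticker and s2 == "buy" and (d2 - d).days <= window_days:
                    flagged.append((d, ticker))            # repurchase inside window
                    break
    return flagged
```

Comparing the repurchase rate for December loss sales against other months, as the authors do, is then a matter of grouping these flags by sale month.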

Phased-in tax changes are a common feature of tax legislation. House and Shapiro use a dynamic general equilibrium model to quantify the effects of delaying tax cuts. According to their analysis, the phased-in tax cuts of the 2001 tax bill substantially reduced employment, output, and investment during the phase-in period relative to alternative policies with immediate, but more modest tax cuts. The rules and accounting procedures used by Congress for formulating tax policy have a significant impact in shaping the details of tax policy and they led to the phase-ins, sunsets, and temporary tax changes in both the 2001 and 2003 tax bills.

Immervoll, Kleven, Kreiner, and Saez estimate the welfare and distributional impact of two types of welfare reforms in each of the 15 countries in the European Union. The reforms are revenue neutral and financed by an overall and uniform increase in marginal tax rates on earnings. The first reform distributes the extra taxes evenly to everybody (traditional welfare), while the second reform distributes tax proceeds (uniformly) only to workers (earnings credit). The authors build a simple model of labor supply encompassing responses to taxes and transfers along the intensive and extensive margin. They then use the model to describe current welfare and tax systems in all 15 European countries and use calibrated labor supply elasticities along the intensive and extensive margins to analyze the effects of the two welfare reforms. They precisely quantify the equality-efficiency tradeoff for a range of elasticity parameters. In most countries, because of the large existing welfare programs with high phasing-out rates, the uniform redistribution policy is undesirable unless the redistributive tastes of the government are extreme. However, redistribution to workers is desirable in a very wide set of cases. The authors discuss the practical policy implications for European welfare policy.

Nishiyama and Smetters examine fundamental tax reform in a heterogeneous overlapping-generations (OLG) model in which agents face idiosyncratic earnings shocks and uncertain life spans. Following Auerbach and Kotlikoff (1987), the authors use a Lump-Sum Redistribution Authority to rigorously examine efficiency gains over the transition path. They replace progressive income tax with a flat consumption tax (for example, a value-added tax, or a national retail sales tax). If shocks are insurable (that is, no risk), this reform improves (interim) efficiency, a result consistent with the previous literature. But if, more realistically, shocks are uninsurable, then this reform reduces efficiency, even though national wealth and output increase over the entire transition path. This efficiency loss, in large part, stems from reduced intragenerational risk sharing that was provided by the progressive tax system.

The NBER's Working Group on Macroeconomics and Individual Decisionmaking met in Cambridge on November 1. George A. Akerlof, University of California, Berkeley, and Robert J. Shiller, NBER and Yale University, organized this program:

James J. Choi, Harvard University; David Laibson, NBER and Harvard University; Brigitte C. Madrian, NBER and University of Chicago; and Andrew Metrick, NBER and University of Pennsylvania, "Active Decisions: A Natural Experiment in Savings"
Discussant: Annamaria Lusardi, Dartmouth College

Decisionmakers overwhelmingly tend to accept default options. In this paper, Choi and his co-authors identify an overlooked but practical alternative to defaults. They analyze the experience of a company that required its employees to affirmatively elect to enroll or not enroll in the company's 401(k) plan. Employees were told that they had to actively make a choice, one way or the other, with no default option. This "active decision" regime provides a neutral middle ground that avoids the paternalism of a one-size-fits-all default election. The active decision approach to 401(k) enrollment yields participation rates that are up to 25 percentage points higher than those under a regime with the standard default of non-enrollment. Requiring employees to make an active 401(k) election also raises average saving rates and asset accumulation with no increase in the rate of attrition from the 401(k) plan.

Different beliefs about how fair social competition is and what determines income inequality influence the redistributive policy democratically chosen in a society. But the composition of income, in turn, depends on equilibrium tax policies. If a society believes that individual effort determines income, and that all have a right to enjoy the fruits of their effort, then it will choose low redistribution and low taxes. In equilibrium, effort will be high, the role of luck limited, market outcomes will be quite fair, and social beliefs will be self-fulfilled. If a society instead believes that luck, birth, connections, and/or corruption determine wealth, then it will tax a lot, thus distorting allocations and making these beliefs self-sustained as well. Alesina and Angeletos show how this interaction between social beliefs and welfare policies may lead to multiple equilibria or multiple steady states. They argue that this model can contribute to explaining U.S. vis-a-vis continental-European perceptions about income inequality and the choices of redistributive policies.

Analyzing 50 years of data on inflation expectations from several sources, Mankiw, Reis, and Wolfers document substantial disagreement among consumers and professional economists about expected future inflation. Moreover, this disagreement varies substantially through time, moving with inflation, the absolute value of the change in inflation, and relative price variability. The authors argue that a satisfactory model of economic dynamics must speak to these important business cycle moments. Noting that most macroeconomic models do not generate disagreement endogenously, the authors show that a simple "sticky-information" model broadly matches many of these facts. Moreover, the sticky-information model is consistent with other observed departures of inflation expectations from full rationality, including autocorrelated forecast errors and insufficient sensitivity to recent macroeconomic news.

Consumers don't always know the true value of the products they buy. Instead, consumers have only an imperfect signal of value and buy the product with the best signal. Gabaix and Laibson embed these consumers in a marketplace of perfectly informed, maximizing firms. The authors first analyze a market in which some consumers do not anticipate all of the future consequences of a current purchase. In such circumstances, firms will choose monopoly prices for shrouded add-ons - like rental car gas tank refills - making the shrouded add-on a profit center and the base good a potential loss leader. Gabaix and Laibson show that such monopoly pricing will persist even in markets with a high degree of competition and free advertising, since firms will choose not to advertise the profitable shrouded attributes. Making the add-on salient leads consumers to find less expensive substitutes for it. The authors then analyze markets in which consumer signals are noisy estimates of the true utility value of products. In equilibrium, noise effectively increases firms' market power and raises markups. The markup is materially insensitive to the degree of competition. When noise is an endogenous variable, firms choose excess noise by making their products inefficiently complex. Moreover, an increase in competition causes firms to choose even more noise or excess complexity. Finally, Gabaix and Laibson propose an econometric framework that measures the amount of bounded rationality in the marketplace.

Chirinko and Schaller use cross-sectional variation between glamour (high stock market price) and value (low stock market price) portfolios to address the possible relationship between misvaluation and fixed investment. In a large sample of U.S. firms over the period 1980-2001, glamour firms invest substantially more than value firms. The difference is roughly the same after controlling for fundamentals. The median glamour firm raises more in new share issues than its total capital expenditures for the year. If glamour firms are responding to misvaluation rather than fundamentals, then they may be investing too much. Chirinko and Schaller describe and implement four tests designed to distinguish whether the high investment of glamour firms is the result of fundamental shocks or misvaluation shocks: investment reversals, stock market returns of high-investment firms, the path over time of the marginal product of capital, and overreaction tests. The evidence is generally more consistent with misvaluation shocks than fundamental shocks as an explanation for the high investment of glamour firms.

Di Tella and MacCulloch find evidence that governments in poor countries have a more left-wing rhetoric than those in OECD countries. One possible explanation for this is that corruption, which is more widespread in poor countries, reduces the electoral appeal of capitalism more than that of socialism. The empirical pattern of beliefs within countries is consistent with this explanation: people who perceive corruption to be high in the country are also more likely to lean left ideologically and to favor a more intrusive government in economic matters. Finally, the authors provide a simple model in which it is assumed that corruption under capitalism is informative about private sector productivity and honesty levels, whereas corruption under socialism contains less information (it simply reveals honesty levels). There is a negative externality, in the sense that the existence of corrupt entrepreneurs hurts good entrepreneurs by reducing the general appeal of capitalism.

The NBER's Program on Labor Studies met in Cambridge on November 7. Program Director Richard Freeman and Research Associate Lawrence Katz, both of Harvard University, organized the meeting. The following papers were discussed:

Sewin Chan, New York University, and Ann Huff Stevens, NBER and University of California, Davis, "What You Don't Know Can't Help You: Knowledge and Retirement Decisionmaking"

Gordon Dahl, NBER and University of Rochester, and Enrico Moretti, NBER and University of California, Los Angeles, "The Demand for Sons: Evidence from Divorce, Fertility, and Unwed Mothers in the U.S. and Around the World"

Jennifer Hunt, NBER and University of Montreal, "Trust and Bribery: The Role of Quid Pro Quo and the Link with Crime"

Justin McCrary, University of Michigan, "The Effect of Court-Ordered Hiring Quotas on the Composition and Quality of Police"

Chan and Stevens focus on this puzzle: how can individuals respond to detailed pension information that, apparently, they do not possess? Recent evidence shows that most individuals do not know the details of their employer-provided pension plans. At the same time, virtually all recent studies of retirement timing are based on administrative data of which individuals themselves may not be aware. Chan and Stevens use individuals' self-reports to construct measures of knowledge about pension plans. They show that individual responsiveness to pension incentives based on employer-reported data varies dramatically with these knowledge measures. Well-informed individuals have an elasticity of retirement with respect to pension incentives that is three times as large as the average response (which ignores information issues). In contrast, there is no evidence of a relationship between their pensions and retirement timing among the uninformed. This should encourage researchers to make more use of self-reported data in order to better understand decisionmaking.

Previous research shows a systematic fall in consumption at retirement, a finding that is inconsistent with the life-cycle/permanent income hypothesis. In their paper, Haider and Stephens use workers' beliefs about their expected retirement dates as an instrument for retirement. After demonstrating that subjective retirement expectations are strong predictors of subsequent retirement, they still find a systematic fall in consumption for workers who retire when expected. However, the results suggest that this fall in consumption is half as large as that found when the authors rely instead on the instrumental-variables strategy used in prior studies.

In the United States, parental preferences for sons versus daughters may be manifest in a variety of ways, including effects on marital status and fertility behavior. Dahl and Moretti document that having girls has significant effects on divorce, marriage, shotgun marriage (when the sex of the child is known before birth), and fertility-stopping rules. Using a simple model, they show that, taken individually, each piece of evidence does not necessarily imply the existence of parental gender bias. But taken together, the evidence indicates that parents in the United States favor boys over girls. The authors begin by documenting that mothers with girls are significantly more likely to be divorced than mothers with boys. Further, controlling for family size, women with only girls are substantially more likely to have never been married than women with only boys. Mothers who only have daughters and divorce and then remarry are more likely to get a second divorce. Perhaps the most striking evidence comes from the analysis of shotgun marriages based on Vital Statistics data. Mothers who find out that their child will be a boy are more likely to marry their partner before delivery. Specifically, among those who have an ultrasound test during their pregnancy, mothers carrying a boy are more likely to be married at delivery. In the final part of the paper, the authors extend the analysis to five developing countries. For divorce, the negative effect of an all-daughters family is substantial, two to eight times as large as in the United States. Comparing the effects on fertility across countries, the effect is largest for China and Vietnam, with more moderate effects for Mexico, Colombia, and Kenya.

Hunt studies data on bribes actually paid by individuals to public officials, viewing the results through a theoretical lens that considers the implications of trust networks. A bond of trust may permit a quid pro quo to substitute for a bribe, which in some situations is more efficient and honest. Hunt shows that in the presence of quid pro quos, the incidence of bribery may be non-monotonic in client income. She finds evidence of this in the International Crime Victim Surveys, as well as evidence that bribery is less frequent in small towns, where the appropriate networks are more easily established. Older people, who have had time to develop a network, bribe less. The low incidence of bribery among the poor presumably implies reduced access to public services, yet city size, age, gender, and car ownership are more important determinants of bribery than income. Hunt also shows that victims of crimes bribe all types of public officials more than non-victims, and she argues that both their victimization and the bribery stem from a distrustful environment. Hunt finds indirect evidence that criminals are particularly likely to bribe the police and customs. Together, these results suggest that the best start to changing a distrustful environment is combating corruption in the police and customs.

McCrary examines the role of the federal courts in integrating police departments in the United States. Using a new dataset on police force demographic composition, city demographics, and employment discrimination litigation in 314 large U.S. municipalities, he demonstrates that the filing of a class action lawsuit alleging hiring discrimination against African-Americans is associated with a trend break in the share of police department employment of Blacks. He estimates that the 25-year gain in black-employment share in litigated departments is in the range of 8 to 12 percentage points. Given the low attrition rate of police officers, this is consistent with a hiring fraction for African-Americans that is approximately 12 to 19 percentage points above the pre-litigation police department fraction that is Black. Finally, McCrary finds little evidence that litigation led to increased crime, despite large and persistent differences by race in police department entrance examination scores.

The NBER's Program on Monetary Economics met in Cambridge on November 7. Program Co-Directors Christina D. Romer and David H. Romer, both of the University of California, Berkeley, organized this agenda:

James J. Choi, Harvard University; David Laibson, NBER and Harvard University; and Brigitte C. Madrian and Andrew Metrick, NBER and University of Pennsylvania, "Consumption-Wealth Comovement of the Wrong Sign"
Discussant: Matthew D. Shapiro, NBER and University of Michigan

Ignazio Angeloni and Benoît Mojon, European Central Bank; Anil K Kashyap, NBER and University of Chicago; and Daniele Terlizzese, Banca d'Italia, "The Output Composition Puzzle: A Difference in the Monetary Transmission Mechanism in the Euro Area and U.S."
Discussant: Marc Giannoni, Columbia University

Economic theory predicts that an unexpected windfall in wealth should increase consumption as soon as it is received. Choi, Laibson, Madrian, and Metrick test this prediction by using administrative records on over 40,000 401(k) accounts. Contrary to theory, the authors estimate a negative short-run marginal propensity to consume out of orthogonal 401(k) capital gains shocks. Their findings suggest that many investors are influenced by a reinforcement learning heuristic that causes high returns to encourage saving and low returns to discourage saving. These results help explain why consumption covariance with equity returns is so low, giving rise to the equity premium puzzle.

Bordo and Haubrich show that the stylized fact that the yield curve predicts future growth holds for the past 125 years, and is robust across several specifications. The monetary regime seems important, and in accord with the authors' theory, regimes with low credibility (high persistence) tend to have better predictability. This finding reinforces the notion that the monetary regime is critical in interpreting the yield curve, and that the term structure of interest rates is heavily conditioned on the monetary regime. In particular, it may be quite misleading to draw general conclusions from data generated in one inflation regime. And, credibility may be a mixed blessing. While a more credible regime usually will mean that monetary policy is less a source of instability for the economy, credibility itself may make policymaking more difficult, because information sources such as the yield curve become less informative. Still, notions such as credibility are often hard to pin down and measure, and these results suggest an additional metric: the predictive content of the yield curve, which can provide an additional piece of evidence about the credibility of the regime in question.

Burnside, Eichenbaum, and Rebelo address two questions: how do governments actually pay for the fiscal costs associated with currency crises; and what are the implications of different financing methods for post-crisis rates of inflation and depreciation? They study these questions using a general equilibrium model in which a currency crisis is triggered by prospective government deficits. They then use the model together with fiscal data to interpret government financing in the wake of three recent currency crises: Korea (1997), Mexico (1994), and Turkey (2001).

A number of recent papers have hypothesized that the Federal Reserve possesses information about the course of inflation and output that is unknown to the private sector, and that policy actions by the Federal Reserve convey some of this inside information. Faust, Swanson, and Wright conduct two tests of this hypothesis: 1) could monetary policy surprises be used to improve the private sector's ex ante forecasts of subsequent macroeconomic statistical releases, and 2) does the private sector revise its forecasts of macroeconomic statistical releases in response to these monetary policy surprises? The authors find little evidence that Federal Reserve policy surprises convey inside information about the state of the economy: they could not systematically be used to improve forecasts of statistical releases and the forecasts are not systematically revised in response to policy surprises. One possible exception to this pattern is Industrial Production, a statistic that the Federal Reserve produces.

Despite the amount of empirical research on monetary policy rules, there is surprisingly little consensus on the nature, or even the existence, of changes in the conduct of monetary policy. Three issues appear central to this disagreement: 1) the specific type of changes in the policy coefficients; 2) the treatment of heteroskedasticity; and 3) the real-time nature of the estimation. Boivin treats these issues in the context of a forward-looking Taylor rule with drifting coefficients. His estimation is based on real-time data and accounts for the presence of heteroskedasticity in the policy shock. His findings suggest important but gradual changes in the rule coefficients, not captured adequately by the usual split-sample estimation. The Fed's response to inflation appears to have evolved from a weak response in the mid-1970s, not satisfying Taylor's principle at times, to a stronger response thereafter. Moreover, the Fed's response to real activity fell substantially in the 1970s.

Angeloni, Kashyap, Mojon, and Terlizzese revisit recent evidence on how monetary policy affects output and prices in the United States and in the euro area. The responses to a shift in monetary policy are similar in most respects, but differ noticeably as to the composition of output changes. In the euro area, investment is the predominant driver of output changes, while in the United States consumption shifts are significantly more important. The authors dub this difference "the output composition puzzle" and explore its implications and several potential explanations for it. While the evidence seems to point at differences in consumption responses, rather than investment, as the proximate cause for this fact, the source of the consumption difference remains a puzzle.

The NBER's Working Group on Higher Education met in Cambridge on November 13. Director Charles T. Clotfelter of Duke University organized the meeting. The following papers were discussed:

Christopher M. Cornwell, Kyung Hee Lee, and David B. Mustard, University of Georgia, "The Effects of Merit-Based Financial Aid on Course Enrollment, Withdrawal and Completion in College"
Discussant: Sarah Turner, NBER and University of Virginia

Albert J. Sumell and Paula E. Stephan, Georgia State University, and James D. Adams, NBER and University of Florida, "Capturing Knowledge: The Location Decision of New PhDs Working in Industry"
Discussant: John de Figueiredo, NBER and MIT

Thomas J. Kane, NBER and University of California, Los Angeles, "Evaluating the Impact of the DC Tuition Assistance Grant Program"
Discussant: Eric Bettinger, NBER and Case Western Reserve University

Todd R. Stinebrickner, University of Western Ontario, and Ralph Stinebrickner, Berea College, "Credit Constraints and College Attrition"
Discussant: Christopher Avery, NBER and Harvard University

David Marmaros, Google.com, and Bruce Sacerdote, NBER and Dartmouth College, "How Friendships Form"
Discussant: David Zimmerman, NBER and Williams College

Using data from the longitudinal records of all undergraduates who enrolled at the University of Georgia (UGA) between 1989 and 1997, Cornwell, Lee, and Mustard estimate the effects of HOPE scholarships on course enrollment, withdrawal, and completion, and on the diversion of course taking from the academic year to the summer. They find first that HOPE decreases full-load enrollments and increases course withdrawals among resident freshmen. This results in a 12 percent lower probability of full-load completion and an annual average reduction in completed credits of about 0.8, or 2 percent. The latter implies that between 1993 and 1997, Georgia resident freshmen completed almost 12,600 fewer credit hours than non-residents. Second, the scholarship's influence on course-taking behavior is concentrated on students whose GPAs place them on or below the scholarship-retention margin. Third, the effect of the HOPE program increased with the lifting of the income cap. Fourth, these freshmen credit-hour reductions represent a general slowdown in academic progress and not just intertemporal substitution. Finally, resident students diverted an average of 0.5 credits from the regular academic year to the summer in each of their first two summers after matriculation, which amounts to a 22 percent rise in summer course taking.

Sumell, Stephan, and Adams examine the factors that influence the probability that a newly trained PhD will remain "local" or stay in the state. Specifically, they measure how various individual, institutional, and geographic attributes affect the probability that new PhDs who choose to work in industry will stay in the metropolitan area or state of training. Given that the ability to capture knowledge spillovers arguably decreases as distance increases, the authors also examine the distance that new PhDs move to take an industrial position. They focus on PhDs who received their degree in one of twelve fields of science and engineering during the period 1997-99. Data for the study come from the Survey of Earned Doctorates, administered by Science Resources Statistics, National Science Foundation. The authors find that state and local areas do capture knowledge embodied in newly minted PhDs headed to industry, but not at an overwhelming rate. Certain states and metropolitan areas have an especially high attrition rate. Moreover, the related universities are not new but have a long history of producing scientists and engineers. This suggests that training local talent is far from sufficient in fostering an economic environment that encourages retention. The authors also find that retention is related to a number of personal characteristics such as marital status, age, level of debt, previous work experience, and visa status. Retention is also related to the local technological infrastructure.

With the creation of the D.C. Tuition Assistance Grant Program (DC TAG) in the fall of 2000, the menu of college prices offered to residents of the District of Columbia changed dramatically. D.C. residents were offered the opportunity to attend public institutions elsewhere in the country, and to receive a grant to cover the difference between in-state and out-of-state tuition, up to $10,000. (The program also offered smaller grants to those attending private colleges in the D.C. area and private, historically black institutions elsewhere.) According to Kane, the program led to large increases in enrollment of D.C. residents at public institutions around the country, particularly at non-selective, four-year, predominantly black institutions in the mid-Atlantic states. The number of D.C. residents enrolling in college also increased by 16 to 20 percent. Moreover, although the program was not means-tested, there were small differences in recipiency rates between middle and high income neighborhoods.

Working from the financial aid records of individual students at 28 highly selective private colleges and universities (the COFHE schools), Hill, Winston, and Boyd address two questions: what do the highly able low income students at these schools actually pay, net of financial aid grants, for a year's education; and how do these schools differentiate their prices in recognition of the different family incomes of their students - the concrete evidence of their dedication to equality of opportunity? While there is considerable variety in net prices, it turns out that many of these schools charge their low income students very little (one, less than $800 a year for the average student in the bottom income quintile), making it quite reasonable for a highly able student to aspire to go to a very selective private college or university regardless of family income. There is considerable variety among schools, though. Virtually all of them charge students in the bottom income quintile a lower net price, on average, than they do their wealthier students, but at some, the net price as a share of family income rises as incomes increase while at others it falls. Most, however, follow pricing policies that embody rough proportionality between net price and family income over the whole range of the student incomes, including those paying the full sticker price. The net prices that remain to be paid by aided students are covered, of course, by direct payment and "self-help" - by loans and student jobs. In the data, the error in the popular representation of tuition and income is clear: the average sticker price is 66 percent of median U.S. family income, but the average student from a family at that income level pays just 23 percent of family income.

Stinebrickner and Stinebrickner examine the effect of credit constraints on college attrition using unique data from the Berea Panel Study. They find that, while short-run liquidity constraints are likely to play an important role in the outcomes of some students, the percentage of attrition that is caused by these constraints is small.

Marmaros and Sacerdote examine how people form social networks among their peers. They use a unique dataset on the volume of email between any two people - in this case, some students and recent graduates of Dartmouth College. Their main finding is that geographic proximity and race are far more important determinants of friendship than are common interests or common majors. The effects of race are quite large; for example, two randomly chosen black students are seven times more likely to interact than are a black student and a white student. Nonetheless, there still remain substantial amounts of interracial interaction, in part because of the powerful effects of distance coupled with randomized freshman housing. Women are more likely to interact with other women. But conditional on there being any communication between a given woman and man, the volume of communication between them is large. The results show that even short-run residential mixing among people of different backgrounds promotes long-run social interaction among those people.

The NBER's Program on Asset Pricing met in Cambridge on November 14. Program Director John H. Cochrane, University of Chicago, and Tobias J. Moskowitz, NBER and Northwestern University, organized this program:

Lubos Pástor and Pietro Veronesi, NBER and University of Chicago, "Stock Prices and IPO Waves"
Discussant: Deborah J. Lucas, NBER and Northwestern University

Pástor and Veronesi explore why IPO volume changes over time and how it relates to stock prices. They develop a model of optimal IPO timing in which IPO volume fluctuates because of time variation in market conditions. IPO waves are caused by declines in expected market return, increases in expected aggregate profitability, or increases in prior uncertainty about the average future profitability of IPOs. The model makes numerous predictions for IPO volume, for example that IPO waves are preceded by high market returns and followed by low market returns. The data support these and other predictions.

Pavlova and Rigobon develop a simple two-country, two-good model in which the real exchange rate and prices of stocks and bonds are determined jointly. The model predicts that stock market prices are correlated internationally, even though their dividend processes are independent. This provides a theoretical argument in favor of financial contagion. The foreign exchange market serves as a propagation channel from one stock market to the other. The model identifies interconnections among stock, bond, and foreign exchange markets and characterizes their joint dynamics as a three-factor model. Contemporaneous responses of each market to changes in the factors have unambiguous signs. Most of the signs predicted by the model indeed obtain in the data, and the point estimates are in line with the implications of the theory. Moreover, the factors extracted from daily data on stock indexes and exchange rates explain a sizable fraction of the variation in a number of macroeconomic variables, and the estimated signs on the factors are consistent with the model's implications. The authors also derive agents' portfolio holdings and identify economic environments under which they exhibit a home bias. Finally, they show that an international CAPM obtaining in their model has two additional factors.

Routledge and Zin provide an axiomatic model of preferences over atemporal risks that generalizes Gul's disappointment-aversion model by allowing risk aversion to be "first order" at locations in the state space that do not correspond to certainty. Since the lotteries being valued by an agent in an asset-pricing context are not typically local to certainty, the authors' generalization, when embedded in a dynamic recursive utility model, has important quantitative implications for financial markets. They show that the state-price process, or asset-pricing kernel, in a Lucas-tree economy in which the representative agent has generalized disappointment aversion preferences is consistent with the pricing kernel that resolves the equity-premium puzzle. They also demonstrate that a small amount of conditional heteroskedasticity in the endowment-growth process is necessary to generate these favorable results. In addition, they show that risk aversion can be both state-dependent and counter-cyclical, which empirical research has demonstrated is necessary for explaining observed asset-pricing behavior.

Ang, Hodrick, Xing, and Zhang examine how volatility risk, both at the aggregate market and the individual stock level, is priced in the cross-section of expected stock returns. Stocks that have high sensitivities to innovations in aggregate volatility have low average returns, and a cross-sectional factor capturing systematic volatility risk earns -0.87 percent per month. The authors find that stocks with high idiosyncratic volatility have abysmally low returns. The quintile portfolio composed of stocks with the highest idiosyncratic volatilities does not even earn an average positive total return. The low returns earned by stocks with high exposure to systematic volatility risk, and the low returns of stocks with high idiosyncratic volatility, are not priced by the standard size, value, or momentum factors, and are not subsumed by liquidity or volume effects.

Beber and Brandt examine the effect of regularly scheduled macroeconomic announcements on the beliefs and preferences of participants in the U.S. Treasury market by comparing the option-implied state-price density (SPD) of bond prices shortly before and after the announcements. They find that the announcements reduce the uncertainty implicit in the second moment of the SPD, regardless of the content of the news. The changes in the higher-order moments, in contrast, depend on whether the news is good or bad for economic prospects. Using a standard model for interest rates to disentangle changes in beliefs and changes in preferences, the authors demonstrate that their results are consistent with time-varying risk aversion in the spirit of habit formation.

Hong, Scheinkman, and Xiong model the relationship between float (the tradeable shares of an asset) and stock price bubbles. Investors trade a stock that initially has a limited float because of insider lock-up restrictions but whose tradeable shares increase over time as these restrictions expire. A speculative bubble arises because investors, with heterogeneous beliefs caused by overconfidence and facing short-sales constraints, anticipate the option to resell the stock to buyers with even higher valuations. With limited risk absorption capacity, this resale option depends on float, as investors anticipate the change in asset supply over time and speculate over the degree of insider selling. The model yields implications consistent with the behavior of internet stock prices during the late 1990s, such as the bubble, share turnover, and volatility decreasing with float, and stock prices tending to drop on the lock-up expiration date, even though that date is known to all in advance.

The NBER's Program on Corporate Finance met in Cambridge on November 14. Organizers Florencio Lopez de Silanes, NBER and Yale University, and Rafael La Porta, NBER and Harvard University, chose these papers to discuss:

Marianne Bertrand, NBER and University of Chicago; Antoinette Schoar, NBER and MIT; and David Thesmar, ENSAE, "Banking Deregulation and Industry Structure: Evidence from the French Banking Reforms of 1985"
Discussant: Paola Sapienza, Northwestern University

Bolton, Scheinkman, and Xiong present a multiperiod agency model of stock-based executive compensation in a speculative stock market, where investors are overconfident and stock prices may deviate from underlying fundamentals and include a speculative option component. This component arises from the option to sell the stock in the future to potentially over-optimistic investors. The authors show that optimal compensation contracts may emphasize short-term stock performance, at the expense of long-run fundamental value, as an incentive to induce managers to pursue actions that increase the speculative component in the stock price. This model provides a different perspective for the recent corporate crisis than the increasingly popular "rent extraction view" of executive compensation.

Financing decisions seem to violate the pecking order's central predictions about how often and under what circumstances firms issue equity. Specifically, most firms issue or retire equity each year, the issues on average are large, and they are not typically done by firms under duress. Fama and French estimate that during 1973-2001 the year-by-year equity decisions of more than half of their sample firms contradict the pecking order. And, contradictions are more common among larger firms.

Pérez-González investigates the importance of corporate "control" on the performance of foreign affiliates of multinational corporations (MNCs). Using detailed micro-level information from Mexico, he shows that manufacturing plants in which MNCs acquire majority ownership ("control") become more productive. To explore whether this link is causal, he uses the elimination of foreign majority ownership restrictions to study the performance of plants whose MNC ownership increased from minority to majority as a result. He finds that acquiring control is associated with large improvements in total factor productivity, and that enhanced performance is concentrated in industries that rely on technological innovations from their parent companies. He interprets the evidence as supportive of the property rights theory of the firm.

Bebchuk analyzes how asymmetric information affects which corporate governance arrangements firms choose - through the design of securities and corporate charters - when they go public. He shows that such asymmetry might lead firms to adopt corporate governance arrangements that are commonly known to be inefficient by both public investors and those taking firms public. When higher asset value is correlated with higher private benefits of control, asymmetric information about the asset value of firms going public will lead some or all such firms to offer a sub-optimal level of investor protection. The results may help to explain why charter provisions cannot be relied on to provide optimal investor protection in countries with poor investor protection; why companies going public in the United States commonly include substantial antitakeover provisions in their charters; and why companies rarely restrict self-dealing or the taking of corporate opportunities more than is done by the corporate laws of their country.

Bertrand, Schoar, and Thesmar investigate the effects of banking deregulation on firms' behavior, exit and entry decisions, and overall product market structure in the non-financial sectors. The authors use the deregulation of the French banking industry in 1985 as an economy-wide shock to the banking sector that affected all industries, but in particular those that relied most heavily on external finance and bank loans. The deregulation eliminated government interference in lending decisions, allowed French banks to compete more freely against each other in the credit market, and did away with implicit and explicit government subsidies for most bank loans. After deregulation, banks seem to tie their lending decisions more closely to firm performance. Low-quality firms that suffer negative shocks no longer receive large increases in bank credit. Instead, these firms display a much higher propensity to undertake restructuring measures post-reform, for example, to reduce wages and outsource production. The authors also observe a strong increase in performance mean reversion after 1985, especially for firms that were hit by negative shocks. All of these results are particularly strong for firms in more bank-dependent industries. On the product-market side, there is a strong increase in asset reallocation in more bank-dependent industries, mostly coming from higher entry and exit rates in these sectors. There is also an increase in allocative efficiency across firms in these sectors, as well as a decline in concentration ratios.

Acemoglu and Johnson evaluate the importance of "property rights institutions," which protect citizens against expropriation by the government and powerful elites, and "contracting institutions," which enable private contracts between citizens. They exploit exogenous variation in both types of institutions driven by colonial history, and document strong first-stage relationships between property rights institutions and the determinants of European colonization strategy (settler mortality and population density before colonization), and between contracting institutions and the identity of the colonizing power. Using this approach, the authors find that property rights institutions have a first-order effect on long-run economic growth, investment, and financial development. Contracting institutions appear to matter only for the form of financial intermediation. A possible explanation for this pattern is that individuals often find ways of altering the terms of their formal and informal contracts to avoid the adverse effects of contracting institutions, but are unable to do so against the risk of expropriation.

Richard Murnane, NBER and Harvard University, and Claudia Uribe and John Willett, Harvard University, "Why do Students Learn More in Some Classrooms Than in Others? Evidence from Bogota"

Weili Ding, University of Michigan, and Steven F. Lehrer, University of Pennsylvania, "Estimating Dynamic Treatment Effects from Project STAR"

Patrick Bayer, Yale University; Fernando Ferreira, University of California, Berkeley; and Robert McMillan, University of Toronto, "A Unified Framework for Measuring Preferences for Schools and Neighborhoods"

David N. Figlio, NBER and University of Florida, "Names, Expectations, and Black Children's Achievement"

Kremer, Miguel, and Thornton examine the impact of a merit scholarship program for adolescent girls in Kenya in the context of a randomized evaluation. Girls in program schools were informed that if they scored well on a later academic exam their school fees would be paid and they would receive a large cash grant for the next two years. Girls eligible for the scholarship showed large gains in academic exam scores (average gain 0.2-0.3 standard deviations), and these gains persisted into the year following the competition. There is also evidence of positive externalities: girls with low baseline test scores (with less chance at the award) and boys (who were ineligible) showed sizeable test gains. Both student and teacher absenteeism fell in the scholarship schools, but there is no evidence of changes in students' self-perceptions or attitudes toward school.

Many studies have documented that children in some classrooms learn considerably more than children who spend the year in other classrooms at the same grade level. It has proven difficult, however, to determine the extent to which differences in student achievement across classrooms stem from differences in the quality of teachers, the quality of peer groups, class sizes, and governance structures (public versus private). Using a unique dataset providing longitudinal achievement data for a large sample of students who attended public or private elementary schools in Bogota, Colombia, Uribe, Murnane, and Willett examine the roles of teacher quality, class size, peer groups, and governance structure in predicting why, net of family background and prior achievement, the average achievement of children in some classrooms is much higher than that of children in other classrooms.

Ding and Lehrer consider the analysis of data from randomized trials that offer a sequence of interventions and suffer from a variety of problems in implementation. Their context is Tennessee's highly influential randomized class size study, Project STAR. The authors demonstrate how a researcher can estimate the full sequence of dynamic treatment effects using a sequential difference-in-difference strategy that accounts for attrition attributable to observables using inverse probability weighting M-estimators. These estimates allow them to recover the structural parameters of the small class effects in the underlying education production function and to construct dynamic average treatment effects. They present a more complete, and rather different, picture of the effectiveness of reduced class size, and find that accounting for both attrition caused by observables and selection caused by unobservables is crucial with data from Project STAR.
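The weighting idea can be sketched in a few lines. Everything below - the data-generating process, the logistic attrition model, and the treatment effect of 2 points - is invented for illustration and is not Project STAR's design or the authors' estimator:

```python
# Illustrative sketch (synthetic data): inverse-probability-weighted
# difference-in-differences when attrition depends on observables.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
small = rng.integers(0, 2, n)              # randomized small-class indicator
x = rng.normal(size=n)                     # baseline covariate (observable)
y0 = 50 + 5 * x + rng.normal(size=n)       # baseline test score
y1 = y0 + 2 * small + rng.normal(size=n)   # follow-up score with treatment gain

# Attrition depends only on the observable x, not on unobservables.
stay_prob = 1 / (1 + np.exp(-(0.5 + x)))
stay = rng.random(n) < stay_prob

# Step 1: the probability of remaining in the sample. Here the true
# propensity is used; in practice it would be estimated, e.g. by a
# logit of retention on baseline observables.
w = 1 / stay_prob[stay]

# Step 2: weighted difference-in-differences among non-attriters.
d = (y1 - y0)[stay]
t = small[stay].astype(bool)
att = np.average(d[t], weights=w[t]) - np.average(d[~t], weights=w[~t])
print(round(att, 2))  # close to the true effect of 2
```

Weighting each non-attriter by the inverse of its retention probability restores the covariate distribution of the original randomized sample, so the simple difference-in-differences on the stayers is again unbiased when attrition depends only on observables.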

Bayer, Ferreira, and McMillan set out a framework for estimating household preferences over a broad range of housing and neighborhood characteristics, some of which are determined by the way that households sort in the housing market. This framework brings together the treatment of heterogeneity and selection that has been the focus of the traditional discrete choice literature, with a clear strategy for dealing with the correlation of unobserved neighborhood quality with both school quality and neighborhood sociodemographics. The authors estimate the model using rich data on a large metropolitan area, drawn from a restricted version of the Census. The estimates indicate that, on average, households are willing to pay an additional one percent in house prices - substantially lower than in prior work - when the average performance of the local school is 5 percent higher. Also, the full capitalization of school quality into housing prices is typically 70-75 percent greater than the direct effect. This reflects a social multiplier neglected in the prior literature: increases in school quality also raise prices by attracting households with more education and income to the corresponding neighborhood.

In much of the United States, school segregation is increasing even as residential segregation declines. Clotfelter, Ladd, and Vigdor present a model in which a school or district administrator actively manages the degree of interracial contact in public schools in order to accommodate the competing desires of stakeholders such as parents, teachers, and courts. The authors test the central implications of this model using data on the racial composition of every classroom in the state of North Carolina in the 2000-2001 school year. The results suggest that administrators act differently when deciding on policies influencing segregation between schools and within schools, consistent with the fact that judicial regulation usually applies only to racial balance between schools.

The Black-White test score gap widens considerably over the course of a child's school career. Figlio suggests that one explanation for this phenomenon may involve teachers' expectations of Black children. If teachers have lower expectations for Black children with racially-identifiable names, the grading standards literature suggests that these children will learn less, even when staying in school longer than Black children with more homogenized names. He uses unique data from a large Florida school district to study this issue. Comparing Black siblings, one with a racially identifiable name and the other with a more homogenized name, he finds that teachers apparently expect less from Black children with racially identifiable names. Further, these lower expectations apparently lead to lower standardized test scores, but not to fewer years of schooling attained. The results are robust to a variety of specification checks concerning the veracity of using sibling comparisons for identification. Figlio finds that the results are stronger for boys than for girls, and are stronger in schools where Black students are in the minority or where Black teachers are uncommon.

The NBER's Working Group on Behavioral Finance met in Cambridge on November 15. Group Directors Robert J. Shiller, NBER and Yale University, and Richard H. Thaler, NBER and University of Chicago, organized this program:

Josef Lakonishok, NBER and University of Illinois; Inmoo Lee, Korea University; and Allen M. Poteshman, University of Illinois, "Option Market Activity and Behavioral Finance"
Discussant: Nicholas C. Barberis, NBER and University of Chicago

Malcolm Baker, NBER and Harvard University, and Jeffrey Wurgler, NBER and New York University, "Investor Sentiment and the Cross-Section of Stock Returns"
Discussant: Owen Lamont, NBER and Yale University

Campbell, Polk, and Vuolteenaho show that growth stocks' cash flows are particularly sensitive to temporary movements in aggregate stock prices (driven by movements in the equity risk premium), while value stocks' cash flows are particularly sensitive to permanent movements in aggregate stock prices (driven by market-wide shocks to cash flows). Thus the high betas of growth stocks with the market's discount-rate shocks, and of value stocks with the market's cash-flow shocks, are determined by the cash-flow fundamentals of growth and value companies. Growth stocks are not merely "glamour stocks" whose systematic risks are driven purely by investor sentiment.

Lakonishok, Lee, and Poteshman investigate the behavior of investors in the equity option market using a unique and detailed dataset of open interest and volume for all contracts listed on the Chicago Board Options Exchange over the 1990 through 2001 period. They find that for both calls and puts the short open interest of non-market maker investors is substantially larger than the long open interest. They also find that all types of non-market maker investors display trend-chasing behavior in their option market activity. In addition, they show that the least sophisticated group of investors substantially increased their purchases of calls on growth stocks during the stock market bubble of the late 1990s and early 2000, while none of the investor groups significantly increased their purchases of puts during the bubble in order to overcome short-sales constraints in the stock market. A number of these findings are consistent with option market investors being loss averse, framing over narrow segments of their portfolios, and attempting to avoid financial decisions that they will later regret.

Baker and Wurgler examine how investor sentiment affects the cross-section of stock returns. Theory predicts that a broad wave of sentiment will disproportionately affect stocks whose valuations are highly subjective and are difficult to arbitrage. The authors test this prediction by studying how the cross-section of subsequent stock returns varies with proxies for beginning-of-period investor sentiment. When sentiment is low, subsequent returns are relatively high on smaller stocks, high volatility stocks, unprofitable stocks, non-dividend-paying stocks, extreme-growth stocks, and distressed stocks, consistent with an initial underpricing of these stocks. When sentiment is high, on the other hand, these patterns attenuate or fully reverse. These results are consistent with the theory and are unlikely to reflect an alternative explanation based on compensation for systematic risks.

Do investors pay attention to long-term fundamentals? Della Vigna and Pollet consider the case of demographic information. Large cohorts, such as the baby boom, generate forecastable positive demand changes over time to the toys, bicycle, beer, life insurance, and nursing home sectors, to name a few. These demand changes are predictable once a specific cohort is born. In this paper, the authors use lagged consumption and demographic data to forecast future consumption demand growth induced by changes in age structure. They find that these demand forecasts predict profitability by industry. Moreover, forecasted demand growth 5 to 10 years into the future predicts one-year returns by industry. An additional one percentage point of annualized demand growth attributable to demographics induces a 4 to 6 percentage point increase in annual abnormal industry stock returns. The forecastability is stronger for concentrated industries and for the more recent time period. Forecasted consumption growth 0 to 5 years into the future, on the other hand, does not predict stock returns. The results are consistent with short-sightedness with respect to long-run information.
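The headline magnitude can be read as a pooled regression slope of annual industry abnormal returns on demographics-driven forecast demand growth. A minimal synthetic sketch, with the coefficient set to 5 percentage points (inside the reported 4-6 range) and all data invented:

```python
# Illustrative sketch (synthetic panel): one extra percentage point of
# forecast demand growth raises annual abnormal industry returns by
# about 5 percentage points in this made-up data-generating process.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_industries = 30, 48
forecast = rng.normal(0, 1, (n_years, n_industries))   # pp of forecast growth
abn_ret = 5.0 * forecast + rng.normal(0, 10, (n_years, n_industries))

# Pooled OLS slope of returns on the demand forecast.
x = forecast.ravel()
y = abn_ret.ravel()
beta = np.cov(x, y)[0, 1] / np.var(x)
print(round(beta, 1))  # close to the assumed coefficient of 5
```

The paper's point is that only distant (5- to 10-year) forecast growth carries this predictive slope; near-term forecast growth, already impounded in prices, would show a slope near zero in the analogous regression.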

Traditional economic analysis of markets with asymmetric information assumes that the uninformed agents account for the incentives of the informed agents to distort information. Malmendier and Shanthikumar analyze whether investors in the stock market are able to account for such incentives. Security analysts provide investors with information about investment opportunities by issuing buy and sell recommendations. The recommendations are likely to be biased upwards, in particular if an analyst is affiliated with an investment bank that is a recent underwriter of the recommended firm. Using the trading data from the New York Stock Exchange Trades and Quotations database (TAQ), the authors find that large (institutional) investors generate abnormal volumes of buyer-initiated trades after a positive recommendation only if the analyst is unaffiliated. Small traders exert abnormal buy pressure after all positive recommendations, including those of affiliated analysts. The trading behavior of small traders implies losses, since stocks recommended by affiliated analysts perform significantly worse than those recommended by unaffiliated analysts. These results imply that larger investors account for the distortions of recommendations, but small (individual) investors do not. Increased competition among analysts does not remedy the informational distortion or investor reactions.

Brav, Graham, and Michaely survey 384 CFOs and treasurers, and conduct in-depth interviews with an additional two dozen, to determine the key factors that drive dividend and share repurchase policies. The authors find that managers are very reluctant to cut dividends, that dividends are smoothed through time, and that dividend increases are tied to long-run sustainable earnings but much less so than in the past. Rather than increasing dividends, many firms now use repurchases as an alternative. Managers view paying out with repurchases as more flexible than using dividends, permitting a better opportunity to optimize investment. Managers like to repurchase shares when they feel their stock is undervalued and in an effort to affect EPS. Dividend increases and the level of share repurchases generally are paid out of residual cash flow, after investment and liquidity needs are met. Financial executives believe that retail investors have a strong preference for dividends, in spite of the tax disadvantage relative to repurchases. In contrast, executives believe that institutional investors as a class have no strong preference between dividends and repurchases. In general, management views provide at most moderate support for agency, signaling, and clientele hypotheses of payout policy. Tax considerations play only a secondary role. By highlighting where the theory and practice of corporate payout policy are consistent and where they are not, the authors attempt to shed new light on important unresolved issues related to payout policy in the 21st century.

Amil Petrin, NBER and University of Chicago, and James A. Levinsohn, "On the Micro Foundations of Productivity Growth"
Discussant: John C. Haltiwanger, NBER and University of Maryland

Chad Syverson, NBER and University of Chicago; Lucia S. Foster, Census Bureau; and John Haltiwanger, "Reallocation, Firm Turnover, and Efficiency: Selection on Productivity or Profitability?"
Discussant: James Tybout, NBER and Pennsylvania State University

Van Biesebroeck compares five widely used techniques for estimating productivity: index numbers; data envelopment analysis; and three parametric methods - instrumental variables estimation, stochastic frontiers, and semi-parametric estimation. He compares them both directly and in terms of three productivity debates using a panel of manufacturing plants in Colombia. The different methods generate surprisingly similar results. Correlations between alternative productivity estimates invariably are high. All methods confirm that exporters are more productive than others on average and that only a small portion of the productivity advantage is attributable to scale economies. Productivity growth is correlated more strongly with export status, frequent investments in capital equipment, and employment of managers than with the use of imported inputs or foreign ownership. On the debate as to whether aggregate productivity growth is driven by plant-level changes or output share relocation, all methods point to the importance of plant-level changes, in contrast to results from the United States.

Wolfram and her co-authors assess whether an impending restructuring of the electricity industry in the home state of an investor-owned utility encourages that utility to improve efficiency at its generating plants. Under cost-plus regulation, utilities have little incentive to reduce operating costs since they can pass them directly to ratepayers. Restructuring programs increase utilities' exposure to competitive markets for wholesale electricity and ultimately, to competition for their retail customers. Many restructuring programs have been preceded or accompanied by transitional rate freezes, essentially placing the utility under price cap regulation. The authors test the impact of these changes on the operating efficiency of electric generating plants. Using annual plant-level data, they compare changes in non-fuel operating expenses and the number of employees in states that moved quickly to deregulate wholesale markets to those in states that have not pursued restructuring. Their results suggest that utilities in states enacting restructuring may have reduced nonfuel operating expenses; evidence on employment is mixed. Production function estimates suggest no significant changes in fuel efficiency, and provide mixed evidence of changes in the efficiency of labor and maintenance activity.

Bajari, Benkard, and Krainer develop a new approach to measuring changes in consumer welfare attributable to changes in the price of owner-occupied housing. They define an agent's welfare adjustment as the transfer required to keep expected discounted utility constant given a change in current home prices. The authors demonstrate that, up to a first-order approximation, price increases in the existing housing stock cause no aggregate change in welfare. This follows from a simple market-clearing condition: capital gains experienced by sellers are exactly offset by welfare losses to buyers. Welfare losses can occur, however, from price increases in new construction and renovations. The authors show that this result holds (approximately) even in a model that accounts for changes in consumption and investment plans prompted by current price changes. They estimate the welfare cost of house price appreciation to be an average of $127 per household per year over the 1984-98 period.

Levinsohn and Petrin show that the traditional approach to aggregating plant-level productivity has no well-defined unit of measurement. They propose a simple measure that has a readily interpreted economic magnitude. They then describe conditions that must be satisfied by a decomposition of plant-level productivity growth in order to separately identify rationalization effects from real productivity effects. The authors show that the seemingly innocuous choice of a decomposition actually can reverse the economic conclusions one draws. They provide new suggestions for exploring micro-foundations that are complementary to the usual decompositions, and that can be particularly useful when these decompositions fail. Finally, they use recent Chilean data spanning 10 years (1987-96) to illustrate their concerns.

Is selection driven by efficiency or market power differences? Foster, Haltiwanger, and Syverson investigate the nature of selection using data from industries in which they observe both establishment-level quantities and prices. They find, as has been shown in the earlier literature for revenue-based TFP measures, that physical productivity and prices also exhibit considerable within-industry variation. They also show that while physical productivity shares common traits with revenue-based measures, there are important differences. These involve the productivity levels of entrants relative to incumbents and the size of the impact of net entry on productivity aggregates. Furthermore, the authors characterize the dimension(s) of selection and show that both idiosyncratic productivity and demand (price) conditions affect businesses' survival probabilities.

The NBER's Program on International Trade and Investment met at the Bureau's California office on December 5. ITI Program Director Robert C. Feenstra, also of the University of California, Davis, organized this program:

James Anderson, NBER and Boston College, and Eric Van Wincoop, NBER and University of Virginia, "Trade Costs"

Christian Broda, Federal Reserve Bank of New York, and David E. Weinstein, NBER and Columbia University, "Globalization and the Gains from Variety"

Volker Nocke, University of Pennsylvania, and Stephen R. Yeaple, NBER and University of Pennsylvania, "Mergers and the Composition of International Commerce"

Paul Bergin and Alan M. Taylor, NBER and University of California, Davis; and Reuven Glick, Federal Reserve Bank of San Francisco, "Productivity, Tradability, and The Great Divergence"

Wolfgang Keller, NBER and University of Texas, and Carol H. Shiue, University of Texas, "The Origins of Spatial Interaction" (NBER Working Paper No. 10069)

Anderson and Van Wincoop survey trade costs - what we know about them, and what we don't know but may attempt to find out. Partial and incomplete data on direct measures of costs go together with inference on implicit costs from the pattern of trade across countries. Representative margins for full trade costs in rich countries exceed 170 percent. Poor countries face even higher trade costs. There is a lot of variation across countries and across goods within countries, much of which makes economic sense.
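The 170 percent figure is an ad valorem tax equivalent built up multiplicatively from component costs. Using the rough component estimates from the published survey (about 21 percent transport costs, 44 percent border-related barriers, and 55 percent retail and wholesale distribution costs - numbers not repeated in this summary):

```python
# Multiplicative composition of representative trade-cost margins.
# Component values are from Anderson and van Wincoop's published
# survey, quoted here for illustration.
transport, border, distribution = 0.21, 0.44, 0.55
total = (1 + transport) * (1 + border) * (1 + distribution) - 1
print(round(total, 2))  # about 1.70, i.e. a 170 percent total margin
```

The composition is multiplicative rather than additive because each cost is levied on a value that already includes the earlier ones, which is why three moderate margins compound into a total well above their sum.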

Broda and Weinstein show that the unmeasured growth in product variety from U.S. imports has been an important source of gains from trade over the last three decades (1972-2001). Using extremely disaggregated data, the authors show that the number of imported product varieties has increased by a factor of four. They also estimate the elasticities of substitution for each available category at the same level of aggregation, and describe their behavior across time and SITC-5 industries. Using these estimates, the authors develop an exact price index and find that the upward bias in the conventional import price index is approximately 1.2 percent per year - double the estimated impact attributable to hedonic adjustments on the CPI. The magnitude of this bias suggests that the welfare gains from variety growth in imports alone are 2.8 percent of GDP per year.
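A back-of-the-envelope version of the variety correction follows Feenstra's exact-index formula, in which the conventional price index on varieties available in both periods is scaled by the change in the expenditure share of those common varieties. The elasticity and the shares below are hypothetical, not the authors' estimates:

```python
# Hypothetical numbers illustrating the Feenstra-style variety
# adjustment behind an exact import price index.
sigma = 5.0       # assumed elasticity of substitution within a good

# Share of expenditure on varieties available in both periods:
lambda_now = 0.80   # today (new varieties absorb 20% of spending)
lambda_prev = 0.95  # previous period

conventional_index = 1.03   # measured price change on common varieties
variety_adjustment = (lambda_now / lambda_prev) ** (1 / (sigma - 1))
exact_index = conventional_index * variety_adjustment
print(round(variety_adjustment, 4), round(exact_index, 4))
```

Because new varieties absorb a growing share of spending (lambda falls over time), the adjustment factor is below one, so the exact index rises more slowly than the conventional one - the source of the upward bias in the unadjusted import price index described above.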

Nocke and Yeaple provide the first theory that conceptually distinguishes between the two modes in which firms can engage in Foreign Direct Investment (FDI): greenfield investment and cross-border mergers & acquisitions (M&A). In this model, firms differ in their productive capabilities, which can be decomposed into two complementary sets: mobile capabilities that can be transferred abroad, and immobile capabilities that cannot. In contrast to greenfield FDI, cross-border mergers allow a firm access to a foreign firm's country-specific capabilities, and result in the greatest degree of integration into the foreign market. The authors study how firms with different capabilities select different modes of foreign market access: cross-border M&A, greenfield FDI, and exports. The degree to which firms differ in their mobile and immobile capabilities plays a crucial role for the composition of international commerce: depending on whether firms differ in the mobile or immobile capabilities, cross-border mergers may involve the most or the least productive firms. A similar dichotomy obtains when analyzing the effects of country and industry characteristics on the average productivity of firms.

Mitra and Trindade focus on the role of inequality in the determination of trade flows and patterns. With nonhomothetic preferences, when countries are similar in all respects but asset inequality, trade is driven by specialization in consumption, not production, the authors find. These assumptions allow them to generate some interesting international spillover effects of redistributive policies. They also look at the effects of combining inequality and endowment differences on trade flows, and see that this has implications for "the mystery of the missing trade." Next they study a model of monopolistic competition, and find a novel V-shaped relationship between the ratio of inter-industry to intra-industry trade and a country's inequality. Finally, they look at how international differences in factor endowments affect this relationship between the ratio of inter- to intra-industry trade and inequality. Their theory formalizes as well as modifies Linder's conjecture about the relationship between intra-industry trade and the extent of similarity between trading partners.

Economists emphasize the benefits from free trade attributable to international specialization, but typically only narrowly measure what matters to individuals. Critics of free trade, by contrast, focus on the pattern of consumption in society and the nature of goods being consumed, but often fail to take into account the gains from specialization. Janeba develops a new framework to study the effects of trade liberalization on cultural identity and trade in cultural goods. He first describes the process of trade liberalization in the audiovisual sector with an emphasis on the film industry. Traditional political economy approaches or increasing-returns-to-scale models cannot account for the extent and type of state interventions throughout the world. In the theoretical model, cultural identity emerges as the result of the interaction of individual consumption choices. In a Ricardian model of international trade, Janeba shows, trade is not Pareto inferior to autarky, is not Pareto superior to autarky (if the world is culturally diverse under free trade), and everybody within a country can lose from free trade if the country is culturally homogenous under autarky.

Ghironi and Melitz develop a stochastic, general equilibrium, two-country model of trade and macroeconomic dynamics. Productivity differs across individual, monopolistically competitive firms in each country. Firms face some initial uncertainty concerning their future productivity when making an irreversible investment to enter the domestic market. In addition to the sunk entry cost, firms face both fixed and per-unit export costs. Only a subset of relatively more productive firms export, while the remaining, less productive firms only serve their domestic market. This microeconomic structure endogenously determines the extent of the traded sector and the composition of consumption baskets in both countries. Exogenous shocks to aggregate productivity, sunk entry costs, and trade costs induce firms to enter and exit both their domestic and export markets, thus altering the composition of consumption baskets across countries over time. The microeconomic features have important consequences for macroeconomic variables. Macroeconomic dynamics, in turn, feed back into firm-level decisions, further altering the pattern of trade over time. This model generates deviations from purchasing power parity that would not exist without a microeconomic structure with heterogeneous firms. It provides an endogenous, microfounded explanation for a Harrod-Balassa-Samuelson effect in response to aggregate productivity differentials and deregulation. In addition, the deviations from purchasing power parity triggered by aggregate shocks display substantial endogenous persistence for very plausible parameter values, even when prices are fully flexible.

For the first-generation models of very long-run growth, empirical success has been mixed. As growth theory now moves beyond one-sector, one-country models, the "industrial revolution" typically is mapped onto a modern-traditional goods dichotomy with differential productivity in rich and poor countries. Bergin, Glick, and Taylor argue for the usefulness of an alternative framework, in which goods are differentiated by tradability and productivity. The two characteristics can interact, and can help to explain an important, but little noticed, stylized fact: that the Balassa-Samuelson effect has been growing steadily stronger over time. Previous studies have missed this fact; theorists have not employed it as a check on their models; and, being unnoticed, it has gone unexplained. The authors employ a Ricardian continuum-of-goods model to explain this fact and find that endogenous tradability allows for theory and history to be consistent under a wide range of underlying productivity shocks. Moreover, this theory could illuminate studies of economic growth, both past and present.

Geography shapes economic outcomes in a major way. Keller and Shiue use spatial empirical methods to detect and analyze trade patterns in a historical dataset on Chinese rice prices. Their results suggest that spatial features were important for the expansion of interregional trade. Geography dictated, first, over what distances trade was possible in different regions, because the costs of ship transport were considerably lower than those of land transport. Spatial features also influenced the direction in which a trading network expanded. Moreover, this analysis captures the impact of new trade routes, both within and outside the trading areas.

The NBER has now published over 10,000 titles in its Working Paper Series. Virtually all of these papers are available at www.nber.org/papers. The papers are free to residents of third world countries, who generate about one third of our 4,000 daily downloads.

Finding hard copies of the early papers, which did not exist in electronic format, involved some detective work on the part of NBER Research Associate Dan Feenberg and his assistant Inna Shapiro. As Feenberg describes: "When we started putting up the full text of Working Papers in 1996, there was no question in my mind that we wanted to have all working papers back to Number 1 online, but the cost was an obstacle. I did realize that we would get requests for older papers, so we put a note on the website offering to scan and make available any paper for $10. This was surprisingly successful, in that anywhere from 5 to 20 requests came in each week, and my assistant Inna Shapiro scanned them in-house."

Nonetheless, he continues, "almost 100 papers were completely missing from our files. We found about 60 of them at Harvard's Littauer Library and many of the remaining ones at MIT. The IMF Library came up with five papers, and the Federal Reserve Bank of Philadelphia's Library with three, including the crucial NBER Working Paper Number 1, a 132-page blockbuster. At the moment, the only paper clearly still missing is Number 25, "The Covariance Structure of Earnings and the On the Job Training Hypothesis" by John Hause, issued in December 1973. (So, if you have a copy, we'd like to borrow it.)"

The NBER will deliver via email a message about newly available papers to anyone who registers at www.nber.org/new.html.