Previous research argues that earnings quality, measured as the unsigned abnormal accruals, proxies for information asymmetries that affect cost of capital. We examine this argument directly in two stages. In the first stage, we estimate the firm's exposure to an earnings quality factor in the context of a Fama-French three-factor model augmented by the return on a factor-mimicking portfolio that is long in low earnings quality firms and short in high earnings quality firms. In the second stage, we examine whether the earnings quality factor is priced and whether insider trading is more profitable for firms with higher exposure to that factor. Generally speaking, we find evidence consistent with pricing of the earnings quality factor and insiders trading more profitably in firms with higher exposure to that factor.
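As a rough illustration of the first-stage estimation described above (not the paper's actual data or code), the exposure to an earnings-quality factor can be read off a time-series OLS regression of a firm's excess return on the three Fama-French factors plus the factor-mimicking portfolio return. All data below are simulated and the coefficient values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
# Simulated monthly factor returns: market excess return, SMB, HML,
# and an earnings-quality factor (low-EQ minus high-EQ portfolio).
factors = rng.normal(0.0, 0.01, size=(T, 4))
true_betas = np.array([1.0, 0.5, -0.3, 0.8])  # last entry: EQ loading
excess_ret = factors @ true_betas + rng.normal(0.0, 0.005, T)

# First stage: time-series OLS of the firm's excess return on the
# augmented factor model; the last slope is the EQ-factor exposure.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
eq_loading = coef[4]
```

In the second stage, these estimated loadings would be carried into cross-sectional tests of whether the factor is priced.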

This note extends the Bernardo, Talley & Welch (1999) model of legal presumptions to study situations where litigation efforts are spent sequentially rather than simultaneously. The equilibria of the litigation stage are presented as functions of the underlying presumption. The equilibria and comparative statics are shown to be qualitatively similar to those of the simultaneous version. However, sequentiality allows the principal to pre-commit to a litigation strategy, and thus possibly preempt any litigation effort whatsoever by the agent.

We develop a model of a two-division firm in which the "strong" division has, on average, higher quality investment projects than the "weak" division. We show that the firm optimally biases its project selection policy in favor of the weak division, and that this bias is stronger when there is a greater spread in average project quality. The cost of such a policy is that the firm sometimes funds an inferior project, but the benefit is that it motivates the manager of the strong division to set (and meet) more aggressive cash flow targets.

We consider an ascending auction to sell the elements of a matroid. The value of each element is private information to the bidders. Bidding sincerely is an equilibrium of the auction and the elements sold form a maximum weight basis of the matroid. As a corollary we obtain the ascending auction by Ausubel (2004) for selling homogeneous goods with decreasing marginal values.
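The allocation the abstract describes, a maximum-weight basis under sincere bidding, is the one produced by the matroid greedy algorithm. The sketch below (illustrative, not from the paper) takes an independence oracle and demonstrates the uniform-matroid special case, which corresponds to the Ausubel (2004) corollary of selling k homogeneous goods to the k highest-value bidders:

```python
def max_weight_basis(elements, weight, independent):
    """Greedy over a matroid: scan elements in decreasing weight order
    and keep each element that preserves independence. For a matroid
    this yields a maximum-weight basis -- the same set allocated by
    the sincere-bidding equilibrium of the ascending auction."""
    basis = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(basis + [e]):
            basis.append(e)
    return basis

# Uniform matroid of rank k: any set of at most k elements is
# independent. This is the homogeneous-goods case: the k
# highest-value bidders win. Values here are made up.
k = 2
values = {'a': 5, 'b': 9, 'c': 3, 'd': 7}
winners = max_weight_basis(values, values.get, lambda s: len(s) <= k)
# winners contains the two highest-value elements, 'b' and 'd'
```

Swapping in a different independence oracle (e.g., acyclicity for a graphic matroid) covers other instances of the auction.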

Policymakers often express concern that herding by financial market participants destabilizes markets and increases the fragility of the financial system. This paper provides an overview of the recent theoretical and empirical research on herd behavior in financial markets. It addresses the following questions: What precisely do we mean by herding? What could be the causes of herd behavior? What success have existing studies had in identifying such behavior? And what effect does herding have on financial markets?

We show that the introduction of a new asset affects the prices of previously existing assets in a market. Using data from 254 IPOs in 22 emerging markets, we find that stocks in industries that covary highly with the industry of the IPO experience a larger decline in prices relative to other stocks during the month of the IPO. The effects are stronger when the IPO is issued in a market that is less integrated internationally, and when the IPO is big. The evidence supports the idea that the composition of asset supply affects the cross-section of stock prices.

An extraordinary amount of research has been conducted on the determinants and consequences of the development of banking systems and stock markets in the last 10 years. This contrasts sharply with the limited attention the development of corporate bond markets has received, despite its enormous growth during the past decade. Two main reasons for this phenomenon can be given. First, research on the issue of financial development has advanced under the presumption that the internal/external financing margin is critical for the understanding of its consequences. Given that for the vast majority of firms external finance takes the form of bank credit, this has led to the identification of financial development with banking system development on relevance grounds. The second reason has been lack of data. The small amount of research conducted so far has focused primarily on the development of markets for bonds issued by the public sector, where data are more plentiful. Important questions remain unanswered.

We construct a time-variant, price-based measure of trade distortions in 28 manufacturing industries for a large sample of countries over the last four decades, documenting facts consistent with the argument that changes in relative prices are an important channel through which an economy's openness affects outcomes. First, we find that price distortions at the aggregate level are negatively associated with the degree of openness and the speed at which trade volume, exports, and imports grow. The negative association between trade outcomes and price distortions is fundamentally driven by the positive link between openness and price distortions. Second, increases in trade around the time of liberalization are positively related to a decline in aggregate price distortions for the sample of trade-liberalizing countries. Third, the extent of production reallocation is positively associated with the degree of relative price changes that occur around the time of liberalization, particularly across constant-return-to-scale sectors. Fourth, changes in relative prices at the industry level within countries are positively correlated with changes in output, productivity, and firm size, and negatively associated with the number of firms. Finally, we show that the neoclassical theory of trade does a poor job explaining how relative prices change during trade liberalization.

We estimate the parameters of pricing kernels that depend on both aggregate wealth and state variables that describe the investment opportunity set, using FTSE 100 and S&P 500 index option returns as the returns to be priced. The coefficients of the state variables are highly significant and remarkably consistent across specifications of the pricing kernel, and across the two markets. The results provide further strong evidence, consistent with Merton's (1973a) Intertemporal Capital Asset Pricing Model, that state variables in addition to market risk are priced.

We propose a new model to describe consideration, consisting of a multivariate probit model component for consideration and a multinomial probit model component for choice, given consideration. The approach allows one to analyze stated consideration set data, revealed consideration set (choice) data or both, while at the same time it allows for unobserved dependence in consideration among brands. In addition, the model accommodates different effects of the marketing mix on consideration and choice, an error process that is correlated over time, and unobserved consumer heterogeneity in both processes. We attempt to establish the validity of the existing practice of inferring consideration sets from observed choices in panel data. To this end, we collect data in an on-line choice experiment involving interactive supermarket shelves and post-choice questionnaires to measure the choice protocol and stated consideration levels. We show with these experimental data that underlying consideration sets can be reliably retrieved from choice data alone. Next, we estimate the model on IRI panel data. We have two main results. First, compared with the single-stage multinomial probit model, promotion effects are larger when they are included in the consideration stage of the two-stage model. Second, we find that consideration of brands does not covary greatly across brands once we account for observed effects.

This paper investigates whether dividends provide information about earnings quality. Specifically, we examine whether firms that have lower earnings quality, as measured by an accusation of fraud in a Securities and Exchange Commission Accounting and Auditing Enforcement Release, pay dividends less often (and/or increase dividends less often) than similar firms not accused of accounting fraud. Our results are consistent with the alleged fraud firms being less likely to pay dividends prior to the fraud years. This relation is robust to the inclusion of controls for factors thought to be associated with fraud and dividend policy (e.g., growth, leverage, volatility, age of the firm, and others). We obtain similar, although somewhat weaker, results when we examine the dividends paid during the fraud years and the frequency of dividend increases. Thus, overall the evidence is consistent with dividends indicating earnings quality. However, the data also reveal that the alleged fraud firms pay out a total of over $10.5 billion in dividends, or nearly 3% of their pre-fraud market value, while perpetrating the financial accounting fraud. Thus, while dividends do convey information about earnings quality on average, they do not constitute a preventative measure against financial accounting fraud.

Personal preferences and financial incentives make homeownership desirable for most families. Once a family purchases a home, they find it impractical (costly) to frequently change their ownership of residential real estate. Thus, by deciding how much home to buy, a family constrains their ability to adjust their asset allocation between residential real estate and other assets. To analyze the impact of this constraint on consumption, welfare, and post-retirement wealth, we first investigate a representative individual's optimal asset allocation decisions when they are subject to a "homeownership constraint." Next, we perform a "thought experiment" where we assume the existence of a market where a homeowner can sell, without cost, a fractional interest in their home. Now the housing choice decision does not constrain the individual's asset allocations. By comparing these two cases, we estimate the differences in post-retirement wealth and the welfare gains potentially realizable if asset allocations were not subject to a homeownership constraint. For realistic parameter values, we find that a representative homeowner would require a substantial increase in total net worth to achieve the same level of utility as would be achievable if the choice of a home could be separated from the asset allocation decision.

We present a simple, easy to implement methodology for pricing microfinance loans and loan guarantees using publicly available data on loan write-offs by Micro Finance Institutions (MFIs). Our methodology takes into account the selection bias inherent in available data, in that MFIs that do not report loan write-off data are less likely to be strong performers. Our quantitative analysis is consistent with pricing seen in a recent securitization deal. Our analysis suggests how securitization and loan guarantees can greatly expand the supply of funds for microfinance loans.

We present a simple model in which a borrower with no collateral can either borrow from a moneylender with an enforcement technology or from other potential lenders who do not have an enforcement technology and must rely on self-enforcing contracts to ensure that the borrower does not default voluntarily. We show that with many potential lenders without an enforcement technology, an equilibrium does not exist in which they will lend to borrowers without collateral unless they can commit not to extend loans to a borrower who has defaulted on a loan to any other such lender. We argue that franchising can offer such a commitment technology. We show that with both the moneylender and franchisee lenders, the borrower is induced to borrow from the moneylender first even though some franchisee would have charged a lower interest rate. Borrower welfare is higher if a moneylender is present in addition to franchisee lenders.

We study the problem of the manager of a project consisting of two sub-projects or tasks which are outsourced to different subcontractors. The project manager earns more revenue from the project if it is completed faster, but he cannot observe how hard subcontractors work, only the stochastic duration of their tasks. We derive the optimal linear incentive contracts to offer to the subcontractors when the tasks are conducted in series or in parallel. We compare them to the fixed-price contracts often encountered in practice, and discuss when incentive contracts lead to larger performance improvements. We characterize how the incentive contracts vary with the subcontractors' risk aversion and cost of effort, the marginal effect of subcontractor effort, and the variability of task durations. We find that this dependence is sometimes counter-intuitive in nature. For instance, for parallel tasks, if the first agent's task is on the critical path and his variability increases, the project manager should induce the first agent to work less hard and the second agent to work harder.

We study competition between two multi-product firms with distinct production technologies in a market where customers have heterogeneous preferences on a single taste attribute. The mass customizer (MC) has a perfectly flexible production technology and thus can offer any variety within a product space, represented by Hotelling's (1929) linear city. The mass producer (MP) has a more focused production technology, and therefore, it offers a finite set of products in the same space. MP can invest in more flexible technology, which reduces its cost of variety and hence allows it to offer a larger set of products; in the extreme, MP can emulate MC's technology and offer infinite variety. The firms simultaneously decide whether to enter the market, and MP chooses its degree of product-mix flexibility upon entry. Next, MP designs its product line, i.e., the number and position of its products; MC's perfectly flexible technology makes this unnecessary. Finally, both firms simultaneously set prices. We analyze the subgame-perfect Nash equilibrium in this three-stage game, allowing firm-specific fixed and variable costs that together characterize their production technology. We find that an MP facing competition from an MC offers lower product variety compared to an MP monopolist, in order to reduce the intensity of price competition. We also find that MP can survive this competition even if it has higher fixed cost of production technology or higher marginal cost of production or both.

We propose practical methods to test whether respondents use non-compensatory processes and, if so, to infer the details of those processes from either consideration-then-choice or full-rank tasks. Inference is a non-trivial combinatorial problem which has hitherto been computationally infeasible for practical problems. "Greedoid languages" provide a structure and theory to transform this problem and decrease estimation time by a factor of approximately 10^9 for practical 16-aspect problems. Monte Carlo experiments suggest that it is feasible to infer, albeit with noise, the process that respondents use to evaluate product profiles. We test Greedoid inference empirically by asking 481 respondents to evaluate Smart-Phones. The data suggest that most respondents use a lexicographic strategy in this category. Interestingly, allowing respondents to sort profiles by features does not induce more lexicography. Two additional experiments use the Greedoid inference engine to (1) replicate the result that non-compensatory processes become more likely as the number of products increases and (2) examine the impact of pruning profiles adaptively so that the only feasible rank-orders are those that are consistent with linear combinations of partworths. Greedoid methods also provide a non-compensatory conjoint-like method to forecast consumer response and to find the minimum feature levels required to enter a product category. This method is particularly useful in categories where consumers are presented with large numbers of potential choices. Not only do Greedoid estimates appear to predict better (in most cases) than purely compensatory estimates, but the consideration-then-choice task is perceived by respondents as more enjoyable, more accurate, and more interesting. It also saves time and increases completion rates - both of which translate directly into cost savings.
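The lexicographic strategy most respondents are found to use can be made concrete with a small sketch (illustrative only; the aspect names and profiles below are invented, and this is the choice rule being inferred, not the Greedoid inference procedure itself):

```python
def lexicographic_rank(profiles, aspect_order):
    """Rank product profiles by a lexicographic (non-compensatory)
    rule: aspects are checked in a fixed priority order, and the
    first aspect on which two profiles differ decides the ranking --
    no amount of a lower-priority aspect compensates for a missing
    higher-priority one. Each profile maps aspect name -> bool
    (has the desired level or not)."""
    key = lambda p: tuple(not p.get(a, False) for a in aspect_order)
    return sorted(profiles, key=key)

# Hypothetical smartphone profiles; 'keyboard' outranks 'wifi'.
phones = [
    {'name': 'A', 'keyboard': True,  'wifi': False},
    {'name': 'B', 'keyboard': True,  'wifi': True},
    {'name': 'C', 'keyboard': False, 'wifi': True},
]
ranked = lexicographic_rank(phones, ['keyboard', 'wifi'])
# B outranks A on wifi; both outrank C, which lacks the top aspect.
```

Inferring the aspect order that best explains an observed ranking is the combinatorial problem the Greedoid machinery makes tractable.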

Research on the nanoscale has revolutionized areas of science and has begun to have an impact on, and be impacted by, society and economy. We are capturing early traces of these processes in NanoBank, a large scale, multi-year project to provide a public data resource which will link individuals and organizations involved in creating and using nano S&T across a number of activities including publishing, patenting, research funding, and commercial financing, innovation and production. We report preliminary results from our work in progress. Nanotechnology is on a similar trajectory to biotechnology in terms of patents and publication, already accounting for over 2.5% of scientific articles and 0.7% of patents. Joint university-firm research is widespread and increasing. Regional agglomeration is also evident in both science and commercial applications, with the main clusters of firm entry by both new and pre-existing firms forming around major research universities publishing in nanoscience. Nanoscience has been highly concentrated in the United States, a few European countries, and Japan, but China has recently passed Japan in total articles per year and is beginning to have a significant number of highly-cited articles.

Even in the wake of the most sweeping campaign finance reform law to be enacted in three decades, further significant reform is inevitable. Special interest money continues to flow through loopholes in the Act, and the Presidential Election Campaign Fund is near collapse. The next reform should encourage broader participation in the political process by individual citizens, both to dilute the power of special interests and to serve independent democratic values that recent Supreme Court jurisprudence has identified as vital to meaningful reform. We propose adopting a refundable tax credit of $100/taxpayer for political contributions to federal candidates and national parties; the credit would be targeted to lower- and middle-income Americans. A refundable tax credit is equivalent to giving each eligible citizen up to $100 annually to use for political contributions. We also present data about the relative importance of political contributions by special interests (corporate, labor and other PACs) and individuals that undermine many of the assumptions on which past reform has been based and that have not been discussed in the legal literature. The data clearly show that small contributions by individuals are the dominant source of money in campaigns, and that the influence of special interest money is subtle, appearing to "purchase" benefits like access, a place on the agenda, and minor policy details. Working from an accurate picture of who really pays for politics, and drawing from the experience at the federal and state levels with similar tax refund programs, we present the tax credit as a reform that is simple, easy to administer, and likely to improve political participation by average Americans. 
Thus, our proposal, unlike the complicated voucher plan with anonymity put forward by Ackerman and Ayres, is likely to be adopted by Congress; moreover, it will appeal to a bipartisan consensus because it mixes public funding with a decentralized allocation mechanism using a tax subsidy.

This paper examines the effect of judicial ideology on the selection and outcome of telecommunications regulatory cases. Using a dataset on Federal Communications Commission orders and trials from 1990 to 1995, this paper shows that changes in the make-up of the bench of the D.C. Circuit Court of Appeals affect not only who wins the cases, but also the cases selected for litigation. Specifically, firms are more likely to bring cases when the agency decisions are ideologically distant from the bench than when the two actors are close ideologically. Judges, who are subsequently randomly selected, vote ideologically as the firms' actions predict they will, with Republican judges overturning Democratic agency decisions and vice versa. The effect of judicial ideology on case selection is larger than the effect of judicial ideology on case outcomes. Additionally, the paper shows that plaintiff characteristics have little impact in determining case outcomes, but a statistically significant impact on cases selected for litigation. Finally, the paper provides initial evidence that regulatory uncertainty may lead to more litigation.

Innovative industries are often characterized by rapid product turnover. Product longevity may be driven by both a product's position within a market as well as its position within a firm's larger product portfolio. However, we have little understanding of the relative importance of these factors in determining product turnover and how they interact as an industry evolves. Although researchers have invested substantial effort in analyzing firm survival and turnover, there are far fewer studies of the determinants of product survival and turnover. We use hazard rate models and count regression models to describe the behavior of firms and their products with a new and detailed database on the laser printer industry. We show, first, that competition and market structure variables have a large impact on both speeding product exit and delaying product entry. Second, there is some evidence that firms that have maintained a high market share for a number of years keep their products on the market longer than those with lower market share. Finally, firms with high innovative capacity tend to enter markets frequently, but withdraw their products at average rates. Firms with strong brands tend to introduce few products and withdraw their products slowly. With these findings, the paper links product entry and exit decisions to the broader literature on firm strategic and product management.

This research investigated motivational influences associated with age on responses to emotional advertisements. Experiment 1 showed increased liking and recall of emotional ads among older consumers, and that time horizon perspective moderates these age-related differences. Experiment 2 revealed influences of age and time horizon perspective on responses to different types of emotional ads. Ads focusing on avoiding negative emotions were liked and recalled more among older consumers and among young consumers made to have a limited time horizon perspective. This research illustrates the importance of considering age-related differences in information processing due to motivational as well as to cognitive changes.

We use high frequency data and a new econometric methodology to evaluate the effectiveness of controls on capital inflows. We focus on Chile's experience during the 1990s and investigate whether controls on capital inflows reduced Chile's vulnerability to external shocks. We recognize that changes in the controls will affect the way in which different macro variables relate to each other. We take this problem seriously, and we develop a methodology to deal explicitly with it. The main findings may be summarized as follows: (a) A tightening of capital controls on inflows depreciates the exchange rate. (b) We find that the "vulnerability" of the nominal exchange rate to external factors decreases with a tightening of the capital controls. And (c), we find that a tightening of capital controls increases the unconditional volatility of the exchange rate, but makes this volatility less sensitive to external shocks.

In this paper I use a broad multi-country data set to analyze the relationship between restrictions to capital mobility and external crises. The analysis focuses on two manifestations of external crises: (a) sudden stops of capital inflows; and (b) current account reversals. I deal with two important policy-related issues: First, does the extent of capital mobility affect countries' degree of vulnerability to external crises; and second, does the extent of capital mobility determine the depth of external crises - as measured by the decline in growth - once the crises occur? Overall, my results cast some doubts on the assertion that increased capital mobility has caused heightened macroeconomic vulnerabilities. I find no systematic evidence suggesting that countries with higher capital mobility tend to have a higher incidence of crises, or tend to face a higher probability of having a crisis, than countries with lower mobility. My results do suggest, however, that once a crisis occurs, countries with higher capital mobility may face a higher cost, in terms of growth decline.

In this paper I analyze the role of foreign advisors in stabilization programs. I discuss from an analytical perspective why foreigners may help a developing country's government put in place a successful stabilization program. This framework is used to analyze Chile's experience with anti-inflationary policies in the mid 1950s. In 1955-58 Chile implemented a stabilization package with the advice of the U.S. consulting firm of Klein-Saks. The Klein-Saks program took place in a period of acute political confrontation. After what was considered to be an initial success -- inflation declined from 85% in 1955 to 17% in 1957 -- the program failed to achieve durable price stability. I argue that the foreign advisors of the Klein-Saks Mission gave initial credibility to the stabilization program launched in 1955. But providing initial credibility was not enough to ensure success. Congress failed to act decisively on the fiscal front. Consequently the fiscal imbalances that had plagued Chile for a long time were reduced, but not eliminated. I present empirical results on the evolution of inflation, exchange rates and interest rates that support my historical analysis.

In this paper I analyze the relationship between the U.S. dollar and the U.S. current account. I deal with issues of sustainability, and I discuss the mechanics of current account adjustment. The analysis presented in this paper differs from other work in several respects: First, I emphasize the dynamics of the current account adjustment, going beyond computations of the "required" real depreciation of the dollar to achieve sustainability. I show that even if foreigners' (net) demand for U.S. assets continues to increase significantly, the current account deficit is likely to experience a large decline in the (not too distant) future. Second, I rely on international evidence to explore the likelihood of an abrupt decline in capital flows into the U.S. And third, I analyze the international evidence on current account reversals, to investigate the potential consequences of a (possible) sudden stop of capital flows into the U.S. This analysis suggests that the future adjustment of the U.S. external accounts is likely to result in a significant reduction in growth.

The future of the U.S. current account - and thus of the U.S. dollar - depends on whether foreign investors will continue to add U.S. assets to their investment portfolios. However, even under optimistic scenarios, the U.S. current account deficit is likely to go through a significant reversal at some point in time. This adjustment may be as large as 4% to 5% of GDP. In order to have an idea of the possible consequences of this type of adjustment, I have analyzed the international evidence on current account reversals using both non-parametric techniques as well as panel regressions. The results from this empirical investigation indicate that major current account reversals have tended to result in large declines in GDP growth. I also analyze the large U.S. current account adjustment of 1987-1991.

Previous studies of employee investment in retirement plans suggest that people typically "naively diversify" their investment funds, tending to allocate 1/n of the total to each of n available instruments (Benartzi & Thaler, 2001). In this paper we provide experimental evidence that this bias extends to allocation among simple chance prospects and demonstrate three new violations of rational choice theory implied by use of the naive diversification strategy. Study 1 demonstrates "partition dependence" in which participants' allocations among a fixed set of investments vary with the hierarchical structure of the option set (e.g., by vendor and instrument). Study 2 demonstrates "unit dependence" in which participants' preferred allocations vary with the metric in which the investment is reported (dollars versus number of shares). Study 3 demonstrates "procedure dependence" in which the bias toward even allocation disappears if participants are asked to choose from a menu of possible portfolios. We show that these results extend to sophisticated participants, simple well-specified gambles and incentive-compatible payoffs. We close with a discussion of theoretical and prescriptive implications.
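The arithmetic behind the 1/n heuristic, and the partition dependence of Study 1, can be sketched as follows (a toy illustration with made-up fund names, not the paper's experimental design):

```python
def naive_allocation(options):
    """The 1/n heuristic: spread the budget evenly over n options."""
    n = len(options)
    return {o: 1.0 / n for o in options}

def partitioned_allocation(groups):
    """Naive diversification applied hierarchically: 1/k of the
    budget to each of k groups, then split evenly within each group.
    The final allocation depends on how the same options are
    partitioned -- 'partition dependence'."""
    alloc = {}
    for options in groups.values():
        for o in options:
            alloc[o] = (1.0 / len(groups)) / len(options)
    return alloc

flat = naive_allocation(['stock_A', 'stock_B', 'bond_C'])
grouped = partitioned_allocation({'stocks': ['stock_A', 'stock_B'],
                                  'bonds': ['bond_C']})
# The identical option set yields bond_C a 1/3 share when presented
# flat, but a 1/2 share when the menu is grouped by asset class.
```

A rational allocator's split would be invariant to such relabeling of the menu, which is why this counts as a violation of rational choice theory.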

Using a unique sample of commercial loans and mergers between large banks, we provide micro-level (within-county) evidence linking credit conditions to economic development and find a spillover effect on crime. Neighborhoods that experienced more bank mergers are subjected to higher interest rates, diminished local construction, lower prices, an influx of poorer households, and higher property crime in subsequent years. The elasticity of property crime with respect to merger-induced banking concentration is 0.18. We show that these results are not likely due to reverse causation, and confirm the central findings using state branching deregulation to instrument for bank competition.

We develop a model of a firm owned by shareholders and administered by managers who may be either honest or dishonest. When managers have an informational advantage but shareholders retain control, dishonest managers can make false reports that distort investment and thereby reduce firm cash flows. When dishonest managers have privileged access to both information and control, firm value is further reduced and profits are diminished especially in the worst states of the world. Ineffective corporate governance combined with corruption (dishonesty) thus increases firms' exposure to systematic risk. In a cross-country empirical test of the model, we find that corruption substantially increases firm betas, particularly in countries with weak shareholder rights. Moving from the level of corruption in Canada to that in South Korea raises industry-adjusted betas by 0.35.

We examine the impact of asset liquidation value on debt contracting using a unique set of commercial property loan contracts. We employ commercial zoning regulation to capture the flexibility of a property's permitted uses as a measure of an asset's redeployability or value in its next best use. Within a census tract, more redeployable assets receive larger loans with longer maturities and durations, lower interest rates, and fewer creditors, controlling for the property's current value, its type, and neighborhood. These results are consistent with incomplete contracting and transaction cost theories of liquidation value and financial structure.

This study analyzes the automobile purchase behavior of all residents of two Finnish provinces over several years. It finds that a consumer's purchases are strongly influenced by the purchases of his neighbors, particularly purchases in the recent past and by neighbors who are geographically most proximate. There is little evidence that emotional biases, like envy or an urge to conform, lie behind the interpersonal influence in automobile consumption. The most reasonable alternative explanation for these findings is some form of information sharing among neighbors.

This article demonstrates how the critical components of customer equity, viz. acquisition and retention rates, may be derived from readily available sales transactions data. Thus we can develop marketing-mix models that investigate the customer equity implications of product-marketing decisions. An application in the luxury-automobile category reveals that sales effectiveness and customer equity effectiveness may be quite different, and that marketing actions that are sales effective may have an adverse impact on a brand's customer equity.
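The tally described above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the (customer, period) records and the period-over-period definitions of retention and acquisition are hypothetical simplifications.

```python
# Sketch: tallying acquisition and retention rates from raw sales
# transactions. The (customer_id, period) records are hypothetical.
from collections import defaultdict

transactions = [
    ("c1", 1), ("c2", 1), ("c3", 1),
    ("c1", 2), ("c2", 2), ("c4", 2),
    ("c1", 3), ("c4", 3), ("c5", 3),
]

buyers = defaultdict(set)
for cust, period in transactions:
    buyers[period].add(cust)

rates = {}
seen = set(buyers[1])               # customers observed so far
for period in sorted(buyers)[1:]:
    current, prev = buyers[period], buyers[period - 1]
    retention = len(current & prev) / len(prev)       # repeat buyers
    acquisition = len(current - seen) / len(current)  # first-time buyers
    rates[period] = (retention, acquisition)
    seen |= current
```

Feeding rates like these into a marketing-mix model is what lets sales effectiveness and customer-equity effectiveness be compared, as the abstract describes.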

The question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling addresses the problem of long-run market-response quantification by combining into one measure of "net long-run impact" the chain reaction of consumer response, firm feedback and competitor response that emerges following the initial marketing action. In this paper, we (i) summarize recent marketing-strategic insights that have been accumulated through various persistence modeling applications, (ii) provide an introduction to some of the most frequently used persistence modeling techniques, and (iii) identify some other strategic research questions where persistence modeling may prove to be particularly valuable.

We consider the implications of a regime change regarding the timing of reports of trades made by corporate insiders. Presently, insiders report their trades after the fact; however, recently there have been renewed calls for insiders to pre-announce their trades to curb insiders' ability to profit from their information advantage. Pre-announcement by insiders removes noise trades as a source of disguise, implying that insiders can no longer realize such profits. Of course, insiders also trade for other, unobservable motives. This too is a source of disguise, but a dysfunctional one because it serves as an incentive to distort trades from those that would be optimal otherwise. As a consequence, insiders may be predisposed to favor accounting standards that expand public disclosures pertaining to firm value. A mitigating factor is the price risk caused by disclosures made in advance of trade. These phenomena are present even without pre-announcement and become more prominent as markets become thinner.

Production externalities pose interesting issues regarding task assignments in multi-task, multi-agent settings. The rationale for the past practice of assigning agents similar tasks was the view that there are gains to repetition. However, there has been a trend in recent times toward assigning agents diverse tasks in order to exploit complementarities. Illustrations of this trend include combining sales and service in the same person rather than assigning these duties to different individuals, organizing credit review and loan pricing by case rather than by stage in the credit-granting process, bundling production and testing operations rather than separating those tasks, and having engineers contribute to the design of several different product components rather than having them specialize in particular components. Although the prevailing view seems to favor diverse task assignments when it comes to production externalities, we remain agnostic on whether externalities under either task-assignment choice are beneficial or dysfunctional.

Business information services are intermediaries that collect, collate, package, and distribute information of value to professional users. We consider two technologies that such intermediaries may use for delivering information: a packaged design, which distributes information on physical media such as CD-ROMs, and an online service, which delivers the information via the Internet or other online networks. We model a market where subscribers may choose between "self-service," where they collect and collate information directly from sources, and a third-party service provider offering either a packaged design or an online service. Subscribers are indexed by their volume of usage for the service. In a duopoly, we show that providers with online or package technologies will serve different market segments. The package provider's limited ability to provide current information, combined with decreasing search costs in an online service, will make the package provider increasingly vulnerable to being driven out of the market by the online provider.

The "resource-based view of the firm" has become an important conceptual framework in strategic management but has been widely criticized for lack of an empirical base. To address this deficit, we utilize a new method for identifying inter-firm differences in efficiency within the context of stochastic frontier production functions. Using data on Japanese and U.S. automobile manufacturers, we develop measures of resources and capabilities and test for linkages with firm performance. The results show the influence of manufacturing proficiency and scale economies at the firm and plant level. We apply the parameter estimates to account for Toyota's superior efficiency relative to other producers.

We forecast analyst earnings forecast errors out of sample. Forecasting variables include three "under-reaction" variables (earnings surprise, stock returns, and analyst earnings forecast revisions) and seven "overreaction" variables (book-to-price ratio, forward earnings-to-price ratio, analyst long-term growth forecast, sales growth, investments in property, plant, and equipment, investments in other long-lived assets, and the accrual component of earnings). Estimation procedures include traditional OLS as well as a more robust procedure that minimizes the sum of absolute errors (LAD). While in sample we find significant predictive power using both OLS and LAD, we find far stronger results using LAD out of sample, with an average reduction in forecast errors of over thirty percent measured by mean squared error and nearly ten percent measured by mean absolute error. Most of the predictive power comes from firms whose analysts are too optimistic. The stock market seems to understand the inefficiencies in analyst forecasts: a trading strategy based on the predicted analyst forecast errors does not generate abnormal profits. Conversely, analysts seem to fail to understand the inefficiencies present in stock prices: a trading strategy directly based on the predicted stock returns generates significant abnormal returns, and the abnormal returns are associated with predictable analyst forecast errors. The combined evidence suggests that the stock market is more efficient than financial analysts in processing public information.
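The OLS/LAD contrast can be illustrated on simulated data. This is only a sketch of the estimation idea, not the paper's specification: LAD is computed here by iteratively reweighted least squares, and the regressors and heavy-tailed noise are invented.

```python
# Sketch: OLS versus LAD (least absolute deviations) regression.
# LAD is fit by iteratively reweighted least squares (an MM scheme
# for the L1 objective); all data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta_true = np.array([0.5, 1.0, -2.0, 0.0])
# heavy-tailed noise: the setting where LAD is more robust than OLS
y = X @ beta_true + rng.standard_t(df=2, size=n)

# OLS: minimizes the sum of squared errors
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# LAD: each pass solves a weighted least squares with weights 1/|r|
beta_lad = beta_ols.copy()
for _ in range(200):
    r = np.abs(y - X @ beta_lad)
    w = 1.0 / np.maximum(r, 1e-8)   # clamp to avoid division by zero
    Xw = X * w[:, None]
    beta_lad = np.linalg.solve(X.T @ Xw, Xw.T @ y)

mae_ols = np.mean(np.abs(y - X @ beta_ols))
mae_lad = np.mean(np.abs(y - X @ beta_lad))
```

By construction LAD attains an in-sample mean absolute error no worse than OLS, which is the property the paper exploits out of sample.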

We study an important recent series of buyback auctions conducted by the U.S. Treasury in retiring $67.5 billion of its debt. We find that the Treasury was successful in buying back large amounts of illiquid debt while suffering only a small volatility-related market-impact cost. Although the Treasury had the option to "cherry pick" from among the bonds offered, we find that the Treasury was actually penalized for being "spread too thin" by including multiple bonds in a buyback auction. We find evidence that the Treasury may have attempted to minimize its interest expense rather than its buyback costs in these auctions. There is no evidence, however, that the Treasury used its "timing" option to exploit auction participants.

We solve a model with two "Lucas trees." Each tree has i.i.d. dividend growth. The investor has log utility and consumes the sum of the two trees' dividends. This model produces interesting asset-pricing dynamics, despite its simple ingredients. Investors want to rebalance their portfolios after any change in value. Since the size of the trees is fixed, however, prices must adjust to offset this desire. As a result, expected returns, excess returns, and return volatility all vary through time. Returns display serial correlation, and are predictable from price-dividend ratios in the time series and in the cross section. Return volatility can be greater than the volatility of cash flows, giving the appearance of "excess volatility." Returns can be cross-correlated even when the cash flows are independent, giving the appearance of "contagion" or "spurious comovement."
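The fixed-supply mechanism can be seen in the standard log-utility valuation; the following continuous-time sketch (discount rate rho is my notation, not the paper's) shows why prices rather than portfolio shares must adjust.

```latex
P_{i,t} \;=\; E_t \int_0^{\infty} e^{-\rho s}\,
  \frac{u'(C_{t+s})}{u'(C_t)}\, D_{i,t+s}\, ds
\;=\; E_t \int_0^{\infty} e^{-\rho s}\,
  \frac{C_t}{C_{t+s}}\, D_{i,t+s}\, ds,
\qquad C_t = D_{1,t} + D_{2,t}.
```

Summing over both trees, the dividends inside the integral add up to consumption, so \(P_{1,t} + P_{2,t} = C_t / \rho\): the total wealth-consumption ratio is constant under log utility. Any change in one tree's valuation must therefore be offset by the other, even though each tree's own price-dividend ratio varies with its dividend share.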

School districts have made several attempts at decentralizing. However, decentralization in school districts can mean so many different things that the term has nearly lost its meaning. This paper reports a study of three large urban school districts that, over almost thirty years, adopted nearly identical approaches to decentralizing, granting control to principals and expanding freedom of choice for families. In all three cases student achievement improved, although the sample of districts is very small. These three districts are compared to the three largest public districts in North America. The comparisons reveal that the three decentralized districts attained a high level of principal control over school budgets, staffing, schedules, and teaching methods.

This paper examines the longer-term performance of IPOs from the expiration of the stabilization period to three years after the IPO. It seeks to determine whether employee stock options (ESOs) affect long-term market-adjusted performance after controlling for other influential factors previously uncovered in the IPO and compensation literature, such as executive cash compensation, profitability, age, size, venture capital backing, underwriter ranking, industry, book-to-market, analyst growth forecasts, and underpricing. We find that new public companies granting extensive ESOs outperform companies granting few ESOs for two years after the IPO. In addition, among IPOs with extensive ESOs, IPOs with high executive equity ownership outperform IPOs with low executive equity ownership. We offer a theoretical explanation for these results based on managerial risk aversion and compensation arrangements.

We examine the stock market reaction to inter-corporate (984 contractors and 575 contractees) and corporate-government (1963 contractors) contract announcements reported by Dow Jones between January 1, 1990 and December 31, 2000. Around contract announcement dates, we find statistically significant positive average abnormal returns for contractors, but insignificant average abnormal returns for contractees. Cross-sectionally, contract announcement period returns are higher for contractors who win larger and longer-term contracts, are relatively small, previously had slower growth but higher profitability, have many competitors, and are in riskier lines of business. Contract-awarding firms have higher announcement period returns when their grants are relatively short term and when they previously had higher growth and lower profitability. The results for contract-winning firms are consistent with two explanations: winning a contract reveals that the firm is a low-cost producer, and the winner might also earn quasi-rents induced by the winner's-curse influence on contract bids.

The capacity of an asset market to accommodate order imbalances, a measure of market efficiency, is inversely related to the predictability of returns from previous order flows. For a comprehensive sample of NYSE stocks that traded every day from 1993 through 2002, we find that very short-term (five-minute) return predictability is strong on average but has undergone a secular decline interspersed with sizable fluctuations. Liquidity facilitates efficiency, in the sense that the market's capacity to accommodate order flow is larger during periods when the market is more liquid. We also examine two interday measures of informational efficiency, variance ratios of open-to-close/close-to-open returns and first-order daily autocorrelations, across the three tick-size regimes, which correspond to successive increases in liquidity. Variance ratios increased but autocorrelations declined (particularly for smaller firms) as the minimum tick size decreased, which suggests that volatility induced by private information during trading hours increased along with liquidity.
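The two interday measures are simple to compute. The sketch below uses simulated overnight and trading-hours returns (the volatilities are invented), not the paper's data or exact definitions.

```python
# Sketch: the two interday efficiency measures from the abstract,
# computed on simulated daily returns.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
open_to_close = rng.normal(0, 0.010, n)  # trading-hours log returns
close_to_open = rng.normal(0, 0.005, n)  # overnight log returns

# variance ratio: trading-hours volatility (which embeds private
# information revealed through trading) relative to overnight volatility
variance_ratio = open_to_close.var(ddof=1) / close_to_open.var(ddof=1)

# first-order autocorrelation of daily (close-to-close) returns
daily = open_to_close + close_to_open
rho1 = np.corrcoef(daily[:-1], daily[1:])[0, 1]
```

In this simulation the ratio exceeds one because trading-hours volatility dominates; the paper's finding is that this ratio rose, while autocorrelations fell, as the minimum tick size decreased.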

Deviations from no-arbitrage relations should be related to frictions associated with transacting; in particular to market illiquidity, because frictions impede arbitrage. Thus, financial market liquidity may play a key role in moving prices to fair values. At the same time, a wide futures/cash basis may trigger arbitrage trades and thereby affect liquidity. We test these ideas by studying the joint dynamic structure of aggregate NYSE market liquidity and the NYSE Composite index futures basis for a relatively long time-period, over 3000 trading days. We find that liquidity and the basis forecast each other in addition to being contemporaneously correlated. There is evidence of two-way Granger causality between the short-term absolute basis and effective spreads, and quoted and effective spreads Granger-cause longer-term absolute bases. These results are preserved after including a proxy for arbitrage financing costs, the Federal Funds rate, which bears an independent positive and significant relation with the short-term absolute basis. Impulse response functions indicate that shocks to the absolute basis predict future stock market liquidity. Overall, the evidence suggests that stock market liquidity enhances the efficiency of the futures/cash pricing system.
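A Granger-causality check of the kind reported can be sketched as follows. The data are simulated so that one series (standing in for the absolute basis) helps predict the other (standing in for spreads); the one-lag specification is an illustrative simplification.

```python
# Sketch: testing whether lagged x Granger-causes y, via an F test
# comparing restricted and unrestricted OLS regressions. Simulated
# data; x stands in for the absolute basis, y for effective spreads.
import numpy as np

rng = np.random.default_rng(2)
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def ssr(X, z):
    """Sum of squared residuals from an OLS fit of z on X."""
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ b
    return r @ r

ones = np.ones(T - 1)
# restricted: y on its own lag; unrestricted: add lagged x
restricted = ssr(np.column_stack([ones, y[:-1]]), y[1:])
unrestricted = ssr(np.column_stack([ones, y[:-1], x[:-1]]), y[1:])
# F statistic for H0: lagged x has no predictive content for y
F = (restricted - unrestricted) / (unrestricted / (T - 1 - 3))
```

A large F rejects the null of no Granger causality; running the test in both directions gives the two-way pattern the abstract describes.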

We investigate the risk and return of a wide variety of trading strategies involving options on the S&P 500. We consider naked and covered positions, straddles, strangles, and calendar spreads, with different maturities and levels of moneyness. Overall, we find that strategies involving short positions in options generally compensate the investor with very high Sharpe ratios, which are statistically significant even after taking into account the non-normal distribution of returns. Furthermore, we find that the strategies' returns are substantially higher than warranted by asset pricing models. We also find that the returns of the strategies could only be justified by jump risk if the probability of market crashes were implausibly higher than it has been historically. We conclude that the returns of option strategies constitute a very good deal. However, exploiting this good deal is extremely difficult. We find that trading costs and margin requirements severely condition the implementation of option strategies. Margin calls force investors out of a trade precisely when it is losing money. Taking margin calls into account turns the Sharpe ratio of some of the best strategies negative.

We propose a novel approach to optimizing portfolios with large numbers of assets. We model directly the portfolio weight in each asset as a function of the asset's characteristics. The coefficients of this function are found by optimizing the investor's average utility of the portfolio's return over the sample period. Our approach is computationally simple, easily modified and extended, produces sensible portfolio weights, and offers robust performance in and out of sample. In contrast, the traditional approach of first modeling the joint distribution of returns and then solving for the corresponding optimal portfolio weights is not only difficult to implement for a large number of assets but also yields notoriously noisy and unstable results. Our approach also provides a new test of the portfolio choice implications of equilibrium asset pricing models. We present an empirical implementation for the universe of all stocks in the CRSP-Compustat dataset, exploiting the size, value, and momentum anomalies.
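The idea of parameterizing weights directly can be sketched with one characteristic. Everything below is illustrative: a single simulated characteristic, CRRA utility with an invented risk aversion, and a grid search standing in for the paper's optimization.

```python
# Sketch: portfolio weights as benchmark (1/N) plus a linear tilt in
# a standardized characteristic; the tilt coefficient maximizes
# average in-sample CRRA utility. All data are simulated.
import numpy as np

rng = np.random.default_rng(3)
T, N, gamma = 120, 50, 5.0
x = rng.normal(size=(T, N))                        # characteristic
r = 0.01 + 0.005 * x + rng.normal(0, 0.1, (T, N))  # returns tilt with x

def avg_utility(theta):
    w = 1.0 / N + (theta * x) / N     # policy weights, one theta for all assets
    rp = (w * r).sum(axis=1)          # portfolio return each period
    return np.mean((1 + rp) ** (1 - gamma) / (1 - gamma))

# one-dimensional grid search over the tilt coefficient
grid = np.linspace(-10, 10, 401)
theta_hat = grid[np.argmax([avg_utility(t) for t in grid])]
```

Because only the tilt coefficients are estimated (here a single scalar), the problem stays low-dimensional no matter how many assets are in the cross section, which is the computational advantage the abstract claims.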

We use a novel pricing model to filter time series of diffusive volatility and jump intensity from S&P 500 index options. These two measures capture the ex-ante risk assessed by investors. We find that both components of risk vary substantially over time, are quite persistent, and correlate with each other and with the stock index. Using a simple general equilibrium model with a representative investor, we translate the filtered measures of ex-ante risk into an ex-ante risk premium. We find that the average premium that compensates the investor for the risks implicit in option prices, 10.1 percent, is about twice the premium required to compensate the same investor for the realized volatility, 5.8 percent. Moreover, the ex-ante equity premium that we uncover is highly volatile, with values between 2 and 32 percent. The component of the premium that corresponds to the jump risk varies between 0 and 12 percent.

I analyze how disclosure policies and managerial attributes interact to influence stock prices, firm values, and the liquidity of financial markets. I adopt the reasonable premises that high cognitive ability assists in value-creation within private corporations, and psychic costs in the sense of Becker (1976) limit (but do not eliminate) the extent of misrepresentation in disclosures. Managerial cognitive ability, the attribute which facilitates the successful functioning of firms, also enhances the odds of success of strategies which mislead large numbers of agents who observe the firm's disclosure statements. Further, agents with greater cognitive ability have higher reservation wages, which increase their incentives to overstate firm value. These features cause the equilibrium degree of misrepresentation in disclosures to increase with managerial cognitive capacity (or intellect). Equilibrium efforts at improving true expected values of firms are limited by expected gains from misrepresentation. In a setting where stock prices influence real investment, I show that such confounding disclosures may actually improve ex post firm values. I then argue that agents may have inadequate incentives to acquire information in firms run by managers who are effective at misrepresenting their firms in disclosure statements. This indicates that contrary to findings in the extant theoretical literature, there may be a positive relation between liquidity and the degree of information asymmetry between management and outside investors.

This paper explores liquidity spillovers in market-capitalization-based portfolios of NYSE stocks. Return, volatility, and liquidity dynamics across the small- and large-cap sectors are modeled by way of a vector autoregression model, using data that span more than 3000 trading days. We find that volatility and liquidity innovations in either sector are informative in predicting liquidity shifts in the other. Impulse responses indicate the existence of persistent liquidity, return, and volatility spillovers across the large- and small-cap sectors. Lead and lag patterns across small- and large-cap stocks are stronger when spreads in the large-cap sector are wider. Consistent with the notion that private informational trading in large-cap stocks is transmitted to other stocks with a lag, order flows in the large-cap decile significantly predict both transaction-price-based and mid-quote returns of small-cap deciles when large-cap spreads are high.
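The VAR-plus-impulse-response machinery can be sketched in miniature. The bivariate system below is simulated with invented spillover coefficients; it is not the paper's (larger) specification.

```python
# Sketch: estimate a bivariate VAR(1) by OLS and trace the impulse
# response of variable 0 (say, small-cap liquidity) to a unit shock
# in variable 1 (large-cap liquidity). Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
A_true = np.array([[0.6, 0.2],   # small-cap equation: own lag, spillover
                   [0.1, 0.5]])  # large-cap equation
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(size=2)

# OLS estimate of the VAR(1) coefficient matrix
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# impulse response: iterate the estimated system on a unit shock
h = 10
irf = np.empty((h + 1, 2))
irf[0] = [0.0, 1.0]
for i in range(h):
    irf[i + 1] = A_hat @ irf[i]
```

A positive, slowly decaying response of the first variable to the second is the persistent cross-sector spillover pattern the abstract reports.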

We provide a model with overconfident risk neutral investors in which (i) low book-to-market firms have high betas but on average earn low returns; (ii) a factor-mimicking portfolio such as HML earns positive expected returns; (iii) such a portfolio loads on fundamental macroeconomic variables; (iv) the loadings of securities on such portfolios positively forecast cross-sectional future returns; and (v) loadings on such a portfolio have incremental power to predict returns after controlling for characteristics. Thus, an empirical finding that covariances beat characteristics does not provide any greater support for a rational risk theory over a behavioral setting with no risk premia. The analysis also reconciles the high risk (market betas) of low book-to-market firms with their low expected returns, and offers new empirical implications that distinguish behavioral and rational pricing approaches.

How do the research agendas of the Information Systems field actually get decided? Here we consider the implications of the field's embeddedness within its larger institutional milieu. We argue that the field's research directions can be understood as responses to institutionally constituted market forces that arise both within academia and in the larger economy and society. Further, the academic discourse associated with any particular research stream is shaped by the workings of these forces, in ways we have yet to fully understand. We make four proposals for reflexive-type inquiries that might advance this understanding.

The authors argue that Christian and Hindu cultural traditions give rise to different conceptions of supernatural forces and, hence, to differences in everyday fatalistic thinking. Differences between the Christian deity-centered and Hindu destiny-centered worldviews were predicted in fatalistic attributions for misfortune and strategies for coping with risk. Attributing others' misfortunes to supernatural causes should depend on prior information about the others' misdeeds for Christians but not for Hindus. Regarding strategies for coping with future risk, Christians should prefer petitionary prayer whereas Hindus should prefer divination. Evidence for these hypotheses was found in cross-national comparisons of matched groups (Study 1), and comparisons between religious groups in the same US city (Studies 2, 3). Study 4 found that fatalistic thinking of bicultural individuals could be shifted when addressed in a Hindi versus a standard American accent.

We investigated two types of metaphors in stock market commentary. Agent metaphors describe price trends as volitional actions, whereas object metaphors describe them as movements of inanimate objects. Study 1 examined the consequences of metaphoric description for the investor audience. Agent metaphors, compared with object metaphors and nonmetaphoric descriptions, caused investors to expect price trend continuance. The remaining studies examined preconditions, the features of a trend that evoke agent versus object metaphors. We hypothesized that the rate of agent metaphors would depend on the trend direction (upday vs. downday) and steadiness (steady vs. unsteady). Two archival studies tracked the metaphoric content in end-of-day CNBC commentary as a function of daily price trajectories. As predicted, agent metaphors were more likely for uptrends than downtrends and especially so when the trends were relatively steady. This held for both bull (Study 2) and bear market periods (Study 3). Study 4 replicated these findings in a laboratory study where participants took the role of a stock market commentator.