More from these Authors

In 2014 Alibaba debuted on the New York Stock Exchange in the largest IPO in history, but only after its initial bid to list on the Hong Kong Stock Exchange was denied because the company sought to preserve its partners' control over decision rights. Why did Hong Kong refuse Alibaba's requests to list dual-class shares or to allow its partners to nominate a majority of the board of directors, turning away a superstar in the process? Why did American stock markets approve Alibaba's governance structures despite the warnings of many governance experts? How can investors ensure that their capital will be deployed effectively by the company's top management?

We compare the performance of a comprehensive set of alternative peer identification schemes used in economic benchmarking. Our results show that the peer firms identified from the aggregation of informed agents' revealed choices in Lee, Ma, and Wang (2014) perform best, followed by peers with the highest overlap in analyst coverage, in explaining cross-sectional variation in base firms' out-of-sample: (a) stock returns, (b) valuation multiples, (c) growth rates, (d) R&D expenditures, (e) leverage, and (f) profitability ratios. Conversely, peer firms identified by Google and Yahoo Finance, as well as product market competitors gleaned from 10-K disclosures, perform consistently worse. We contextualize these results in a simple model that predicts when information aggregation across heterogeneously informed individuals is likely to improve economic benchmarking.
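The evaluation idea in this abstract can be sketched in a few lines: for each peer scheme, regress a base-firm characteristic on the corresponding peer average and compare explanatory power. The sketch below is purely illustrative; the firm data, the two stylized schemes ("informed" vs. "noisy" peers), and all noise levels are assumptions, not the paper's actual data or tests.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms = 200

# Simulated latent driver of a firm characteristic (e.g., a valuation multiple)
true_factor = rng.normal(size=n_firms)
base_value = true_factor + 0.5 * rng.normal(size=n_firms)

def r_squared(y, x):
    """R^2 from a univariate OLS of y on x (with intercept)."""
    x1 = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(x1, y, rcond=None)
    resid = y - x1 @ beta
    return 1 - resid.var() / y.var()

# Stylized scheme A: "informed" peers whose average tracks the true factor closely
peer_avg_informed = true_factor + 0.3 * rng.normal(size=n_firms)
# Stylized scheme B: noisy peers whose average is mostly unrelated noise
peer_avg_noisy = 0.3 * true_factor + rng.normal(size=n_firms)

# A better peer scheme yields higher cross-sectional explanatory power
r2_informed = r_squared(base_value, peer_avg_informed)
r2_noisy = r_squared(base_value, peer_avg_noisy)
```

In this toy setup the informed-peer average explains far more cross-sectional variation in `base_value` than the noisy one, which is the kind of horse race the abstract describes across schemes and across characteristics (a)–(f).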

We develop and implement a rigorous analytical framework for empirically evaluating the relative performance of firm-level expected-return proxies (ERPs). We show that superior proxies should closely track true expected returns both cross-sectionally and over time; that is, they should exhibit lower measurement-error variances along both dimensions. We then compare five classes of ERPs nominated in recent studies to demonstrate how researchers can easily implement our two-dimensional evaluative framework. Our empirical analyses document a tradeoff between time-series and cross-sectional ERP performance, indicating that the optimal choice of proxy may vary across research settings. Our results illustrate how researchers can use our framework to critically evaluate and compare a growing body of ERPs.
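The two-dimensional evaluation can be illustrated with a small simulation: score each proxy by the variance of its measurement error across firms (cross-sectional) and within firms over time (time-series). Everything below is a hypothetical sketch; the simulated "true" expected returns and the two stylized proxies are assumptions chosen only to exhibit the tradeoff the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_periods = 100, 60

# Simulated "true" expected returns (unobservable in practice)
true_er = 0.08 + 0.02 * rng.normal(size=(n_firms, n_periods))

# Stylized proxy A: a firm-specific bias, constant over time
# (tracks each firm well through time, but ranks firms with error)
proxy_a = true_er + 0.03 * rng.normal(size=(n_firms, 1))
# Stylized proxy B: a period-specific bias, constant across firms
# (ranks firms well each period, but tracks time variation with error)
proxy_b = true_er + 0.03 * rng.normal(size=(1, n_periods))

def error_variances(proxy):
    """Average cross-sectional and time-series measurement-error variances."""
    err = proxy - true_er
    cs = err.var(axis=0).mean()  # across firms, averaged over periods
    ts = err.var(axis=1).mean()  # over time, averaged across firms
    return cs, ts

cs_a, ts_a = error_variances(proxy_a)
cs_b, ts_b = error_variances(proxy_b)
```

By construction, proxy A's error is constant within each firm, so its time-series error variance is near zero while its cross-sectional error variance is not; proxy B is the mirror image. Neither dominates on both dimensions, which is the tradeoff that makes the optimal choice of proxy setting-dependent.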