Request for Volatility and Higher Moments Replication Proposals

Following the success of its first such issue, the Critical Finance Review is planning to regularly publish issues dedicated to replicating the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues either to prove or to disprove the papers. The replications are meant to be as objective as possible. The CFR wants to reduce the incentives of authors to slant the results either favorably or unfavorably. The contract between an invited replicating team and the CFR is that the journal expects to publish the replicating paper even (or especially) if all the findings of the original paper hold perfectly. All papers selected for the replication issues are still refereed to ensure that they fulfill their objectives and are carefully and correctly executed. The papers are not expected to be merely mechanical; they should also contain novel and creative aspects.

Papers for these replication issues are not selected because the editors have a prior on whether they are replicable. They are selected because they are influential flagship papers in the area. Paper selection should be viewed as professional recognition.

Mandatory Paper Outline

The format of replication studies should be in roughly equal parts (and in this order):

Pure replication from the original underlying data. Exact replication is a sine qua non.

Replication should be an attempt to use the same sample and methods employed in the original paper to obtain the same figures as those in the key tables reported in the original paper. [This part of the paper exists not only to confirm that there were no coding errors in the original paper, but to keep the starting point as close to the original paper as possible. The authors of the replicating paper are also required to publish their replication source code and some data (ideally full data sets).]

We do not expect replication problems. However, the replication may not succeed through little or no fault of the original authors. For example:

The original data (even CRSP and Compustat) may have been corrected or updated over the years.

The original paper may not have spelled out all details. (Doing so is nearly impossible in an academic paper.) The replicating paper should also clarify methods and procedures that are not immediately clear from the original papers.

If this is the case, we, as a profession, still want to learn this. The point is not to blame original authors—the point is to learn.

Important: The idea is not to replicate all the values in all the tables—just the key ones. Replicating authors who want to replicate more or all results (i.e., more than what is suitable for print) can do so, but such results will go into an online appendix. Again, the idea of part 1 is to confirm the exact numbers and to distribute the source code that makes this possible.

Out-of-sample tests: performance since publication. Because we usually publish replications of papers that are 10-20 years old, we can now learn (a) whether the effects have become weaker; (b) whether the results continued to hold out of sample; and (c) if they did not, whether they were so opposite that the full-sample inference has by now changed (e.g., from statistically significant to insignificant).
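The out-of-sample check described above can be sketched in a few lines. This is purely illustrative: the series, sample lengths, and moments below are simulated, and `mean_tstat` is a hypothetical helper, not code from any of the papers under consideration.

```python
import numpy as np

def mean_tstat(x):
    """t-statistic for the null that the series mean is zero (i.i.d. assumption)."""
    x = np.asarray(x, dtype=float)
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

rng = np.random.default_rng(1)
# Hypothetical monthly long-short portfolio returns, not real data:
in_sample = rng.normal(0.005, 0.03, 240)   # 20 years before publication
post_pub = rng.normal(0.000, 0.03, 180)    # 15 years after publication

# Compare significance in-sample, out of sample, and over the full sample.
for label, series in [("in-sample", in_sample), ("post-publication", post_pub),
                      ("full sample", np.concatenate([in_sample, post_pub]))]:
    print(f"{label}: t = {mean_tstat(series):.2f}")
```

A replication would of course use the original paper's actual test statistic and standard-error treatment rather than this simple i.i.d. t-test.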

Plain specification robustness tests. This could add new tests: winsorizing, alternative weighting schemes, alternative timing, common additional controls, different standard error assumptions, and/or a placebo. Ideally, the replicating paper should also show or at least discuss when such tests support (or challenge) the original paper's conclusions.
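As one concrete illustration of such a plain robustness test, here is a minimal winsorizing sketch. The 1st/99th-percentile cutoffs and the simulated fat-tailed series are assumptions for illustration only, not choices prescribed by the CFR.

```python
import numpy as np

def winsorize(x, lower=0.01, upper=0.99):
    """Clip values at the given sample quantiles (a common robustness check)."""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=10_000) * 0.02  # fat-tailed, return-like series
w = winsorize(returns)
print(returns.min() < w.min() and w.max() < returns.max())  # True: tails are clipped
```

Re-running the original tables on the winsorized series instead of the raw one then shows whether the headline result is driven by a handful of extreme observations.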

Novel and creative aspects, such as additional higher-level tests and discussions. This could include interpretations of issues such as (corrections of) endogeneity, even if this could arguably be considered an omitted-variables issue. It could concern time-series rather than cross-sectional (or vice versa) associations (e.g., fixed effects), which can differ in meaning and interpretation but be interesting from the perspective of the hypothesis. It could also contain interpretations of the findings through a different lens than that proposed in the original paper. This aspect of the paper could be a good venue for the replicating authors to publish original thoughts and discussion points that would otherwise not be easy to communicate to the broader profession.

Journal-Author Contract

Again, the CFR emphasizes that it is committed to publishing replication papers that conclude that the original paper was perfect.

If a team cannot replicate the original paper, then the team is asked to coordinate with the editor for communication with the original author(s). We want to minimize the imposition on the original authors. In any case, we hope that replication will not be painful. After all, these are classic original papers, which have set out the recipe for research in the area!

The CFR does reserve the right to ask teams to remove outright incorrect tests and execution, but it will give replicating authors extensive latitude in deciding on good tests. For example, the editor and referee may feel that value-weighting removes too many interesting observations, but if the replicating authors insist that it is important and interesting, it will likely survive into the published paper.

Regardless of outcome, the original authors will be invited to provide non-anonymous feedback on the first submission of the papers and to publish their own perspectives on the replicating papers. They get the last word. Disagreements are welcome—insinuations are not.

As to the key incentive that makes participation for replicating teams worth their while, please recall that the replication paper—unlike others—will be published. It is not the usual wild goose chase—the desperate search for astonishing findings. Moreover, we know from the psychology replication issue that previous replication papers were very influential. They are beginning to transform their discipline. We hope we can do this for finance and accounting, too. Be part of it!

ISSUE on Volatility and Higher Moments

The second issue in this replication series will be dedicated to volatility and higher moments. Juhani Linnainmaa has graciously agreed to serve as the editor for this issue.

The empirical papers for this issue were selected based largely (but not exclusively) on objective citation counts. They are:

The CFR is hereby soliciting proposals for replication for each of these papers.

Each of these papers has generated a large body of follow-up and closely related work. Your paper does not have to, and should not, exist in isolation of this follow-up work. The following studies, for example, supplement the papers listed in each of the categories above:

Ang, Andrew, Robert J. Hodrick, Yuhang Xing, and Xiaoyan Zhang, 2009, High idiosyncratic volatility and low returns: International and further U.S. evidence, Journal of Financial Economics 91(1), 1-23.

Per point 4 of the Mandatory Paper Outline, you may want to try to place the paper into broader context. Although replication and robustness tests are important and valuable, it is also interesting to look around the edges of the original paper. In the end, the goal is to educate the profession about what the data seem to be telling us. Given both the original paper, and everything we have learned since then, how should we think about the original paper and its results and conclusions?

Timeline

The intended timeline is:

Submission of interest: in 3-6 months (application selection)
Confirmed replication: in 6-12 months (first part: source code, data sets)
First submission: in 12-18 months
Review process: 3-4 months
Final submission: in 24 months
Original author responses: in 27 months
Issue creation: in 36 months
Publication: in 42 months

Team Objectivity

Members of the replication teams should be, and strive to remain, objective. If a third party could perceive a personal conflict of interest, either positive or negative, please indicate this in the proposal to the editor. The CFR's preferred goal is to select teams that are not only objective but are also viewed as objective. In case of doubt, please ask.

It is not a conflict of interest or lack of objectivity if authors have an opinion or hunch that the paper to-be-replicated is likely to hold up or not. In fact, some submitters may have already worked on replication earlier.

It is a conflict of interest if the replication and original authors have had a history of repeated disagreements.

It is a lack of objectivity if the replicating authors are intent on proving an outcome.

Contact

If you are interested in replicating one of the papers listed above, please contact the assigned issue editor, Juhani Linnainmaa (jlinnain@marshall.usc.edu) and cc the CFR Editor (Ivo Welch) with a description of the team members, the paper to be replicated, and any potential conflicts of interest. Teams have typically included one senior researcher, one junior researcher, and one advanced Ph.D. student. The lead author on a team must have an existing publication record.

The Generic Professional Appeal

Replication (and not just replicability) is vitally important for the profession. The CFR is not trying to debunk papers. It is trying to bring objectivity into and remove politics from the knowledge-building process.

The effort involved in replicating and examining a paper is much smaller than that for an ordinary paper. There is a clear road map of what is required and a direct route to publication. It should require less effort than even an invited paper. The work can be done together with coauthors and/or Ph.D. students.

This will be the first time our profession has ever tried to execute objective, systematic replication. We need to lend some prestige to this first-time undertaking.

As for me, I would like everyone to consider helping on this task at least once in their lifetime, and to view it as a necessary service to our academic profession, similar to refereeing, and regardless of whether it makes friends or enemies. I am worried about what our academic enterprise means if even famous people prefer to free-ride and not help build an objective replicated knowledge base. If the famous don't care enough to do it, how can we ask others?

Note that it is not necessary that the replicating team be built around an expert in the subject, here volatility and higher moments. After all, this is a replicating outside perspective. It requires good financial empiricists, not subject experts.

This is not my problem. This is our problem. If we cannot get this done as a collection of hundreds of academic researchers, what meaning does our professional endeavor really have?

What does our profession need most? More published papers? More referee reports? More of "everyone knows this is false" insinuations (which, as editor of the CFR, I have heard too many times without empirical support)? Or do we need unbiased replication and confirmation/rejection of our most important base findings? Where do you think you can contribute the most to our science?