Is SK seriously trying to cover his ass by claiming that the Accounting Review copy editor messed up his paper? I read the differences between the two versions, and there is no way that any copy editor would make those kinds of changes. SK and/or AB had to have made the changes themselves.

The response is that there was a printing mistake at TAR. That is very plausible and happens sometimes.

Stephen, is that you? This was no printing mistake.

Old

First, because the Russell 1000 and 2000 indexes are explicitly determined by market capitalization rank as of the last trading day in May each year, the index memberships can be constructed using CRSP data. Second, Russell provides data for academic use. Because Crane et al. [2014] argue that predicting index inclusion for a fixed sample size can induce a bias from index misclassifications, we use the latter approach in our main results.

Published

First, because the Russell 1000 and 2000 indexes are explicitly determined by market capitalization rank as of the last trading day in May each year, the index memberships can be constructed using CRSP data; we use these rankings in our main specifications. Second, Russell provides index membership data for academic use, and we use these in robustness checks of our main findings.

Copy editors don't make these kinds of changes. You made them. You lied.

Sounds to me like it's as simple as one of the authors thinking they were eliminating an error in the description but instead introducing one? Wouldn't Bayesian reasoning lead one to believe error rather than malfeasance, since errors (in academia generally) are so abundant?

Not my area, but it seems to me the more substantive, interesting argument of Young's comment is that the Russell-provided June rankings aren't appropriate for identifying who was at the discontinuity in May?


That's why AB and SK's response is so weak. They don't ever say that they were the ones who made the changes. Instead, it sounds like they're blaming the Accounting Review's editing staff for making these changes. They also don't address that Young showed that their results are totally garbage and spurious.

Not sure if people know this, but lying about how you obtained your results is considered data falsification.

Such an impressive LRM. I get flashbacks to the AER scandal. Part of the motivation for these LRMs to go after the HRMs is that they were first dissed by the HRMs. Lesson: treat LRMs respectfully because they can damage your career a lot.

I forgot that the replication paper Stephen Karolyi published in RFS also uses this Russell methodology. I was curious whether that paper also has "printing mistakes" relative to older working-paper versions.

Because the Russell 1000 and 2000 indexes are explicitly determined by market capitalization rank as of the last trading day in May each year, the index memberships can be constructed using CRSP; we use these observed market capitalization rankings in our main results. We also obtain data from Russell on their end of June index ranks, which we use in robustness tests in Table 10.

In the results presented here we follow Crane et al. [2014] and impose a k-order polynomial control function on the market capitalization ranks provided and used by Russell, but, in untabulated results we confirm that our main findings hold if we alternatively impose a k-order polynomial control function on the market capitalization ranks that we calculate using May 31 closing prices from CRSP as in Appel et al. [2015].

Does RFS need to issue a correction due to a "printing mistake" too, or can we stop the BS and admit that Stephen Karolyi has lied in two papers?


I looked at this carefully at some point when I was refereeing one of the finance papers that eventually got published. I won't say which one or which journal.

-WM is good, and may also have been the first Russell RDD paper. He is the only one who got the true non-float-adjusted rankings out of Russell. However, he only got data for 2 or 3 years or so, so he had a small N, which might explain his weird findings.

-Chang et al. RFS and Schmidt and Fahlenbrach JFE are both pretty good.

-Nobody mentions Boone and White JFE. That's because it's a punchline.

-Crane et al. RFS is becoming a punchline, and rightly so.

-Appel et al. has been the most influential; lots of papers use that approach. But as the second part of Young's comment clearly shows, it is wrong, with a big discontinuity across the threshold that drives all the estimates. This could be why it's so influential...

Final thought: for the paper I reviewed, I recommended rejection mostly on the basis that the methodology was critically flawed. Less than a year later it was forthcoming at a different A finance journal, with no change in the methodology.

Sincerely, not Alex Young or Wei Wei (I have never met them). (Also, I have never written a Russell paper myself.)