The premise of the paper is that some researchers screw up when concluding they've identified risk factors for injury. In the paper, the authors describe two of the most common screwups and how to address them.

Before I get to the screwups, a quick explanation is in order regarding the correct way to identify risk factors for injury, which is through a prospective study. It can be summed up in three steps:

Step 1. Examine a group of uninjured athletes at baseline (i.e. in the pre-season).

Step 2. Track their injuries prospectively (i.e. over a period of time, usually a competitive season).

Step 3. At the end of the study, break the sample into two groups -- the athletes who got injured and the athletes who didn’t -- and look for differences between the groups in their baseline measures.
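The comparison in Step 3 can be sketched in a few lines of Python. Everything here is hypothetical: the athletes, the injury outcomes, and the measure name ("balance_score") are invented purely to illustrate the shape of the analysis, and a real study would use an appropriate statistical test rather than a raw difference in means.

```python
# Sketch of Step 3 of a prospective design: split the sample by injury
# status at season's end and compare BASELINE measures between groups.
# All data are hypothetical and for illustration only.
from statistics import mean

# Pre-season "balance_score" for each athlete, collected before any injury.
baseline = {"A": 82, "B": 74, "C": 90, "D": 68, "E": 77, "F": 85}

# Injuries tracked prospectively over the season.
injured = {"B", "D"}

injured_scores = [v for k, v in baseline.items() if k in injured]
uninjured_scores = [v for k, v in baseline.items() if k not in injured]

# A raw difference in means, just to show the comparison; a real study
# would apply a suitable statistical test to the two groups.
diff = mean(uninjured_scores) - mean(injured_scores)
print(f"Injured mean:   {mean(injured_scores):.1f}")
print(f"Uninjured mean: {mean(uninjured_scores):.1f}")
print(f"Difference:     {diff:.1f}")
```

The key point the code makes visible: the baseline measures were recorded before any injury occurred, so a group difference can plausibly point to a risk factor rather than a consequence of injury.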

And now for the common screwups, as per Clifton et al.:

Common Screwup #1) Retrospective Study Design

Prospective injury studies are difficult to do because they require careful and consistent follow-up regarding injury. A much easier approach is a "retrospective" one. With this type of study, you assess a group of athletes at a single point in time, split them into those with and without a history of injury, and compare the two groups on your measures. With this design, there's no need to follow the athletes over time.

The common mistake lies in interpreting the retrospective study. Researchers will often state that measures on which the injured athletes performed worse are risk factors for injury. The trouble is, there's no way to know whether those factors were present before the injury OR are actually a result of the injury.

This isn’t to say retrospective research is worthless. It’s just that follow-up studies with prospective designs are needed to determine whether the differences seen in retrospect are true risk factors for injury prospectively.