Background: Several reports indicate that the quality of biomedical scientific research is far from what is desired. Our goal is to evaluate the effect on journal impact of interventions whose original aim was to improve paper quality. Two separate, previously randomized trials (PLOS and ET) on manuscripts submitted to Medicina Clinica showed a positive effect on paper quality after adding statistical reviewers and after recommending reporting guidelines (RG) during the editorial process. The objective of this work is to study their effects on additional outcomes, namely their impact on further research.

Methods: We collected, in a masked manner, the number of citations (NC) received by each paper from Web of Science, as well as the sum of the means of the citing journals' impact factors (SI). In the ET study, we ran simulations to select the best location and scale tests. The simulations suggested employing Ordinal Logistic Regression (OLR) for the location and scale tests, and the gatekeeping method for controlling the family-wise error rate (FWER). The simulations showed that the type I error (alpha) was preserved, but power did not reach the desired 80%. Although the low power argued against formal testing, the PLOS dataset was used, as an academic exercise, to confirm the suggested hypotheses.

Results: With the selected strategy, we could not demonstrate any effect of the review interventions on impact. The point estimate of the shift towards a higher NC quartile was OR = 2.17 (95% CI 0.61 to 7.65, P = 0.229).

Discussion: We attempted to test new hypotheses on previously underpowered datasets, and our results could not confirm them. Nevertheless, these data suggest that adding RG during the review process has a positive effect on subsequent scientific impact.
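The alpha and power checks described in the Methods can be illustrated with a Monte Carlo simulation. The sketch below is an assumption-laden stand-in, not the authors' code: it generates NC quartiles (ordinal 1 to 4) for two arms under a proportional-odds shift of a given odds ratio, and uses the Wilcoxon rank-sum (Mann-Whitney) test as a simple substitute location test in place of the OLR-based test the study selected. All sample sizes and simulation counts are illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

def simulate_power(or_shift, n_per_arm=50, n_sim=2000, alpha=0.05):
    """Monte Carlo estimate of the rejection rate for a shift across
    NC quartiles between a control and an intervention arm.

    or_shift : proportional-odds ratio towards higher quartiles
               (1.0 gives the type I error rate; >1 gives power).
    """
    # Quartiles are uniform by construction in the control arm.
    base = np.array([0.25, 0.25, 0.25, 0.25])
    # Apply a proportional-odds shift: lower each cumulative logit
    # by log(OR), so higher quartiles become more likely.
    cum = np.cumsum(base)[:-1]
    logit = np.log(cum / (1 - cum)) - np.log(or_shift)
    cum_shifted = 1.0 / (1.0 + np.exp(-logit))
    shifted = np.diff(np.concatenate(([0.0], cum_shifted, [1.0])))

    hits = 0
    for _ in range(n_sim):
        ctrl = rng.choice(4, size=n_per_arm, p=base)
        trt = rng.choice(4, size=n_per_arm, p=shifted)
        # One-sided rank test: does the intervention arm tend higher?
        if mannwhitneyu(trt, ctrl, alternative="greater").pvalue < alpha:
            hits += 1
    return hits / n_sim

alpha_hat = simulate_power(1.0)   # should stay near the nominal 0.05
power_hat = simulate_power(2.17)  # power at the observed point estimate
```

Under a design like this, one would check that `alpha_hat` stays close to the nominal level while `power_hat` falls short of 80% at realistic sample sizes, which mirrors the paper's conclusion that formal testing was underpowered.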