Estimating the population effect size accurately and precisely is vital both for achieving a desired level of statistical power and for drawing correct conclusions from empirical results. Although several common practices for estimating effect sizes have been documented (e.g., relying on one’s experience and intuition, or conducting pilot studies), their relative advantages and disadvantages have been insufficiently investigated. To establish a practical guideline for researchers in this respect, this project compared the accuracy and precision of effect-size estimation, the resulting power, and the economic implications across pilot and non-pilot conditions. Furthermore, to model the potential advantage of pilot studies in detecting and correcting flaws before main studies are run, Experiment 2 introduced varying amounts of random error variance and varying degrees of success at removing it, aspects often neglected in simulation studies.
The main findings are as follows. First, pilot studies with up to 30 subjects were utterly ineffective in achieving the desired power of 0.80 at a small population effect size, even under the best-case scenario; at this effect size, intuitive estimation without a pilot study appears to be the preferred method of achieving the desired power. Second, pilot studies performed better at medium and large population effect sizes, achieving power comparable to, or even greater than, that in the non-pilot condition. Their relative advantage was particularly evident when moderate to large error variance was present and a portion of it had been removed through the pilot study. These broad findings are discussed in the context of flexible design: a study design can be modified flexibly in accordance with the researcher’s particular goals.
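The power arithmetic behind the first finding can be sketched with a small simulation. This is a hypothetical illustration, not the simulation code used in this project: the normal-approximation formulas for power and sample size, the pilot size of 15 subjects per group, and all function names are assumptions introduced here for exposition.

```python
import math
import random

# Hard-coded standard-normal quantiles (alpha = .05 two-sided; power = .80),
# so the sketch needs only the Python standard library.
Z_CRIT = 1.959963984540054    # Phi^-1(0.975)
Z_POWER = 0.841621233572914   # Phi^-1(0.80)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def required_n_per_group(d):
    """Normal-approximation sample size per group for a two-sample test."""
    return math.ceil(2.0 * ((Z_CRIT + Z_POWER) / d) ** 2)

def achieved_power(d, n_per_group):
    """Approximate power of a two-sample test at true effect size d."""
    return norm_cdf(d * math.sqrt(n_per_group / 2.0) - Z_CRIT)

def pilot_estimate_d(true_d, n_pilot, rng):
    """Simulate one pilot study and return the sample Cohen's d."""
    g1 = [rng.gauss(true_d, 1.0) for _ in range(n_pilot)]
    g2 = [rng.gauss(0.0, 1.0) for _ in range(n_pilot)]
    m1, m2 = sum(g1) / n_pilot, sum(g2) / n_pilot
    pooled_var = (sum((x - m1) ** 2 for x in g1) +
                  sum((x - m2) ** 2 for x in g2)) / (2 * n_pilot - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

rng = random.Random(42)
true_d = 0.2  # a small population effect size
# A 30-subject pilot (15 per group) yields very noisy estimates of d ...
estimates = [pilot_estimate_d(true_d, 15, rng) for _ in range(1000)]
# ... so main studies sized from those estimates scatter widely in the
# power they actually attain at the true effect size.
powers = [achieved_power(true_d, required_n_per_group(abs(dhat)))
          for dhat in estimates if abs(dhat) > 0.01]
```

Under these assumptions, the standard error of a pilot-based estimate of d = 0.2 with 15 subjects per group is roughly 0.37, larger than the effect itself, so the sized main studies range from badly underpowered to heavily overpowered.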