Abstract

Since the 1970s, texts on research methods in animal behavior have advocated that researchers minimize potential observer bias in their studies. One way to minimize possible bias is to record or score behavioral data blind to treatment, group, or individual. Another is for researchers to analyze subsets or entire sets of data independently of one another and to obtain high inter-observer reliability in behavioral coding. We reviewed several hundred articles published in 1970, 1980, 1990, 2000, and 2010 in five leading animal behavior journals and found that these two methods for minimizing or eliminating bias were rarely reported (<10% of articles reviewed). In contrast, a journal focusing on human infant behavior research was far more rigorous in incorporating methods to avoid bias (>80% of articles reviewed). The lack of reported attempts to minimize bias in animal behavior studies suggests that, at best, many researchers view blind analysis of data or inter-rater reliability as unimportant components of research or, if carried out, unnecessary to report in a manuscript. At worst, it suggests that some published behavioral research may be unreliable. We recognize that constraints imposed by fieldwork and data collection can make blind data comparisons or inter-rater reliability assessments difficult or infeasible. However, given that research ethicists emphasize the fundamental importance of trust and transparency in science, we urge authors, reviewers, and editors to ensure that at least one of these two methods for reducing observer bias is used and reported.