Maximizing the rigor and reproducibility of citizen science in ecological research

PLOS ONE is organizing a Twitter chat on citizen science methodologies on 2nd April; see details below.

Citizen science (CS) encompasses a broad range of research methodologies that involve public participation in data collection, transcription or analysis. Applications of CS are found in many disciplines, but ecology has consistently been at the forefront. While some CS-based ecological monitoring schemes (such as the UK Butterfly Monitoring Scheme, established in 1976) have been running for decades, the popularity of CS has grown rapidly in more recent years. A wide range of projects based on CS methodologies are now being undertaken around the world, at local, national and international scales. The value of volunteer participation in activities ranging from transect-based species monitoring (Wepprich et al., 2019) and collection of biological specimens for lab-based analysis (Larson et al., 2020; Rasmussen et al., 2020) to crowdsourcing of creative thinking for study design (Can et al., 2017) has been repeatedly demonstrated. Studies have also highlighted the particular utility of CS methodologies in supporting long-term ecological monitoring in resource-limited contexts, including in economically developing countries (Gouraguine et al., 2019). Meanwhile, examples of the real-world impact of CS research are abundant, both in specific ecological interventions and in the wider political discourse. For instance, the influential UK State of Nature 2019 report, likely to be a key source of evidence for future environmental legislation, cites the outcomes of a wealth of CS projects.

With the expansion of CS research, there is lively debate about how to maximize rigor and reproducibility across different types of CS methodologies. One crucial aspect of a successful CS study is an appropriately designed protocol, which features a realistic degree of complexity and accounts for the specific challenges of handling CS-derived data. An example is provided by a recent comprehensive report on the design, launch and assessment of the UK National Plant Monitoring Scheme (Pescott et al., 2019). Pre-testing protocols before project launch can provide confidence in the robustness of the study design. When designing a CS study, it is also important to understand volunteer motivation and to ensure that it is appropriately matched with the nature of the task to be performed (Lyons & Zhang, 2019). Some CS studies recruit from narrower demographic groups to meet the required level of motivation and understanding, such as amateur naturalists (Hallmann et al., 2017) or students following a course in a related topic (Chiovitti et al., 2019). Depending on the type of study, researchers may also plan to support CS volunteers with training or technological aids, increasingly in the form of mobile apps (Ožana et al., 2019; Appenfeller et al., 2020).

A certain amount of error, either random or systematic, is likely to be introduced when data are collected by CS volunteers, and study designs must account for this. The level of error can be reduced by allowing volunteers to provide clarifying metadata or to register uncertainty (Torre et al., 2019), or by using incentives to reduce sampling bias (Callaghan et al., 2019), but researchers should also ensure that they have the means to assess the accuracy of contributed data (Falk et al., 2019; Gibson et al., 2019). Much ecological research is based on large public databases of volunteer-contributed records of species distributions, phenological events and other observational data (e.g. Siljamo et al., 2020). There is an active discussion in the ecological research community about how to maximize the reliability and utility of such data (Ball-Damerow et al., 2019).
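One common way to assess the accuracy of contributed data is to have experts re-identify a verification subset of volunteer records and score the agreement. The sketch below illustrates the idea with entirely hypothetical species identifications; real projects would draw the verification sample systematically and would typically go beyond raw agreement (e.g. per-species or per-volunteer breakdowns).

```python
# Minimal sketch: scoring a verification subset of volunteer records
# against expert determinations. All data here are hypothetical.

def agreement_rate(volunteer_labels, expert_labels):
    """Fraction of records where the volunteer ID matches the expert ID."""
    matches = sum(v == e for v, e in zip(volunteer_labels, expert_labels))
    return matches / len(volunteer_labels)

# Hypothetical species IDs for the same ten records.
volunteer = ["P. rapae", "P. napi", "P. rapae", "A. urticae", "P. napi",
             "P. rapae", "A. urticae", "P. napi", "P. rapae", "P. napi"]
expert    = ["P. rapae", "P. rapae", "P. rapae", "A. urticae", "P. napi",
             "P. rapae", "A. urticae", "P. napi", "P. brassicae", "P. napi"]

print(f"Agreement with expert labels: {agreement_rate(volunteer, expert):.0%}")
# → Agreement with expert labels: 80%
```

A simple agreement rate like this can feed directly into downstream analyses, for example by weighting or filtering records from the wider dataset according to the measured error level.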

The particular considerations involved in the design, execution and evaluation of CS studies have led to calls for dedicated standards and guidelines for CS research. Of course, any such tools must strike a balance between promoting appropriate levels of standardization and allowing the flexibility required to apply CS methodologies across diverse settings and research questions. While some progress has been made towards this goal, maintaining an open and constructive dialogue among CS practitioners and other stakeholders remains critical to ensure that researchers, volunteers and society can realize the full potential of CS.

To foster discussion of these important issues, PLOS ONE (@plosone) will be moderating a Twitter chat on citizen science methodologies on Thursday 2nd April, starting at 4pm BST (8am PDT, 11am EDT, 5pm CEST). This is a chance for the CS community to share perspectives, experiences and suggestions for best practice. We’ll aim to cover the following questions (and more!):

How far can methods in CS projects be standardized?

What steps should be taken to maximize CS data quality?

Is there a need for clearer guidelines for the design and execution of CS studies?

How should credit for data collection be apportioned?

You can take part by using the hashtag #citscichat. We hope to see you there!