I’m not trying to sound arrogant or scientifically elitist here – I’m merely stating my opinion that most citizen-science endeavours fail to provide truly novel, useful and rigorous data for scientific hypothesis testing. I must admit that I still believe ‘most’ citizen-science data fit that description (although there are exceptions – see here for an example), but Tomas’ success showed me just how good such data can be.

So what’s the problem with citizen science? Nothing, in principle; in fact, it’s a great idea. Convince keen amateur naturalists over a wide area to observe (as objectively as possible) some ecological phenomenon or function, record the data, and submit them to a scientist to test some brilliant hypothesis. If it works, chances are the data are of much broader coverage and more intensively sampled than could ever be achieved (or afforded) by a single scientific team alone. So why don’t we do this all the time?

If you’re a scientist, I don’t need to tell you how difficult it is to design a good experimental sampling regime, how much more difficult it is to ensure objectivity and precision when sampling, and the fastidiousness with which the data must be recorded and organised digitally for final analysis. And that’s just for trained scientists! Imagine an army of well-intentioned but largely inexperienced samplers, and you can quickly visualise how errors might accumulate in a dataset until it eventually becomes too unreliable for any real scientific application.

So for these reasons, I’ve been largely reluctant to engage with large-scale citizen-science endeavours. However, I’m proud to say that I have now published my first paper based entirely on citizen-science data! Call me a hypocrite (or a slow learner).