There is a certain enthusiasm in any community when
encountering a new tool—a shiny new instrument that
appears to solve hard problems—that leads to a barrage of
research “results.” However, by rewarding quick
demonstrations of the tool’s use, we fail to attain a deeper
understanding of the problems to which it is applied, to
pursue real solutions rather than stopgaps, or to develop a
real understanding of the tool’s limits. In this paper I argue
that we are currently experiencing these failures in
crowdsourcing (both crowdsourced science and the science
of crowdsourcing), but that some interesting research
trajectories are still available to us. They
just might require significant work and produce the most
dreaded of research outcomes: negative results.

To appear at the CHI2011 Workshop on Crowdsourcing and Human Computation.