Crowdsourcing and CCTV: the Effect of Interface, Financial Bonus and Video Type

Abstract

Crowdsourcing has been widely leveraged for the tagging of video material, and recently this has included the monitoring of surveillance video footage. However, the relative advantages and disadvantages of crowdsourcing watchers of surveillance video are not well articulated, nor has there been any significant work on the efficacy of different tools for retrospective surveillance. In this paper we explore factors that might affect crowd performance on video surveillance monitoring tasks. We first established a baseline for crowd performance in a 'live' surveillance study before comparing two different interfaces that allowed retrospective discovery of the same events (Scrub Player and Panopticon). Using MTurk, we asked 474 people to use these two interfaces to monitor two types of CCTV footage, containing different levels of extraneous activity, for 'events'. We also manipulated bonus payments. We found that the crowd was most accurate when watching the video that contained the least extraneous activity, and that financial incentive had no effect on the reporting of true events, although there was some indication that higher bonus payments generated more false alarms. We also found an effect of interface: those using the Panopticon interface processed more video footage and generated more alerts, but were less accurate overall. We discuss the implications of different interface designs, video types, and incentive schemes when crowdsourcing watchers of surveillance video.