Unblinking eyes: the ethics of automating surveillance

Abstract

In this paper I critically examine the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual (or non-automated), fully automated, and partially automated. For each mode I examine concerns posed by processing capacity, by prejudice towards and profiling of surveilled subjects, and by false positives and false negatives. While it might seem that fully automated surveillance improves on the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to either extreme, I argue for partial automation, in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated, I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance that automation introduces between the surveilled subject and the CCTV operator can have both positive and negative effects. I conclude that, in at least the majority of cases, more automation is preferable to less within a partially automated system, provided this does not impinge on efficacy.