Robots on Opportunism and Hierarchy

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The first premise of Battlestar Galactica – that Cylons rebel against their creators – is the same concern that has driven science fiction from the start, and perhaps its high point is Asimov and the three laws of robotics. Think of Roy in Blade Runner, more human than Deckard by some distance. Think of the character Bishop in Aliens, played by Lance Henriksen; or of Call in Alien Resurrection, played by Winona Ryder. Think of the Borg (go team). Why do such beings get such a hard time? Is it because we worry that if artificial intelligence can exceed human thought, we are doomed as obsolete and redundant?

I suspect something more sinister is really behind this fear of manchines. Isn't it a worry that there might be something about knowledge (intelligence, techne, wisdom, meaning) that exceeds the capacities of an individual mind, and thus suggests the collective rules? To worry about this is valid, but to fear it is perhaps already an ideological choice that favours an individualist and simultaneously hierarchical, opportunist thinking: one that promotes the good of one over the well-being of all. Marx offers a notion of the general intellect. This might be taken as a simile of A.I., if we allow that science fiction is a fantasy projection of real-world concerns into space. If so, isn't it the case that fear of robotics is the distorted manifestation of fear of a planned economy that would harness the general intellect for the good of all? The struggle over new media today is also about the deployment of 'artificial' – general – intelligence in the service of some (corporate power) or all (planned economy). So far the robots are caught within Asimov's constraints.

What Galactica does is add a gods-bothering dimension to this A.I. – which, for mine, is the equivalent of a touching faith in open source. The parameters of individualism and hierarchy are not thereby disrupted.

Maybe we are obsolete. The survivors on New Caprica, struggling to breed and scratching in the dirt, are dehumanized; life becomes barely worth living, and suicide attacks become plausible once the Cylons occupy. Only the organised rebels have agency, and yet they too send their own to death.

Can we argue that where Blade Runner and the later Alien films displace race issues into a blaming of the corporation (the Tyrell Corporation, the Weyland-Yutani Company) for greed, opportunism and evil, Galactica instead illustrates a later, digital mode of the same argument, with a corresponding post-apocalyptic mode of production and power? The reimagined, digital, new-model Cylons have potentials that many would call totalitarian, but with a general intellect, a planned total economy, decision-making by think-tank cabals, and shiny, slick friends… spuriously called toasters by the obsolete humanoids.

The question for the humans faced with extinction then has to do with Deckard's old-fashioned bad-cop complicity/opportunity syndrome – do you kill all replicants without remorse, or look for your chance to escape on your own (with Rachael)?