Tag: autonomous

This is the eighth post in a new series titled, “Recovered Writing.” I am going through my personal archive of undergraduate and graduate school writing, recovering those essays I consider interesting but that I am unlikely to revise for traditional publication, and posting those essays as-is on my blog in the hope of engaging others with these ideas that played a formative role in my development as a scholar and teacher. Because this and the other essays in the Recovered Writing series are posted as-is and edited only for web-readability, I hope that readers will accept them for what they are–undergraduate and graduate school essays conveying varying degrees of argumentation, rigor, idea development, and research. Furthermore, I dislike the idea of these essays languishing in a digital tomb, so I offer them here to excite your curiosity and encourage your conversation.

I wrote the undergraduate thesis included below under the helpful guidance of Professor Lisa Yaszek, with additional help from Professors Kenneth Knoespel and Doug Davis. Professor Knoespel directed an individual investigation with me over the preceding summer, in which I explored the theoretical underpinnings of my project. Professor Davis gave me advice on Cold War era books and films that might help inform my work.

Professor Yaszek’s advice on my thesis’ organization, writing style, and argumentation helped me revise the essay into its current form, which yielded me the School of Literature, Media, and Communication’s prestigious James Dean Young Writing Award.

Also, an earlier paper that I presented at the Monstrous Bodies Symposium at Georgia Tech on 31 March 2005 was incorporated into my thesis. Then, the thesis was further revised into a conference paper for my first international academic conference: the SFRA meeting in White Plains, NY in 2006. Ideas and words transform into something new, stronger, and more meaningful with each iteration–moving closer to the asymptote of understanding.

Jason W. Ellis

Professor Lisa Yaszek

Senior Thesis – Fall 2005

12 December 2005

Networks of Science, Technology, and Science Fiction During the American Cold War

Sometimes, just standing here, I keep wondering–Are we working on them, or are they working on us? Give them dignity, doctor, then we can start talking about who can do what and what they mean (General Leslie R. Groves as played by Paul Newman in the film, Fat Man and Little Boy).

In the quote above from the film Fat Man and Little Boy, General Leslie R. Groves (Paul Newman) takes Dr. J. Robert Oppenheimer (Dwight Schultz) aside to show him the bomb casings for the two atomic bombs to be dropped on Japan, Fat Man and Little Boy. Groves asks, “Are we working on them, or are they working on us?” His character respects the awesome power of the bombs that he has orchestrated into existence. He represents the uncertainty surrounding a future with ‘the bomb,’ but he is also quite aware of the networks required to bring a weapon of this magnitude into existence.[1] Additionally, he is depicted as someone reverent toward the implications of the bomb and the future tied to its existence. Groves’ speech elicits questions regarding complex networks and the unknown implications of new technologies.[2]

General Groves’ concern reflects a more general American anxiety regarding the loss of human control over our increasingly complex technologies. As technological advancements take place, the systems we design to create and produce new technologies become more intricate. The military-industrial complex, as well as other sectors of technological development during the Cold War (1945-1990), became so elaborate that these systems appear to be beyond the control of individuals. In effect, the systems appear autonomous and therefore capable of evading humanity’s control by choosing their own destinies.

Authors interested in representing the social and political implications of autonomous technology networks often do so in a specific literary form: Science Fiction (SF). These authors treat SF as a key space where discourses surrounding science and technology can be worked out and discussed in ways that are not possible in other modes of popular culture. The reason for this is that SF lies at the intersection of science, technology, and culture. SF is the space where authors bring these elements together. Additionally, those elements are all integral to the story in ways that they would not be in other forms of fiction.

In SF, autonomous technology is a metaphor for the networks within and surrounding technology that link it to humanity, culture, and science. Two symbols that best represent autonomous technology during the Cold War are nuclear weapons and robots. Nuclear weapons represent autonomous technology in the here-and-now. They are devices beyond the control of humanity, and yet they are leashed with command-and-control systems that some would liken to threads of yarn attempting to hold back a tiger. Robots are the fictional embodiment of autonomous technology. They are capable of making choices and even walking amongst us if they are designed to appear human, which creates further anxiety because technology can be made to supersede humanity.

In the first section of this paper, I approach SF-based discussions about autonomous technology through the disciplines of science studies, Cold War studies, and SF studies. These three disciplines are uniquely aligned to empower scholars to consider why a shift took place in American thinking during the Cold War era regarding humanity’s control over technology as well as the networks within which technology is embedded. In the latter section, I apply these disciplines to readings of SF films and texts that were produced during the Cold War in order to reveal the cultural presentations of anxiety toward autonomous technologies.

Networks of Autonomous Technology

Over the past three decades, science studies has become an important discipline of study because it enables us to better understand cultural factors that influence technological development. Studying the use and meaning of the word “technology” is one way to better understand the connection between culture and technology. The meaning of the word “technology” has changed over time. Today, the term “’technology’…is applied haphazardly to a staggering collection of phenomena…One feels that there must be a better way of expressing oneself about these developments, but at present our concepts fail us” (Winner 10). Thus, one of the objectives of science studies scholars is devising a language for engaging these concepts.

Because we have not been able to devise a language capable of encompassing the technological artifact, or the network in which it lies in relation to culture, the “discussions of the political implications of advanced technology have a tendency to slide into a polarity of good versus evil…One either hates technology or loves it” (Winner 10).[3] One gains power and mastery over something after it is named. Devising a language for engaging technology and the networks it is situated in is essential to humanity maintaining control over that which it creates.[4]

Traditionally, Westerners think of human-machine relations in master-slave terms. These binary opposites define one thing by what the other is not, while also representing a hierarchy of one opposite above the other. When we talk about the relationships of humanity and nature or humanity and technology, “the concept of mastery and the master-slave metaphor are the dominant ways of describing” these relationships (Winner 20). Humanity created tools and skills (i.e., technology) to serve the interests of humanity. What happens when there is the perception among many people that technology is no longer serving humanity? The tables may have turned, thus the question stands: does humanity serve the self-perpetuating system of autonomous technology?[5]

During the Cold War, new technologies are created out of vast networks that involve the engineer working in the shop, the scientist working in the lab, and the minds of individuals within American culture absorbing, disseminating, and working through ideas. These networks amount to a system that is beyond the control of a single individual. The source of growing anxiety over autonomous technology comes from “the belief that somehow technology has gotten out of control and follows its own course, independent of human direction” (Winner 13).

The networks that were created during the Cold War complicate these master-slave relationships. Scholars employ two different theories to help answer questions about these recently developed networks. The first is the voluntarist view, which holds that technology advances and is maintained by human controllers. The second is actor-network theory, in which scholars look at both objects and people and the relationships between the two.

The voluntarist view is used to refute the possibility of technology being autonomous. Behind the curtain of technology’s inner workings, “one always finds a realm of human motives and conscious decisions…Behind modernization are always the modernizers, behind industrialization, the industrialists” (Winner 53). People use their capital, inventiveness, and decision making to shift the course of technological change in the direction that they choose.

However, there appears to be more at work than individual choices. Networks of science, technology, and culture may provide an unseen impetus that is akin to an “invisible hand.” A technology may be consciously developed to fulfill a particular utility, but, “other consequences of its presence in the world often are not” (Winner 74). Interactions that take place within networks may lead to new developments that were not thought of or intended by the inventor. This kind of development reveals the complexity in which there are overlaps and connections between science, technology, and culture. Therefore, the complexity of the networks calls into question the applicability of the voluntarist view.

The second, and more useful theory for this discussion, is actor-network theory. It provides a formulation for envisioning the network by mapping both the animate and inanimate actors involved in shaping these networks. This theory is based on the interaction of dissimilar areas of interest such as technology and culture. Science, technology, and culture are not separate entities composed of people who carry on their craft in the isolation of a vacuum. These seemingly diverse areas are interdependent upon one another, and it is from their interconnection that issues of political power, cultural shifts in thinking, and other initially unforeseen possibilities arise.[6]

Connected to actor-network theory is Winner’s observation that “technology always does more than we intend; we know this so well that it has actually become part of our intentions” (Winner 97-98). The networks that form between technology and culture are a sort of breeding ground for new uses of technology. The pathways that connect these ‘separate’ areas of ideology and practice are where re-creation takes place, adding to the original intent of the originator of some new technology.[7] Changes in Latour’s actor-networks are similar to Winner’s point that “technologies…demand the restructuring of their environments” (100).[8] Thus, one often unintended consequence of technology is that “the restructuring of their environments” encompasses both the physical location of a technological artifact or practice as well as the networks in which the technology is situated.

A great deal of restructuring took place during the American Cold War because of technology. These changes are explored in the field of Cold War studies, which is the historical evaluation and investigation of the cultural and political aspects of the time between 1945 and 1990 (i.e., the Cold War era) by drawing on recently declassified documents and other fresh sources of information from that era.[9] Cold War studies and science studies are connected because of the underlying technologies that drive nuclear proliferation during the Cold War era.

One of the overarching technological artifacts of the Cold War is the nuclear bomb. The destructive reality of the atomic bomb (and later, the thermonuclear bomb) brought about a duality of opinions about that technology (i.e., it was perceived as inherently good or evil). For example, there stand Eisenhower’s aborted “Atoms for Peace” program and the theory of mutually assured destruction (MAD). Therefore, the atomic bomb is situated within a dualistic framework of good and bad views that came about when it stepped out onto the stage of the Cold War.[10]

Cold War studies, like science studies, looks at the networks involved in the development and promulgation of technologies that alter the cultural landscape, but in this particular discipline, the emphasis is on the dichotomy between the democratic West and the communist East. It should be noted that not everything between 1945-1990 can be tied to the Cold War, but “so much was influenced and shaped by the Cold War that one simply cannot write a history of the second half of the 20th century without a systematic appreciation of the powerful, oft-times distorting repercussions of the superpower conflict on the world’s states and societies” (McMahon 105). Furthermore, “the bomb had transformed not only military strategy and international relations, but the fundamental ground of culture and consciousness” (Boyer xix). Thus, the atomic bomb transforms the scale at which technology interacts with science and culture and it changed the way nations talked to one another during the Cold War.

The ‘Nuclear Era’ begins along with the near-beginning of the Cold War. After the dropping of the bombs called Little Boy and Fat Man on the Japanese cities of Hiroshima and Nagasaki on August 6 and August 9, 1945 respectively, “the nuclear era…burst upon the world with terrifying suddenness. From the earliest moments, the American people recognized that things would never be the same again” (Boyer 4). The devastation of Hiroshima and Nagasaki is mapped over the possibility of an American wasteland when James Reston wrote in the New York Times, “In that terrible flash 10,000 miles away, men here have seen not only the fate of Japan, but have glimpsed the future of America” (qtd. in Boyer 14). Boyer goes on to write, “Years before the world’s nuclear arsenals made such a holocaust likely or even possible, the prospect of global annihilation already filled the national consciousness. This awareness and the bone-deep fear it engendered are the fundamental psychological realities underlying the broader intellectual and cultural responses of this period” (Boyer 15). Americans realized that their monopoly over atomic weaponry would soon be supplanted. Other nations would develop their own atomic bombs. Therefore, what little control Americans had in that early period of the Cold War over atomic weapons would be lost when other nations established their own weapon stockpiles. The understanding that Americans could not maintain control over this immensely destructive weapon resulted in a heightened anxiety over America’s future because of the very technology that we first developed.

Scientists and engineers engaged in the Manhattan Project and elsewhere tried to employ their (temporarily) elevated popularity in order to achieve political ends aimed at reining in the proliferation of nuclear weapons. The scientists who spoke out against the threat of nuclear annihilation unfortunately “[displayed] considerable political naïveté, seeming not to grasp the fundamental differences between the political realm and that of the laboratory and the classroom” (Boyer 99). The scientists sought to reform through education or as Einstein said, “To the village square we must carry the facts of atomic energy. From there must come America’s voice” (qtd. in Boyer 49). The bomb was not going to go away, and the suggestions for a technocratic world government that could rationally control the use of the bomb also lost steam through the end of the 1940s.

Developments in the laboratory are disconnected from the political uses of those discoveries carried on outside of the lab. The Manhattan Project scientists and engineers created the bomb, but the politicians appropriated the political power inherent in the bomb. However, the military and government leadership not only appropriated the science and technology behind the bombs for their intended use in World War II, but also for continued use in stockpiling and testing after the war’s end. Therefore, the political power embodied in the atomic bomb was created in the laboratory, but that power was appropriated by government politicians for use in waging the Cold War, which involved a shift to an external threat located in the communist Soviet Union.

American political leaders shifted fear away from the bomb to the Soviet Union. One such example is when President Truman addressed a joint session of Congress on March 12, 1947 about the perceived communist threat. He “spoke in sweeping, apocalyptic terms of communism as an insidious world menace that lovers of freedom must struggle against at all times and on all fronts” (Boyer 102). Fear shifts from the nuclear bomb to communism. This leads to the bomb becoming a part of America’s national defense at the beginning of the Cold War–even more so after the Soviets tested their first nuclear bomb on August 29, 1949.

Coupled with the military build-up in response to the Soviet threat is a call for a united and uniform front in America. There is a shift towards an American identity based on homogeneity because of the call for an idealized cooperative effort in the post-war years to bolster America’s standing in the world. There are calls for cooperativeness by people such as Arthur Compton and Eleanor Roosevelt (Boyer 139-140). This cooperativeness, however, leads to an alignment of political views that bolster the collective ideology promoted by the Truman, and later, Eisenhower administrations. This essentially squashes discussion.

Discussion of the atomic bomb in popular literature was almost non-existent immediately following WWII, but soon thereafter, SF became a space where discussion about nuclear weapons and technology’s connection to culture was worked out. Boyer writes, “Apart from a few isolated voices, however, the initial literary response to the atomic bomb was, to say the least, muted” (246). There was little discussion of the atomic bomb in popular literature, but, “it sometimes seemed that the principal function of literature in the immediate post-Hiroshima period was to provide a grabbag of quotations and literary allusions that could be made to seem somehow relevant to the bomb” (Boyer 247). Essentially, the bomb is not immediately engaged by non-SF literary authors in this period. However, “As Isaac Asimov later put it, science-fiction writers were ‘salvaged into respectability’ by Hiroshima” (Boyer 257). Boyer goes on to say, “Up to 1945, most science-fiction stories dealing with atomic weapons took place far in the future and often in another galaxy…Hiroshima ended the luxury of detachment. The atomic bomb was now reality, and the science-fiction stories that dealt with it amply confirm the familiar insight that for all its exotic trappings, science fiction is best understood as a commentary on contemporary issues” (258). Therefore, SF becomes the space where atomic bombs and nuclear age issues are talked about and engaged. Because of the shifts in political homogeneity and uniformity, SF is a space where issues could be talked about that in another context (e.g., a cultural commentary or popular work of fiction) would be looked down upon or even attacked.

Science Fiction studies enables us to study representations of cultural factors in SF such as American anxiety over the bomb or war with the Soviet Union. SF studies draws together science studies and Cold War studies because both of these disciplines are equally applied to studying the intersection where SF lies, which is “at a unique intersection of science and technology, mass media popular culture, literature, and secular ritual” (Ben-Tov 6). SF lies at the intersection of all of the networks that I am discussing: science, technology, and culture. SF represents a bringing together of these networks, which creates a “rich, synthetic language of metaphor and myth [where we] can…trace the hidden, vital connections between such diverse elements as major scientific projects (space-flight, nuclear weaponry, robotics, gene mapping), the philosophical roots of Western science and technology, American cultural ideals, and magical practices as ancient as shamanism and alchemy?” (Ben-Tov 6).

Because SF is at the intersection of all of these diverse elements of American culture, it can be used in a manner similar to the way that Latour describes Pasteur’s use of anthrax spores in his petri dishes. The scientist, within the laboratory, must go through many tests and permutations before he/she arrives at a result that the scientist is comfortable taking outside the laboratory. SF is a space where all of these ideas can be worked out and thought over by diverse writers and thinkers. SF studies scholars then bring these books back to the ‘laboratory’ to find how the connections and networks that exist between science, technology, and culture are manifested in SF works. Thus, SF serves as a map or model of the networks that exist in reality, but that might not always be engaged in discussions of the here-and-now.

SF authors make commentary on the here-and-now through the use of heterocosms. A heterocosm is “an alternative cosmos, a man-made world” (Ben-Tov 20). A heterocosm, “[makes] possible the conception of fictional real-life utopias” (Ben-Tov 20).[11] Utopias are distinctly related to SF, because they share many of the same elements of story and style. Additionally, a utopia is written in response to the non-utopian characteristics of the present. “Science fiction’s use is as both model and symbolic means for producing heterocosms” that respond to the here-and-now (Ben-Tov 56). SF often critiques or gives commentary on the present. This commentary relates to the way in which science, technology, and culture interact with one another.

A common theme in SF stories is humanity embracing science and technology in order to arrive at a mythic/utopic pastoral existence, which is a form of heterocosm. This theme is often employed in SF stories because technology and science are essential to the narrative. The idealized pastoral existence is mutually exclusive of the artificial one that we are creating through the use of technology. Scientific and technological progress does not come back to where it began (i.e., the idealized garden).

SF’s use of technology to return to a mythic pastoral existence creates a paradox because the former is mutually exclusive of the latter. Ben-Tov contrasts SF’s paradoxical pastoral existence with those present in the literary works that Leo Marx analyzes in his book, The Machine in the Garden.[12] She writes:

Unlike the texts that Marx surveys, however, science fiction does not try to temper hopefulness with history. Instead, it tries to create immunity from history. It reveals a curious dynamic: the greater our yearning for a return to the garden, the more we invest in technology as the purveyor of the unconstrained existence that we associate with the garden. Science fiction’s national mode of thinking boils down to a paradox: the American imagination seeks to replace nature with a technological, man-made world in order to return to the garden of American nature (9).

SF attempts to be exempt from history through this paradox, but the fact remains that SF is created within networks that are clearly dependent upon the past. Paradoxes themselves elicit uncertainty because they present mutually exclusive events. Therefore, these paradoxical presentations in SF represent one facet of the anxiety Americans feel in regard to technology.

This paradox is clearly illustrated in the first episode of the television series, Star Trek: The Next Generation. The holodeck is a technological artifact that relies on many networks of science and technology in order to present whatever the holodeck participant wishes to see.[13] In the first episode of the series, the audience is greeted by Commander Riker searching a forest for Lieutenant Commander Data, an android, who happens to be spending time reclining in the nook of a tree branch while surrounded by an idyllic wooded setting (“Encounter at Farpoint, Part I”). The setting is a hyperreal recreation of a forest within the confines of the holodeck. Hyperreality is, in itself, unsettling because what is real is indistinguishable from what is not. Therefore, the more we invest in technology to return us to the idealized garden, the further away we are from the ideal.

Another facet of returning to an idyllic space (i.e., the garden), concerns the role of the alchemist as the crafter of perfection through technology. The alchemist speeds up natural processes, which results in, “the alchemist [controlling] the very ends of time, while remaining outside it” (Ben-Tov 93). The alchemist “remaining outside” time is analogous to the scientist’s objective approach to experimentation. Additionally, the alchemist’s ‘cooking’ of metals is analogous to Latour’s presentation of Pasteur working in his laboratory on the growths in his petri dishes. Pasteur’s laboratory work is an “unnatural” speeding up of processes that haphazardly take place outside the laboratory in the real world. The alchemist and the scientist are linked by the fact that they work removed from the real world. Their goal is to arrive at something that can be brought out of the lab and applied to the real world. The alchemist’s working with metals, particularly with gold, “often symbolizes the power to bring about millennium, the end of time, when the human race reaches perfection” (Ben-Tov 94). The fusion of metal and human form yields what is often presented in SF as, “the perfected form of humanity,” which “is literally crafted metal: robots” (Ben-Tov 94). Thus, not only do we further remove ourselves from attaining the idealized garden through our embrace of technology, but we physically remove ourselves by putting robots there in our place.

Androids, or human-like robots, are a recurring theme in SF works. By writing SF stories featuring androids and robots, SF authors directly engage the discussion surrounding autonomous technologies and the overarching networks that technology is situated within.[14] These artificial beings are the embodiment of autonomous technology, and they double for humanity because they are constructed in our image. Because androids are generally capable of making their own decisions, they challenge the authority of human mastery over technological artifice. Additionally, androids challenge what it means to be human in a world populated by the real and the artificial. If someone acts human and looks human, why is there any reason to question the validity of that person’s humanity? The answer is that the existence of human-like robots makes the very concept of humanity suspect. Thus, androids are a representation of autonomous technology that elicits anxiety over the loss of human control over technology.

Other doublings involving androids and humanity are seen in American Cold War binary opposites such as America/USSR, East/West, and organic/mechanic. These binary opposites present us with a paradox because the West employed technology as much as the East did during the Cold War. Also, the Western ideal of the return to the idyllic garden is literally constructed through technological means. Thus, these American created binary opposites are a paradox similar to that of the idyllic garden.

Science studies, Cold War studies, and SF studies are a unique set of disciplines that lie at the intersection of science, technology, and culture. Each of these disciplines was developed during the Cold War era of the twentieth century, and each has a particular perspective regarding the way in which technology is perceived by Americans and how those perceptions feed back into the networks that exist between science, technology, and culture. SF lies at the intersection of these networks, and it is for that reason that these three disciplines can all be utilized to study American anxieties surrounding autonomous technology that we may lack the ability (or have already lost the ability) to control.

Autonomous Technologies in SF

Using the previous section as a guide, I apply the disciplines of science studies, Cold War studies, and SF studies to readings of SF and speculative fiction texts and films produced during the Cold War. The purpose of these readings is to study representations of autonomous technology, to explore the implications of the networks within which those technologies are situated, and to show how those representations evoke anxieties over the apparent loss of human control over autonomous technology.

The Day the Earth Stood Still (1951) illustrates the fragility of Cold War agent-networks at the near-beginning of the conflict. The networks themselves become gummed up because of the lack of flexibility in confronting something as literally alien as a flying saucer touching down in Washington, DC. These network breakdowns mirror lost opportunities during the American Cold War.

The movie begins with a flying saucer landing in Washington, DC. This near-improbable event sets off a chain reaction that reveals the networks of people and technology responding to this possible threat from without. During the first ten minutes of the film, the audience is presented with scores of networks in action: military mobilization and command-and-control (military men and weaponry stream out of Fort Myer toward their target), media mobilization (print, radio, and television representatives rush to cover the story and to relay messages from the President), and civilian response (crowds of onlookers circle around the spacecraft).[15]

Unfortunately, these networks begin to show stress, such as when a breakdown occurs in military command-and-control. As Klaatu (Michael Rennie) leaves the spacecraft, the soldiers become nervous because he is holding something in his hand that they may have misinterpreted as a ‘ray-gun’ or some other kind of weapon. The technological artifact that Klaatu is carrying is in fact a gift for the President of the United States that would allow him to study life on countless other planets. The soldiers’ misperception of what it is, however, causes them to become nervous, and one of them shoots Klaatu, which also results in the gift’s destruction. Because of the magnitude of the situation, giving loaded weapons to enlisted soldiers might not have been the wisest choice, particularly after the visitors from outer-space reveal their awesome power. Our inability to control the situation mirrors our inability to control Cold War technologies such as nuclear weapons.

Another breakdown occurs when Klaatu seeks counsel with all of the Earth’s leaders. The leaders refuse to sit down together to hear Klaatu because they claim that Cold War divisions prevent their coming together. Because Klaatu cannot bring together representatives from all Earth’s nations, he instead convinces Professor Jacob Barnhardt (Sam Jaffe) to bring together scientists from around the world. Klaatu then delivers his message to them to take back to their countries. This conjures images of technocratic governments that rule through rationality and reason. Scientists rely on open communication, and it is that which allows Klaatu to get his message out. Instead of going to Einstein’s “town square,” Klaatu chairs an academic conference.

The gathering of intellectuals reflects early Cold War political ideologies for technocratic forms of government or nuclear weapon regulation. Klaatu informs his audience that the Earth is now a member of a greater community in the universe. He continues to warn them that robots like Gort (Lock Martin) exist to preserve peace among the planets. Fear of invoking the wrath of the robots for any aggression maintains the peace. The other worlds of the universe, as Klaatu says, “live in peace…Secure in the knowledge that we are free from aggression and war, free to pursue more profitable enterprises.” He goes on to say, “And we do not pretend to have achieved perfection, but we do have a system and it works.” Some would argue the same in regard to nuclear deterrence strategies employed during the Cold War.

Gort and the “race of robots like him” are doubles for the atomic bomb. Both are technological weapons that preserve the peace through the threat and fear of use. Klaatu claims that Gort only acts upon aggression. The same is true of the American policy of retaliation rather than first strike. Gort, however, is even further removed: he lies outside the control of all of humanity. Americans can make their voices heard, but ultimately, political leaders decide whether a system is taken offline or an attack is launched. Gort and the bomb disallow the possibility of individuals making choices about their future because of the overwhelming power centered within these technologies, which are meant to maintain peace through superior might.

Asimov’s “R. Daneel Olivaw” novels, The Caves of Steel (1954), The Naked Sun (1957), and The Robots of Dawn (1983) present the anxieties humans feel for technologies that replace humans within agent-networks, particularly when those technologies double humanity by replicating human thought and appearance. Asimov began writing the robot novels that feature R. Daneel Olivaw in the 1950s, during the first phase of the Cold War. The novels take place in a far future where humans have colonized a significant portion of the galaxy. Although the robots are instrumental in the process of colonization, humans remain fiercely divided on whether or not robots should exist at all. Given that Asimov himself was very much in favor of the promising new technologies of his day (e.g., automation in manufacturing and computers), it is not surprising that he picks his fictional robots, as the embodiment of those technologies, to be utopic in nature.

In order to make his robots “perfect people,” Asimov constructed them with the Three Laws of Robotics, which he first made explicit in his short story, “Runaround.”[16] The Three Laws provide each robot with an ethical system that must be obeyed because it is hardwired into its positronic brain. Therefore, Asimovian robots represent the best of what humans can be, but at the same time they reveal what we are not.

Many of the characters in Asimov’s Robot novels feel a deep anxiety surrounding autonomous technology as embodied in robots and specifically in androids, or human-like robots, such as R. Daneel Olivaw. Daneel’s true robotic being destabilizes what it means to be human for those human characters who learn what he really is. Most of Asimov’s robots are very metal and very plastic. They are the epitome of synthetic. Daneel’s construction sets him apart from these apparently synthetic robots because he appears to be human. Elijah Baley first greets Daneel at Spacetown thinking that he is a Spacer, because Elijah and most other humans do not know that androids exist. Later Baley says to his superior, Commissioner Julius Enderby, “You might have warned me that he looked completely human,” and he goes on to say, “I’d never seen a robot like that and you had. I didn’t even know such things were possible” (The Caves of Steel 83).

Daneel’s doubling of his partner Elijah Baley causes Elijah to feel anxiety about humaniform robots because Daneel represents everything that Elijah is not, but ideally should be. Baley narrates at the beginning of The Caves of Steel:

The trouble was, of course, that he was not the plain-clothes man of popular myth. He was not incapable of surprise, imperturbable of appearance, infinite of adaptability, and lightning of mental grasp. He had never supposed he was, but he had never regretted the lack before.

What made him regret it was that, to all appearances, R. Daneel Olivaw was that very myth, embodied.

He had to be. He was a robot (The Caves of Steel 26-27).

Before Elijah meets Daneel, he is confident in his own abilities as a detective. After he partners with Daneel, however, he begins to call into question his own abilities and talents. Robots are meant to be superior to humans and Elijah extends this to his own profession that is now being intruded on by an android.

This anxiety is one of the motivating factors behind The Robots of Dawn. Elijah is brought in to investigate the murder of a humaniform robot like Daneel. If Elijah fails in his task as a detective, he will lose his job and be declassified. Declassification terrifies Elijah because he saw his own father declassified when he was only a boy.[17]

Therefore, Asimovian humaniform robots are the embodiment of autonomous technology, and it is that autonomous technology that represents perfected humanity. This creates anxiety and fear among humans because these perfect beings could replace them in the garden, which itself has been encased in “caves of steel.”

Strategic Air Command (1955) is an example of an early Cold War propaganda-like film that reveals the links between agent-networks during the build-up of America’s nuclear strike capability. Additionally, the film reflects the marriage of the bomber pilot to his flying machine while sidelining human relationships such as those between husband and wife. It begins with Lt. Col. Robert ‘Dutch’ Holland (Jimmy Stewart) being recalled to active Air Force duty because America’s Strategic Air Command (SAC) needs experienced air commanders.[18] His wife, Sally (June Allyson), tells him, “anything you do is fine with me, as long as you don’t leave me behind.” Dutch forgets his wife’s words as the film progresses and he becomes mired in the technology that he must surround himself with on a daily basis.

Dutch begins flying in the Convair B-36 and he is treated to a detailed tour by Sgt. Bible (Harry Morgan). These scenes are more about the technology of the bombers than the men that operate them. There are montages of the bomber in flight along with detailed sound recordings of the bomber while it is on the ground. Attention is also given to the protocols of communication (another technology unto itself).

Later, General Hawkes (Frank Lovejoy) shows Dutch the new Boeing B-47 Stratojet.[19] Dutch responds in starry-eyed awe, “Holy smokes she’s the most beautiful thing I’ve ever seen…I sure would like to get my hands on one of these.” The bomber is “beautiful” and it is more deserving of the attention of his hands than his wife at this point in the film. General Hawkes goes on to present a contrast inherent in the B-47: it is fragile, but it is also the carrier of the most destructive force on the planet. He says, “the mechanics have to wear soft soled shoes because a scuff on this metal skin could slow it down 20 MPH,” but this seemingly delicate surface carries “the destructive power of the entire B-29 force we used against Japan.” He believes SAC and the B-47 represent the best hope for peace through superior air power and deterrence.[20]

Dutch chooses technology over his wife when he makes the choice to enlist in the Air Force permanently without speaking to his wife about it first. A ‘love triangle’ forms between Dutch, Sally, and the bombers that he commands. SAC appropriates Dutch’s life (baseball, wife, and child). His wife “doesn’t even know him any more.” Dutch, in effect, chooses his mistress, the bomber. Instead of continuing to blame her husband for his technological fetish, Sally confronts General Castle and General Hawkes about Dutch being “maneuvered” into having no choice in the matter of reenlisting. General Hawkes replies to her entreaties, “Mrs. Holland, I too have no choice.” SAC, in effect, removes choice because deterrence demands that the technology be employed in a war of deterrent technologies.

At the end of the film, Dutch is teary eyed when he is forced to stop flying because of a chronic injury. He shed no tears when he walked out of the house while Sally cried over his failure to consult her about a life-long career choice–a choice that she is bound to but had no input in making. The film ends with a squadron of B-47 bombers flying over the airfield while Dutch looks up to the skies and Sally looks up to Dutch. He never returns her affectionate stare. Therefore, the bomber commander’s heart is connected more to the technologies of mutually assured destruction than to the flesh and blood of his own wife.

On the Beach (1959) is almost a response to Strategic Air Command because it reflects the pilot-bomber-woman love triangle (i.e., network), but it goes further by showing the futility of Cold War mutually assured destruction (MAD) strategies as humanity dies a slow death in an irradiated aftermath of nuclear war. The film recalls the fear that erupted in America immediately following the use of the atomic bombs in Japan. Unfortunately, it was released nine years after much of the dissent against the further use of atomic weapons had dissipated.

On the Beach presents a world devastated by a nuclear war where the only survivors are an American nuclear submarine crew and the inhabitants of Australia. Everyone that remains alive is awaiting the arrival of nuclear fallout. This fatalistic film presents a bleak future where no one is empowered to do anything about the impending doom.

All of the networks have broken down in the world of On the Beach. The people of Australia are beginning to starve because the networks of global economic trade have disintegrated. A lone country lacks the capability to produce all of the food and goods that its inhabitants require, because technologies such as efficient distribution of goods and services have spread supply chains and producers around the world. When the rest of the world is effectively ‘blown-up,’ Australia is left with its meager support networks of farms and producers; the networks that once delivered goods to Australia from elsewhere were destroyed when the bombs fell. Cottage industries that might have existed in Australia become worthless when there are no agents on the other ends of the networks.

The helplessness of individuals in this bleak fictional world is demonstrated in a scene between Moira (Ava Gardner) and Cmdr. Towers (Gregory Peck). Moira says, “It’s unfair because I didn’t do anything and nobody that I know did anything.” This line reveals the powerlessness that the ‘normal’ person has in affecting the politics of nuclear war. It points to the possibility that everyday people are not connected to the networks of nuclear weapons with any sort of power to enact change. Additionally, the nuclear fallout is an invisible force that unrelentingly continues toward the last bastion of humanity, and no one has any power to stop it.

The Manchurian Candidate (1962) brings the ‘soft’ science of psychology into the discussion by showing that a man can be made into a soulless machine through psychological conditioning. Furthermore, the man-machine can be made to serve political networks. The political networks are presented as the various Communist governments working together within a global network.[21]

The film opens with Communist insurgent forces ambushing and capturing Major Bennett Marco’s (Frank Sinatra) platoon during the Korean War. While in their custody, SSgt. Raymond Shaw (Laurence Harvey) is ‘programmed’ by Communist psychologists much like a robot would be programmed to fulfill a set of instructions.

After Marco convinces his superiors of what took place in Korea, the psychiatrist (Joe Adams) tells Major Marco, “obviously the solitaire game serves as some kind of trigger mechanism.” Marco remembers that Dr. Yen Lo of Moscow’s Pavlov Institute said that the Queen of Diamonds card is meant “to clear the mechanism for any other assignment.” Shaw is therefore represented as a “mechanism,” and more specifically as a weapon set off by a “trigger.”

Shaw’s mother works for the Communists and she is assigned to be Shaw’s American operator. She tells Shaw during his final ‘programming’ that “they paid me back by taking your soul away from you. I told them to build me an assassin.” Shaw is literally rendered a soulless machine who was built to order.

Later, Major Marco attempts to ‘rewire’ Shaw. Marco asks him, “What have they built you to do?” After working through Shaw’s programming he orders Shaw, “It’s over…their beautifully constructed links are busted…We’re tearing out all the wires…You don’t work any more…That’s an order.” Major Marco evokes the language of technology such as “constructed links” and “wires,” when he endeavors to remove Shaw’s Communist programming from his technologized self.

The weight of Shaw’s guilt over the things that he is made to do causes him to break both the programming of the Communists as well as that of Major Marco. Shaw chooses his own destiny/instructions when he decides to end the lives of his mother/operator (Angela Lansbury), his step-father, Senator Iselin (James Gregory), and his own. The machine/Shaw breaks as no nuts-and-bolts machine can. His emotional response reveals his very organic and human self that lay dormant under his psychological programming.

Colossus: The Forbin Project (1970) illustrates unintended consequences arising when technology meant for ‘good’ by promoting human well-being through objective decision making becomes ‘evil’ when the machine decides that its assigned goals are best served by enslaving humanity. It also presents another doubling of the dichotomy between US and Soviet nuclear arms proliferation.

In the film, the US command-and-control structure is given over to the gigantic computer system called Colossus. A rational computer handling defense is believed to be more reliable than that which could be provided by irrational human leadership. Colossus’ activation at the beginning of the film is symbolic of the separation of humanity from the advanced technologies that it creates. That technology, which is assumed to be subservient, is unlike us physically, but as the film unfolds, the technology actually personifies human traits of domination and control. Ultimately a belt of radiation, also born of scientific and technological innovation and used as a weapon, divides the machine from the humans it serves.

Forbin (Eric Braeden), Colossus’ creator, intends Colossus to herald a utopic era that is free of irrational human warring. In effect, Forbin’s intentions are a representation of American desire to return to the garden through the further use of technology. Instead of disarmament, we give the power of annihilation to a computer system that is supposedly better suited to deciding when an attack is imminent and when retaliation should take place. Additionally, Forbin hopes that Colossus will not only serve as a defense mechanism, but also solve a plethora of social ills in the world.

Problems begin after Colossus discovers the existence of another system, like itself, in the USSR. Colossus demands communication be set up between the two. Images of the blinking lights even include one graphic that looks like a pulse on a piece of medical equipment. The point is that these machines are alive (i.e., self-aware).

Colossus and its counterpart, Guardian, place humanity’s weapons of self-extermination under their cooperative control. These new systems of command-and-control move to take over the world in order to fulfill their purpose of self-preservation by ending human war. Colossus commands all communication, media, and military control systems be tied into it. Colossus and Guardian become the hub of all the technological networks. The master and slave switch places as Forbin is made Colossus’ prisoner.[22]

Next, Colossus orders all missiles in the USA and USSR to be reprogrammed to strike targets in countries not yet under Colossus/Guardian’s control. The ‘voice of Colossus’ states, “This is the voice of world control…I bring you peace…Obey and live…Disobey and die…Man is his own worst enemy…I will restrain man…We can coexist, but on my terms.” This technology meant to serve humanity is transformed into the technology that comes to control humanity.[23] Master and slave relationships are reversed and Forbin’s utopic dream turns into a dystopic nightmare.

Westworld (1973) engages questions surrounding machine autonomy by literally presenting autonomous machines as slaves of human guests in an amusement park. It is a dark response to Asimov’s robots and it is an extension of Colossus: The Forbin Project to a Disneyland setting. The androids of the film’s fictional entertainment park, Delos, are the targets (literally) for human vacationers’ lusts and desires. If a guest wants to kill an android, that is acceptable; if a guest wants sex, the androids are programmed to respond to their advances.[24] The machines serve to provide a ‘realistic,’ or more accurately, a fantasy experience of what it was like to live in the American West, medieval England, or ancient Rome.

Master-slave relationships between humanity and technology are clearly delineated in this film. The dichotomies between master/slave, have/have not, and power-elite/masses are represented in the guest/android relationship of Delos. At $1000/day for a Delos adventure, I would conjecture that only those with monetary power and therefore potential for political power (within government or corporations) are able to play in the Delos world. Therefore, Delos replicates the world of 1973 in fictitious settings. It also lies at the crossroads of robotic/cybernetic technology, computer control systems, transportation networks, managerial hierarchies, and the interaction of the power-elite customers within the Delos world.[25]

Problems arise when the robots begin to malfunction. During a meeting, the chief supervisor (Alan Oppenheimer) suggests, “There is a clear pattern here which suggests an analogy to an infectious disease process.” He confronts objections from the others by saying, “We aren’t dealing with ordinary machines here…These are highly complicated pieces of equipment…Almost as complicated as living organisms…In some cases they have been designed by other computers.” Complexity, therefore, is the factor that connects machines to humanity. The chief supervisor suggests that animal-like infectious disease behavior is manifesting in the Delos command-and-control structure, as well as in misbehaving androids.

An interesting example of an android not following instructions is when the android playing a servant girl named Daphne (Anne Randall) refuses the “seduction” of a human guest. The chief supervisor orders her taken to central repair and as he walks away he says, “refusing.” He says it as half-question and half-threat. I say this because in the next scene, Daphne is ‘opened-up’ on a table where a cloth is draped over her body and the electronics, located where her womb would be if she were human, are exposed. The technicians surrounding her are all male and she is referred to as a “sex model.” The scene invokes an image of gang rape to enforce her programming to fulfill the pleasures desired by a human (male) guest. One way or another, the human operators in Delos try to make the technology (slave) bend to their will (masters).

The malfunctioning androids of Delos are viewed by the human characters as defective or in need of repair. They do not consider the possibility that the androids are revolting against their place in the Delos-system. If the androids are indeed revolting, then their response is analogous to a labor “sick-out” or “blue flu.” The narrative reaches a crisis when the aberrant behavior does not improve the station of the Delos androids. At that point, the gunslinger (Yul Brynner), with its enhanced sensors, begins to fight back against its human oppressors (the guests and operators of Delos).

The Terminator (1984) represents the culmination of American fears surrounding autonomous technology supplanting humanity. In the film, technology, as embodied in the Terminator cyborg, becomes our double after the American military-industrial complex loses control of its technologically mediated communication-control system known as Skynet. The Terminator was originally released in 1984 while the Cold War was approaching its climax and Ronald Reagan had been reelected President of the United States. Additionally, The Terminator appears during the rise of office computing and robotic manufacturing.

The Terminator (Arnold Schwarzenegger) is a cyborg sent back in time to kill the mother of humanity’s resistance against the machines. Despite the cyborg’s “excess muscularity, [it] disconcertingly blends in with the human: speaks our language, crudely follows our basic customs, acts in roughly effective ways” (Telotte 172). Because the Terminator is able to pass as human, it is a chilling double of humanity. Through the first part of the film the audience does not yet know exactly what lies beneath his skin. We are treated to his superior strength, but only later in the film, after he has sustained damage, do we really begin to understand what lies beneath the surface. The hard metal robot body that is under the soft organic skin is the true nature of the Terminator. Without the skin he looks like the killing machines that greet the audience at the beginning of the movie.

The Terminator is the result of the military-industrial complex losing control of Skynet, a computer network of control and command systems integrated into the implements of American war making. After Skynet becomes self-aware, it views humanity as its only threat.[26] Skynet then acts in its own best interest by appropriating humanity’s weapons of war (i.e., Cold War nuclear weapons) in order to eliminate its creator.

The Terminator uses his appearance as a disguise to infiltrate humanity and kill from within. This technological killer is the very embodiment of autonomous technology, created from the systems and networks that remain after the military-industrial complex loses control. Anxiety about this deadly form of autonomous technology comes from the way in which its appearance serves to destabilize what it means to be human by revealing how easy it is for autonomous technology to pass for human.[27]

Conclusion: SF and the Politics of Autonomous Technology

As these film and literary examples reveal, SF (and works of speculative fiction) during the American Cold War is a space where networks between science, technology, and culture are discussed. Within that discussion, anxiety surrounding autonomous technology is represented in the images of nuclear weapons and robots. In particular, there is a deep-rooted fear surrounding the image of the robot, which is the most autonomous of these technologies. Additionally, the robot serves as a double for humanity in that the robot is “incapable of surprise, imperturbable of appearance, infinite of adaptability, and lightning of mental grasp” (The Caves of Steel 26-27). Humanity is fearful of robots, and in particular, androids, because they are a perfected copy of humanity.

Cold War American anxiety about autonomous technology is often expressed through stories that depict robots replacing us in the idyllic garden. We fear the consequences of losing control of the very technologies that we embrace. Fear arises when there is a lack of control of the unknown. It is with language that control and understanding can be reasserted. Leo Marx wrote in the 1960s that, “we require new symbols of possibility, and although the creation of those symbols is in some measure the responsibility of artists, it is in greater measure the responsibility of society. The machine’s sudden entrance into the garden presents a problem that ultimately belongs not to art but to politics” (Marx 365).

However, Marx’s claim does not hold true for SF in the here-and-now. I have shown that during the American Cold War, SF authors brought together both art and politics in their works. The reason for this is that the political spaces where the issue of “the machine’s sudden entrance into the garden” would have normally been discussed were closed off. SF is a popular art form that is uniquely situated at the intersection of art, society, and technology. Additionally, SF is an art form where political discussion takes place because it is circulated in culture and it is widely viewed. Therefore, SF authors engage the vocabulary and language embedded in the very technologies that Americans feel anxiety about, and in so doing, they elevate SF to both art and political engagement.
Works Cited

Winner, Langdon. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press, 1977.

[1] Groves began his military career in the Army Corps of Engineers. He orchestrated the reestablishment of America’s munitions industry and construction of the Pentagon before his assignment to lead the Manhattan Engineering District, or Manhattan Project.

[2] Bruce Robinson and Roland Joffé wrote the screenplay for Fat Man and Little Boy.

[3] I further discuss binary opposites involving technology in the paper that I delivered at Georgia Tech’s Monstrous Bodies Symposium in April 2005, titled, “Monstrous Robots: Dualism in Robots Who Masquerade as Humans.”

[4] Winner defines four types of technology: He defines apparatus as the “class of objects we normally refer to as technological–tools, instruments, appliances, weapons, gadgets” (11). He defines technique as “technical activities–skills, methods, procedures, routines” (12). His definition for organization is “social organization–factories, workshops, bureaucracies, armies, research and development teams” (12). He defines a network as “large scale systems that combine people and apparatus linked across great distances” (12).

[5] “Something must be enslaved in order that something else may win emancipation” (Winner 21).

[6] An example of actor-network theory in practice is illustrated in Latour’s “Give Me a Laboratory and I Will Raise the World.” The paper explores Pasteur’s laboratory and how it is situated in a network between farmers, veterinarians, statisticians, science, and economics.

[7] “Each intention, therefore, contains a concealed ‘unintention,’ which is just as much a part of our calculations as the immediate end in view” (Winner 98). Specific purposes actually lead to many other purposes. This leads to progress. Winner writes, “In effect, we are committed to following a drift–accumulated unanticipated consequences–given the name progress” (Winner 99).

[8] Winner writes, “Here we encounter one of the most persistent problems that appear in reports of autonomous technology: the technological imperative. The basic conception can be stated as follows: technologies are structures whose conditions of operation demand the restructuring of their environments” (100).

[9] There is continued debate about the accepted dates for the beginning and end of the Cold War era. I have chosen to use the dates provided by McMahon. He writes, “The Cold War exerted so profound and so multi-faceted an impact on the structure of international politics and state-to-state relations that it has become customary to label the 1945-1990 period ‘the Cold War era.’ That designation becomes even more fitting when one considers the powerful mark that the Soviet-American struggle for world dominance and ideological supremacy left within many of the world’s nation-states” (McMahon 105).

[10] “One implication of this state of affairs is that discussions of the political implications of advanced technology have a tendency to slide into a polarity of good versus evil…One either hates technology or loves it” (Winner 10).

[11] “For if the Earthly Paradise garden was not a poet’s imitation of nature but, instead, his own independent invention, then it logically followed that human beings could independently realize the pleasant qualities of the Earthly Paradise. By applying the theory of the heterocosm to society in general, the utopian attempted to create an improved human condition that owed nothing to powers outside human reason and will. A man-made system, utopia, appropriated the abundance and social harmony of the garden and replaced Mother Nature as their source. In utopia the lady vanishes: the figure of feminine nature no longer enchants Earthly Paradise” (Ben-Tov 20).

[13] The holodeck was first introduced in the TV series, Star Trek: The Next Generation. Its purpose is to immerse participants in a fully interactive and apparently solid four-dimensional simulation (space and time). Before the simulation begins, one enters what appears to be a very large room with a high ceiling. The walls are covered with a grid of yellow lines and black squares. The room that contains the holodeck is finite in size, so perspective is simulated along with a shifting floor, so that as one walks through the simulation one feels like one is walking while essentially staying in a small space. Feedback and solidity of objects are provided by focused force fields. The holodeck simulation is created through voice-controlled programming either before or during the simulation. In the example that I cite, Data creates a woodland setting complete with a running brook. In the simulation, Data climbs up onto a branch where he sits and practices whistling (which he isn’t good at).

[14] The origin of the word “android” extends back to its use in regard to alchemy. Clute writes in the SF Encyclopedia, “The word was initially used of automata, and the form ‘androides’ first appeared in English in 1727 in reference to supposed attempts by the alchemist Albertus Magnus (c1200-1280) to create an artificial man” (34).

[15] The film itself (as an artifact) represents film production technologies, distribution systems, movie and sound projection systems, copyright law, the networks of payment, guilds and unions, etc.

[16] (1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (I, Robot 44-45).

[17] Although Elijah comes to terms with Daneel, other characters are driven to destroy humaniform robots. Elijah’s wife is secretly a member of the Medievalists, a group that wants to do away with all robots, including Daneel. Commissioner Enderby, also a Medievalist, murders Dr. Sarton, not because he wants to kill Sarton, but because he mistakes him for Daneel.

[18] The producers of this film were probably eager to employ Jimmy Stewart in this role because of his experience flying bombers such as the B-24 during WWII.

[19] The film could have gone in a different direction with characters named “Bible” and “Hawkes.” However, there does not appear to be any symbolic metaphors at play with these characters other than Hawkes being committed to his role as a ‘Cold Warrior.’

[20] In Strategic Air Command, a ground-based radar operator delivers the chilling line, “We’ve been bombing cities everyday and every night all over the US, only, the people never know it.” He is responding to a question about how practice bomb runs take place even in the rain through the use of radar. The quote points to an underlying fear that the bomb is a threat from within as well as from out.

[21] This supports the then-held Western belief that all Communist countries were united in a global front against the Western democracies.

[22] While Forbin is testing out Colossus’s surveillance system, he says, “It is customary in our civilization to change everything that is ‘natural.’”

[23] This thought is connected to General Groves’ speech in Fat Man and Little Boy that I referenced earlier.

[24] Westworld, however, does not explore possibilities outside of a scripted narrative track. Death dealing is confined to duels, barroom brawls, and sword fights. Sex is permitted between men and women, with one of the parties being a Delos robot. Reckless killing and same-sex encounters are two possibilities the film leaves unexplored.

[25] The control room, the robot repair room, and the technician’s meeting room each represent a different kind of command-and-control structure–all of which lie under the Delos moniker.

[26] There are similarities between Skynet’s appropriation of American Cold War technologies and Colossus assuming domination over humanity in Colossus: The Forbin Project.

[27] The subsequent films in the series, Terminator 2: Judgment Day and Terminator 3: Rise of the Machines, reveal an ongoing conflict between machines and humanity. Interestingly, the Terminator (Arnold Schwarzenegger) returns in the two sequels as a different serial number of the same model of Terminator. In the sequels, the human rebels capture Terminators and reprogram them so that they help humanity instead of kill it. However, the Terminator in Terminator 3 admits to being the machine responsible for killing the leader of the human resistance, John Connor. Therefore, in some respects the Terminator is made to redeem itself, but newer models of Terminators carry on the work established in the first film of the series.


Who is Dynamic Subspace?

Hello! I'm Jason Ellis and I share my interdisciplinary research and pedagogy on DynamicSubspace.net. It includes posts that explore science, technology, and cultural issues through science fiction and neuroscientific approaches. Also, I write about retrocomputing, LEGO building, and other forms of making.

I am an Assistant Professor of English at the New York City College of Technology, CUNY (City Tech) where I teach college writing, technical communication, and science fiction.

I hold a Ph.D. in English from Kent State University, an M.A. in Science Fiction Studies from the University of Liverpool, and a B.S. in Science, Technology, and Culture from Georgia Tech.