obligatory insight

Ray Kurzweil’s “Transcendent Man” predicts a future of exponentially accelerating technological advancement. Eventually, the film proposes, computers will rival and then surpass the human brain, resulting in a world so transformed by rapid technological change that it’s incomprehensible to modern humans.

Steven Spielberg’s “AI” experiments with a similar future. Melting polar ice caps have forced humanity to adopt robotic children as surrogate objects of affection rather than overpopulate the stark remaining land. David, one such robot, is the first of his kind to “imprint” upon a human being, simulating childlike love with uncanny accuracy. At the film’s end, the world has frozen over. Highly advanced technological life-forms are the sole remaining progeny of humanity. They dig up David to probe his brain for memories of humanity.

Since Spielberg seems to generally accept Kurzweil’s theory of advancement, it’s crucial that we examine the congruity of the ideology with the film’s mythology. The text and its proposed reality suffer from a few inconsistencies:

1. Reconciling the “icepocalypse” with the advanced state of technology required to create a human brain-like computer.
– – If humanity is capable of recreating its own brain, complete with childlike emotional attachment and the ability to chase a dream, how is it incapable of building climate models accurate enough to develop a plan of action that wouldn’t doom the species? Even today, we can predict certain weather phenomena with surprising accuracy (at least in the short, days-long term) and plan accordingly. There’s simply no way humanity would have died off in an ice age. I consider my suspension of disbelief thoroughly violated.

2. The techno-life isn’t capable of scanning David’s brain to extract his memories.
– – Instead, they resort to simulating David’s “natural environment” and try to glean information about humanity by observing David’s reactions to his resurrected mother (I’m not even going to dignify the “science” behind the resurrection with a deconstruction) in a sort of secondary-source mode of scientific observation. Not exactly the most reliable of methods, particularly not for techno-beings who are millions of times smarter than any human.

For someone feeling so uncertain, Ted expresses himself in no uncertain terms. Existence befuddles him – it feels fake, contrived. After experiencing the game-world, Ted’s stumbled into a prison of his own making: he can no longer distinguish the real from the virtual. The consequences of his decision to enter the game frighten him, as the action has permanently tainted his reality.

This is where Franklin’s quote comes in. Initially, Ted was unwilling to sacrifice his security (contentment with standard-issue “real” life, evidenced by his lack of a bio-port) for liberty (the “enlightenment” of the game experience). When he finally abandons his security at the Country Gas Station, he fails to realize the true consequences of his actions, as he doesn’t enter the game until the Ski Resort. There, he finally receives the liberty he paid for. But since he was forced into it, rather than complying of his own free will, he’s unable to handle true liberty.

2. Cares: Allegra Doesn’t Give One

Allegra slays Dr. Whatsisface in cold blood, because she apparently didn’t approve of his character. She demonstrates a stark lack of reservation with regard to human life, a sort of terrible liberty, lent to her by the freedom of the game. She regards others as means to her entertainment. Having cast away the “constraints” of society’s reservations towards killing others, she now enjoys freedom from responsibility toward other humans. She has given herself completely to the game; it swallows all of her inhibitions with its narcotic gratification.

Having traded the security of human rights within society for the game’s no-rules sandbox, Allegra enjoys absolute freedom from that same security. The freedom itself may not be the best kind… but it sure as heck reverberates into real life: the “actual” Allegra showcases a similar disregard for human life, murdering the designer of tranCendenZ for his slaughter of reality.

Moral Profiles of the Characters in Gattlebar Stalactica

(Apologies for the mangled subtitle, but “Stalactica” is too awesome not to be in it.) “How do extreme situations/crises affect the morals of the characters? What do their reactions reveal about mankind in this series?”

When the BSG characters are faced with an extreme situation, they revert to their most basic conception of the role of the individual in a society. Examples:

Adama: Faced with his ship’s malfunction, he tells his girlfriend Thrace to leave him behind and get herself to the evacuating BSG. He exhibits a selflessness indicative of his belief that one person saved is better than none. This utilitarian mindset resonates strongly with his deep-seated military background. Greatest good for the greatest number.

Thrace: Faced with Adama’s ship crisis, she maneuvers her ship around to his and saves them both with deft ease. She believes, well, that Adama’s a sexy hunk with a mind to match. She may or may not have saved anyone else in the same situation – we can’t know for sure yet. We can narrow her possibilities to two mindsets: Individuals are always valuable / People you know are valuable.

Commander: Faced with the fire situation, he chooses to kill ~80 men to save the ship from potential immolation. He compares the numbers, and the bigger number wins. Utilitarianism, just like Adama’s mindset. Greatest good for the greatest number.

Col. Tigh: Just like Commander Adama, he makes the utilitarian decision in the fire crisis. Greatest good for the greatest number. (Also, dammit, Wikipedia, when I look up a character’s name, I don’t want to see a series-ending spoiler at the top of the character’s profile!)

Gaius: Do we really need to waste words on this douche? The selfish prick is concerned only with his own fate after damning mankind. Standard-issue douchenozzlery.

President Roslin: Faced with the deaths of 42 cabinet members, she realizes she’s now the President of the Twelve Colonies. She accepts her obligation and begins doling out orders by the ladleful. This makes sense because she’s a teacher, and is naturally group-minded about needs and teamwork. “… some have greatness thrust upon them.” (fulfill your duties within society’s role for you)

Anyway, that was fun. It’s cool to see how the characters’ lives affect their ideas of individuality and their responses to crises.

“Emotion as the true measure of authentic personhood”

Title taken from Elaine Graham’s Representations of the Post/Human, p140, paragraph 2, line 1.

The absolute measure of humanity is a question that the students in Writing through Media are, by now, very familiar with. Luckily for us, Data the sentient android wrestles with this issue every waking moment of his life, and provides us with a wealth of characterizations and scenarios to analyze. Data, being the only fully sentient android in the known universe, is the object of much scientific scrutiny, as well as the subject of much lamentation and struggle regarding his status as “non-human”. Data strives to be quintessentially human, but is shown throughout the series fighting fervently for this goal. As Graham writes, “Human emotion is represented as a key measure of human distinctiveness – but a source of mystification, even danger, for Data.” (140) Sources of this danger include Lore, an android similar to Data but capable of human emotion.

But why is emotion so fundamental to our definition of what’s human? The obvious answer is that humans themselves usually experience it – whatever “it” is – and we can’t identify with a lifeform that doesn’t. Ironically, this exhibits a lack of the emotion of empathy on our part. A less obvious answer, though, is that we are completely incapable of understanding how a non-emotional, or completely rational, sentient being thinks. We cannot imagine the thought processes of such a mind, for multiple reasons:

1. “Reason” is itself a human invention. Or a discovery, depending on who you ask. The point is that it’s entirely possible that a being exists which uses neither emotion nor reason to determine its actions.

2. A completely rational sentient being has never been observed. We don’t know what one acts like. The character of Data is a nice attempt to create one, but that’s ultimately what he is – speculation.

3. Finally, we don’t know if emotion is a byproduct of mere sentience, or a leftover of biological evolution. If the former’s true, then Data should be considered human. If the latter is, then Data, while obviously conscious, is unable to feel the “genuine” (biochemically influenced) emotion that humans experience.

It looks like “human” is itself a fundamentally flawed category to use when examining the status of sentient beings. “Human” is, by definition, restricted to members of Homo sapiens, whereas a word like “sentient” is much more forgiving with regard to origin, emotional capability, and so on. We shouldn’t be asking whether Data is human, because the answer is obviously no. But even though Data is himself a creation of humanity – both in the series and as a TV character – we can agree that, within those restrictive bounds, he’s at least as much of a thinker as we are, and deserving of as much respect.

Amy’s Humanistic Confrontations

Amy’s a classic Scotswoman. Stubborn, full of conviction, trusting, opinionated. She sticks to her beliefs on life, morality, friendship, and so on. She’s even getting married, which is the ultimate acceptance of traditional values.

However, she’s faced with some extremely challenging decisions in season six of Doctor Who, particularly in the two-part episode “The Rebel Flesh” / “The Almost People”, wherein a special liquid is capable of forming an exact replica of another molecular structure. The people-copies this substance forms are a source of much tension over the course of the two-parter, particularly the Flesh-copy of the Doctor, which Amy insists is “just not the same” as the “real” Doctor. She feels, perhaps rightly, that the new Doctor is an imitation of the one she knows, and that the ganger’s memories of her are meaningless.

Amy feels this way because she believes the copies of the Doctor can be distinguished from one another, which is true in a certain sense. The Doctors do wear different shoes. Prior to his formation, the Flesh Doctor was nothing but a bunch of haphazard liquid. After copying the Doctor, though, the Ganger Doctor has become an exact replica of the original Doctor, down to the molecular level – meaning that, after a certain point in time, the two can no longer be differentiated. This point’s driven home quite hard when the Doctors reveal that they switched shoes earlier on, in order to experiment on Amy and observe the extent of the Gangers’ similarity to living beings.

Once this revelation is made, Amy is understandably shocked – the Doctors knew that she would stubbornly refuse to accept the copy as an exact replica, and deliberately tricked her. However, what’s even more shocking is that she was fooled in the first place. If there was ever any doubt that the Gangers were truly identical to their flesh-and-blood counterparts, Amy’s refusal to accept the “Ganger” Doctor – who was actually the original – as the man he claimed to be (and arguably was) obliterated it.

At the end of the episode, we feel sympathy for Amy – she’s been deceived, shocked, and frightened. However, we also feel that we’ve learned a lesson. Amy’s taught us, through the Ganger Doctor, that these replicas are exactly as human as they believe themselves to be – making the deaths in the episode, and particularly that of the Ganger Doctor, all the more sobering.

Fatalism in Terminator 1 & 2

The struggle to change the future in Terminator 1 & 2 resonates with humanity’s desire to control its own mortality. Delaying the inevitable is analogous to the human belief that death can be prevented, and the Skynet apocalypse is treated as just such a death. Sarah Connor and young John have differing approaches to the apocalypse, which mirror their maturity and their views of death.

Most prominently, Sarah Connor in the original movie is an innocent, spunky character who is dragged into a battle for the future as casually as a trip to the grocery store. This event marks her loss of innocence in the series – she is, afterward, forever corrupted by her knowledge of the dystopian future of Skynet. In Terminator 2, this knowledge drives her. Indeed, it is arguably her only motivator: even her love for John is an extension of her desire to save the human race of 2029. John is, to her, an object to be protected. Her transition to adulthood made her conscious of the fact that she will one day die, as many adults realize in middle age; also like them, she has become obsessed with stopping death – or, in her case, Skynet.

Young John plays his part beautifully as well. Ever the optimist, he exudes all the idealism of a small child newly introduced to the world of men. He comes up with simple black-and-white solutions to grey problems – “You don’t ever kill people. It’s wrong.” – with no thought to when the rules should be broken. This thoughtlessness is mirrored in his approach to the Skynet apocalypse. John’s actions, while they help prevent Skynet, don’t treat the apocalypse as inevitable – he approaches his goals with naive hope, in contrast to Sarah’s weaponized, anarchist, destructive approach. Just as children understand that death occurs but fail to think about it, so does John work against Skynet while refusing to accept its eventual reality.

What significance does memory hold in the film?

Memory serves at least three major functions in Blade Runner.

1. precursor to consciousness

2. ethical standard

3. indicator of humanity

First and foremost, memory in Blade Runner is treated as a possession without which it is impossible to be considered sentient. Humans are, of course, considered conscious by default. Replicants, on the other hand, are limited to a four-year lifespan of slavery, and are only allowed to collect a toddler’s worth of experience before they die. Thus, replicants are broadly considered inferior to human beings. They aren’t even treated as second-class citizens – they are treated as animals. This abuse is partly due to their laboratory origins. When Deckard meets Rachael – a replicant supplied with all the memories of a human being – later in the film, her behavior is subtly different from expected human behavior, in spite of the memories in her head. This suggests a fundamental difference between humans and replicants.

An ethical standard is the second role that memory plays. A central plot device in the film is Deckard’s quest to (re)gain his humanity. He gradually does so as the film progresses, thanks in no small part to the various ethical dilemmas that he faces. (Should replicants be punished for wanting longer life? What happens if I mistakenly kill a human being? Can/should I love/lust after a replicant? etc.) Deckard evaluates these questions, it would seem, by examining both himself and his experiences to arrive at a conclusion. “Experience: replicants have killed people. Therefore they should be punished. Ex: I’ve never been wrong about a replicant before. Therefore there is no risk of a mistake. Ex: This replicant has memories and wants to have feelings. Therefore I can teach her to love.” Deckard’s experiences provide a standard against which he can weigh his actions. The same is true for Roy – in his short time, he’s seen only abuse at the hands of human beings, so he feels justified in killing the source of his suffering.

However, the most important thing memory does in Blade Runner is provide a criterion for humanity. Obviously, every human in the movie has a lifetime’s worth of memories stored up, and we take their status as human for granted. The audience is less sure about replicants, although it’s clear upon further examination that one of the movie’s goals is to convince the audience that the replicants are human. Look no further than Deckard himself. He looks up information from Rachael’s past in a “file”, which allegedly contains information about the memories of all replicants. Earlier in the film, Deckard is seen daydreaming about unicorns. At the very end, he encounters an origami unicorn left in his path by Gaff, the detective responsible for looking after him. As this unicorn was placed in a location Gaff would not normally have known about, the strong implication is that Gaff found unicorns listed as personally significant in Deckard’s file – which implies that Deckard is in fact a replicant, imbued, as Rachael was, with human memory. This revelation also changes Deckard’s quest from “regaining his humanity” to “gaining it in the first place”, and explains how, with time, Deckard becomes more human as he accumulates real-world experiences and real-world memories. As Deckard is arguably the most human character in the film, memory is clearly the film’s absolute indicator of humanity – it’s what Deckard quests for all along.