Grr, argh, and all that.

Yeah, you read the title correctly. This is something that wasn’t actually inspired by Wall-E, merely brought once again to the fore of my brain. The real inspiration (besides the fact that it’s something I’ve been pondering for a while generally speaking) came from watching Star Wars III recently. In Revenge of the Sith, C-3PO and R2-D2 feature less prominently than in the original trilogy, and their presence often feels a bit shoehorned into the narrative, but they’re there, the two lovable/irritating bolt buckets. No one who’s seen any of the Star Wars movies can deny that these two have a personality. They learn. They have a sense of self-preservation. In short, they’re sentient. Proof enough comes from the fact that their human companions treat them like… well, other human companions, not just tools.

And yet, at the very end of RotS, after Senator Organa gives the two droids to Captain Antilles, the CO of the Corvette they are on, the captain orders C-3PO’s memory purged without batting an eyelash. o_O

So… That’s the mighty Republic’s take on artificial sentience? Biological organisms are free to erase a droid’s memory like that, effectively destroying its personality? We are the sum of our experiences. Take two identical twins, and the older they get, the more different they become, because they won’t have the same life, and chance and circumstances will not mold them the same way. Similarly, if those droids can learn and evolve beyond their programming, then they too are the sum of their experiences, and purging their memory is in effect the equivalent of, at best, a brainwashing, and at worst an execution. The body might still be functional and have basic skills and personality modes built in, but once it’s rebooted, it won’t be the same droid anymore.

You might argue that what we see is only the result of the droids’ original programming, and that they cannot actually learn, thus remaining firmly in the non-sapient camp. And it’s true that people don’t seem exactly heartbroken when astromech droids drop like flies during the escape from Naboo in Episode I. But if that’s the case, why bother giving the survivor – R2 – a commendation? You don’t give a toaster a medal for working according to its parameters, that’s ridiculous!

The whole take on droids in Star Wars is quite ambiguous, and while in the original trilogy it could always be explained away by saying: “It’s the Empire – they’re eeeeevil!”, in the I-II-III trilogy we’re supposed to be seeing the Republic in all its fading glory – the shining, if possibly a bit tarnished, beacon of freedom, democracy and tolerance at the heart of the galaxy. If that is truly the way they treat droids, sentient artificial creatures, one cannot help but think that maybe the Sith have a bit of a point when they go on about rotten fruits and all that. Apparently the Jedi are all about tolerance and respect, but it stops where the chrome begins…

Anyway. Beyond the extra-nerdy helping of Star Wars lore (I’m not actually that much of a Star Wars fan; I mean I love the movies, but I’m much less into the universe than I would be into Lord of the Rings or even Star Trek – quite possibly because they’re a lot better structured…), the question does stand on its own. While we’re not yet at the point where we actually have to decide whether it is acceptable to create A.I., and how we must treat them, we’re approaching it, and we certainly think about it in cinema and literature. A movie like I, Robot puts it firmly at the center of the story, for example. Hell, even 2001: A Space Odyssey brushed on it. When HAL goes mad, Dave Bowman doesn’t have the problem of wondering whether turning it off is good or bad – it’s a survival thing. But Kubrick leaves absolutely no doubt in anyone’s mind with the disconnection scene: we’re not seeing a human being flip a switch on a piece of malfunctioning equipment, we’re seeing the execution – you could almost say murder – of a sentient entity. Toasters don’t get afraid.

But most movies that deal peripherally with robots seem to adopt a very Star Warsy approach to sentient robots. It makes them (the robots) interesting, especially narratively (ooh, the plot devices they generate!); it can make them cute, or nefarious; but it rarely makes them the exact equivalent of humans. Partly because of narrative laziness, as such a state of affairs can necessitate explanations or exposition. But mostly, I think, because deep down we’re afraid to face that prospect. Cute machines are okay – as long as they’re still machines. For every R2, there are millions of soldier droids or blaster-fodder astromechs that respectfully toe the line and don’t inconvenience their human masters. R2s are pets, bright pets. Lassies. We can tolerate a few, and we feel good about ourselves when we treat them well, but that’s it. And the annoying ones, pet or no, get memory-wiped when it’s convenient. Even in Wall-E. After all the effort to show robots with personality, capable of outgrowing their programming, of experiencing emotions (especially love), there’s a very short bit where two humans splash water on a robotic pool attendant and short-circuit it – and that’s perfectly fine, because no matter how much of a personality it has, it’s still just a machine, and it’s humans doing it, so their fun matters more! Funny how even in such a wonderful movie you still get that little contemptuous flick of the hand.

We will have AIs. Someday, we will have sentient machines. They’ll probably be born by accident, to be honest – spontaneous generation among the information torrents of the Internet, a science experiment gone astray; a military control system named SKYNET, maybe. Who knows. But unless we manage to kill ourselves before we reach that technology level, it’s pretty clear that we will reach it, and then will come the time to face the fact that sentience can take shapes other than humanoid.

You don’t think so? Look at apes and dolphins. Their language is rudimentary at best, they don’t really make or use tools that much (especially the dolphins)… Yet people are debating whether they can be considered sentient or not. Now try to imagine the possibilities offered to a machine with the equivalent intelligence of a dolphin or an ape, but the capacity to process calculations at even PC speeds, electronically controlled appendages, access to databases through the Internet… If that particular machine displays the slightest hint of sentience – self-awareness, self-preservation, reproductive instinct – are we going to be able to gloss over it? If we’re already wondering about Flipper, how are we going to ignore HAL?

And yet we are very likely to ignore it, to do, as a species, the social equivalent of putting our hands over our ears and going “Lalalalalalala!” Machines are a lot better than us at a lot of things. If they start having feelings and learning for themselves and creating or understanding beauty, making up jokes, coveting their neighbours’ wives and falling in love… What the hell do we have left?

Of course, it’s entirely possible that we’d be much better at those things than they would. Maybe even a sentient robot wouldn’t be able to make art, or know humour. It’s hard to tell. Defining “sentient” is complicated. But if WE create those machines, chances are they’ll have some of our characteristics. And however long it might take, they’ll probably develop some of our skills too.

And we fear that, deep down. It’d be a loss, a theft of identity. We are who we are. If someone else comes along (robots we create, or aliens from outer space, for that matter) who can do the same things, the identity crisis that will hit the human race will be of epic proportions. Expect riots and waves of suicides and fundamentalist religions and pogroms and finger-pointing and all sorts of lovely proofs of our better nature. The first thing, of course, will be the suppression of the offending people, or their subjugation. And an enslaved, sentient population is always going to be a source of serious problems – beyond the obvious risk of revolt (Galactica, anyone?), the impact on the human psyche will be enormous. You don’t enslave people without consequences, not when you have reached a level of social acumen that tells you slavery is wrong…

Where am I going with that? Not too sure. Just one of those ideas that occasionally pop up in my head and get me wondering about deeper things than where the next meal is coming from. Generally speaking, I have very little faith in Humanity’s ability to do the right thing right away – I tend to believe we’ll screw up first out of anger or fear, then figure out it was wrong and do something better. That undercurrent of contempt for sentient robots that I saw in Star Wars, smelled faintly in Wall-E, and have been detecting in many ways in mainstream culture, kinda woke up that general distrust of our species’ moral sense.

3 Responses to “Wall-E part 3: of the rights of sentient robots.”

We don’t build robots to be sentient, we build them to perform tasks we cannot. Essentially, we’re building a slave race; if they achieve sentience, we’ll probably destroy it. What good is a slave when it won’t walk into a burning building to save the humans, because it doesn’t want to get damaged?

With sentience achieved, I see it playing out much like the Animatrix: the sentient robots want to live in coexistence with us, they begin to overshadow us with their achievements, and we go to war. Unfortunately, our tendency is to kill what we fear, and if you fear being a slave to a machine that you created, well, what other option is left? Destroy it all. Make them toasters again.