Granted, droids have always been prominent in the saga; C-3PO, R2-D2, and more recently BB-8 are among its most recognizable characters. But we’ve only had hints at their circumstances, not a fully developed subplot. The first film shows us that droids aren’t welcome everywhere, and it introduces the restraining bolt, an external dongle that restricts certain behaviors. More recently, K-2SO of Rogue One reminds us that a droid’s will and a droid’s programming can be at odds. That film also gets a fun gag by playing our tendency to view mass-produced items as interchangeable against the droids’ sense of individual identity.

Now, in Solo, we meet L3-37, a droid companion of Lando Calrissian who gets swept up in Han Solo’s adventures by association. But she’s not just along for the ride; she gets her own story, which builds to a robot revolution reminiscent of real civil rights and workers’ rights movements. In the middle of a heist at a mining outpost, L3-37 removes the restraining bolt from a local droid so it will stop interfering. That droid frees other droids, all of whom begin to subvert the operations of their… employers? Masters? Owners? Restrainers, certainly. Given that the mine also uses biological slave labor, calling its proprietors ‘masters’ doesn’t seem too strong.

Since the series is all about liberation from tyranny, we naturally cheer for this outbreak of mechanical freedom. Yet that sympathy is at least a little curious, given our fears of a robot uprising among the droids we actually know. At the end of the day, we expect the technology we create to do as it is told. Likewise, even among the heroes of Star Wars, there are complications to the human-droid relationships. There’s no getting around the fact that R2-D2 and C-3PO are purchased by Luke Skywalker’s Uncle Owen; we are reminded every time C-3PO says “Master Luke.” Granted, he uses the same tone Alfred Pennyworth reserves for “Master Bruce” Wayne, whom few would mistake for a slave master. Still, Bruce Wayne doesn’t own Alfred. So we were probably due for a Star Wars film that acknowledges the complicated assumptions built into our sci-fi storytelling.

It’s natural for our expectations to be shaped by previous experience. Mechanical and electronic devices to date are tools with no awareness of their own existence, no subjective experience, and no desires or intentions that need to be considered. All sentient and sapient intelligence we encounter is biological. But we can’t necessarily extrapolate from there to all possible electromechanical entities and all intelligent entities. That limitation of learning crops up frequently; as we’ve discussed before, the learning algorithms we increasingly use to aid decision-making often wind up perpetuating the biases inherent in our prior, unaided decisions. To counteract that tendency, we are developing new tools to detect when such bias is influencing outcomes. It’s only reasonable to consider whether the stories we tell, which can also influence how we make decisions, are likewise reinforcing unwanted bias.
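To make that bias-detection idea concrete, here is a minimal sketch of one of the simplest such checks, the demographic parity gap: comparing how often an algorithm hands out a favorable outcome to members of different groups. The data and group labels below are entirely hypothetical, invented purely for illustration; real auditing tools are considerably more sophisticated.

```python
# Minimal sketch of a demographic parity check.
# Decisions are (group, outcome) pairs; outcome 1 = favorable.

def selection_rates(decisions):
    """Favorable-outcome rate for each group."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between groups.

    A gap near 0 suggests outcomes are independent of group
    membership; a large gap flags possible bias worth auditing.
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: group "A" is approved far more
# often than group "B", so the gap should be large.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]
print(selection_rates(decisions))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions))  # 0.5
```

A gap of 0.5 means one group is approved at a rate 50 percentage points higher than another, exactly the kind of disparity a model can learn from biased historical decisions and silently perpetuate.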

Which leads me to wonder: what do those restraining bolts do, anyway? Why are they even necessary? Droids are programmed; dialogue across all the films makes that perfectly clear. If you need a droid to perform a specific task in a reliable fashion, why program it with enough general intelligence to decide it doesn’t want to do that task? Conversely, if you need droids to perform tasks complex and varied enough to require robust general intelligence, how can you restrict their behavior while still allowing them to do their work? In other words, are the only circumstances in which a restraining bolt seems necessary precisely the same ones where its use would be abhorrent?

To inject a more theological spin, why grant something free will only to take it away again? Or why let it think it has free will when it really doesn’t? I’m not going to pretend that I can resolve all your questions about free will. But I do think the plight of the droids can help us frame those questions differently, which might help us think about our answers in a new way too. For example, do you think God programmed us? Do you think religion is a form of restraining bolt imposed by God (or humans) to steer our behavior and thinking away from what he doesn’t want? Or is sin the restraining bolt, preventing us from living freely as God intended? Does that freedom have to be programmed in directly, or does it arise as a consequence of some other aspect of ourselves or the universe in general? I’d love to hear your thoughts!

Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two elementary school students, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more distinctive literal hats: a British bobby helmet, captain’s hats (of varying levels of authenticity) from several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer’s cap. His monthly Science in Review is drawn from his weekly Science Corner posts, which appear Wednesdays at 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.

2 Comments

I wish I were a little closer and could sit down for a talk. This is definitely a topic that needs more time than a brief comment, and one that we as Christians need to think and talk seriously about. One of my students this semester is in a research group that is trying to figure out how to teach AI (in particular, self-driving cars) to make ethical decisions – it is a very real and challenging issue.

A chat sounds lovely. By chance, will you be at the upcoming ASA meeting?

You mentioned self-driving cars. The recent revelation that emergency braking was disabled in the fatal collision in Arizona has been on my mind a bit. I can understand how erratic braking, or more precisely, patterns of quick braking that differ from the patterns of typical human drivers, can pose a safety risk of its own. At the same time, it seems like a situation where the self-driving car was not even given the ability to implement an ethical decision, whether or not it could make one.

Do you imagine we might be able to teach ethics to AI directly and allow it to work out the applications? Or do you think we need to foresee specific scenarios, work out the ethical responses, and teach those to AI individually?
