
Judging artificial intelligence on its prospects for judging us

Court is now in session, and author Robert J. Sawyer makes the case for leveraging AI to improve ethics and fairness in civil society.

With 23 novels under his belt, as well as scores of short stories, scripts, treatments and more, Hugo and Nebula Award-winning author Robert J. Sawyer is not shy about exploring the technological and cultural landscape of our future. Among the many works of his remarkable and widely acclaimed career is the WWW trilogy (as in Wake, Watch and Wonder), in which a blind teenage girl uses advanced medical technology to augment her vision, only to discover Webmind, an emergent super-AI consciousness that uses the Internet to grow. Across the series, Sawyer investigates the consequences such a super-AI could unleash upon society, and how humans might respond.

For his perspective on how humanity might relate to future artificial intelligences and what shape those interactions may take, we asked Sawyer about the dynamics of judgment and control; he also shared his overall sentiment on AI development.

“It is demonstrably true that we have failed in so many areas to expunge prejudice and simple human fatigue and error from our systems. We can do that when we actually have across-the-board artificial intelligence dealing with a lot of these issues.”

– Robert J. Sawyer

ANSWERS: Do you think we will achieve artificial general intelligence (AGI) this century? If so, how do you see it taking shape? And how can it be contained and managed by humans?

ROBERT J. SAWYER: To answer that question, we must first answer the question of what we actually mean by artificial general intelligence. There are two definitions. Sometimes, it is conflated with “strong AI,” and depending on who you ask, strong AI means a machine that is conscious. If it has to be conscious, eventually we’ll figure that out, but we’ve made zero progress towards it – not a single one of our current machines is conscious. I’m not going to say her name because she’s sitting right here, but I have an Amazon assistant on my desk that responds to me, holds conversations with me, and is blissfully unaware of the fact that it has had a conversation with me. There is no inner life whatsoever to that AI, or to any other AI in the world right now as far as we’ve been able to determine; not an inkling of what we would call consciousness.

The reason for that is very simple. We don’t know what gave rise to it in humanity. Therefore, reproducing it in lines of code is the same thing as saying to a programmer (no matter how good that programmer is), “Reproduce artistic genius for me. Reproduce poetic inspiration for me. Reproduce romantic love for me.”

We don’t know how to do it, so we don’t know how to code it. In that sense, I think we’re nowhere near having artificial general intelligence in the strong AI sense, the way academics use it to refer to machines capable of experiencing consciousness, of having an inner life. Not Watson, not Deep Blue, you name your favorite one, it ain’t doing it. There’s nobody home.

In the weaker sense of being able to perform any intellectual or cognitive task that a human being can perform, absolutely we will have AGI; in the near future, it will be a reality for sure. There’s no question that, with computing growth being exponential as described by Moore’s Law, we are absolutely going to have AGI, and on a horizon close enough that business and the general public should be concerned about it right now.
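The exponential growth Sawyer invokes is easy to make concrete with back-of-envelope arithmetic. A minimal sketch, assuming the commonly cited form of Moore’s Law (a doubling roughly every two years); the figures are illustrative, not predictions:

```python
# Illustrative only: how a capability metric compounds when it doubles
# every `period` years, per the commonly cited form of Moore's Law.

def doublings(years, period=2):
    """Number of doublings that occur over `years`."""
    return years / period

def growth_factor(years, period=2):
    """Overall multiplier after `years` of exponential doubling."""
    return 2 ** doublings(years, period)

# Over a 20-year horizon, a metric doubling every 2 years grows
# by a factor of 2**10.
print(growth_factor(20))  # 1024.0
```

The point of the arithmetic is the shape of the curve: ten doublings yield a thousandfold change, which is why a distant-seeming capability can land inside a planning horizon that matters to business today.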

ANSWERS: If the predictive abilities of AIs outgrow human abilities, what impact does that have in terms of judgment capabilities? If it comes down to a judgment matter between an AI and a human, how can humans maintain control?

SAWYER: Of course, there have been many science-fictional scenarios, one of the most famous being Robert Wise’s film The Day the Earth Stood Still, in which an advanced civilization has turned over to machines all authority in matters of violence in society – and lives in peace because of it. Science fiction frequently portrays both the positive, utopian and the negative, dystopian versions of that scenario.

Here is our reality right now: if you go into a court of law and your skin is black, you’re going to get a harsher penalty than if your skin is white. If you go into a court of law and you’re heard by the judge before noon, you’re more likely to get a lenient sentence than if you’re heard by the judge in the afternoon. These are statistically proven anomalies in our court system.
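The kind of anomaly Sawyer cites shows up with very simple statistics. A minimal sketch, using invented numbers purely for illustration (these are not real court records): compare leniency rates between morning and afternoon hearings.

```python
# Illustrative sketch with synthetic data: how a morning/afternoon
# sentencing disparity would surface in hearing records.
from collections import Counter

# (hearing_slot, outcome) pairs -- invented counts, not real data.
hearings = (
    [("morning", "lenient")] * 65 + [("morning", "harsh")] * 35 +
    [("afternoon", "lenient")] * 45 + [("afternoon", "harsh")] * 55
)

counts = Counter(hearings)

def leniency_rate(slot):
    """Fraction of hearings in `slot` that ended in a lenient outcome."""
    lenient = counts[(slot, "lenient")]
    harsh = counts[(slot, "harsh")]
    return lenient / (lenient + harsh)

print(leniency_rate("morning"))    # 0.65
print(leniency_rate("afternoon"))  # 0.45
```

With identical caseloads, those two rates should be statistically indistinguishable; a persistent gap like this one is exactly the fatigue-driven anomaly the studies Sawyer alludes to report.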

When we hold human judgment up and say, “How are we going to keep control? How are we going to make sure that human judgment is the overriding thing?” (in other words, how are we going to make sure that our prejudices are not overruled by judicious thought), the answer is: we should not. Those prejudices are bad things. What we require is justice that is equal in all circumstances, tempered with a humanity that is equal in all circumstances.

In other words, if the juvenile offender happens to be the son of the state governor, he doesn’t get any more lenient treatment than the juvenile offender who just came out of the projects. Machines will be capable, in their judgments, of ensuring that both defendants get humane rulings. A machine judge will say to both of them, “Well, it’s your first offense, so here’s a stern lecture: go and try not to screw up again. You’re at the beginning of your life. We want you to have a positive experience in life. Learn from this mistake. Go – case dismissed.” That should be even-handed across the board.

This desperate desire to make sure that humans can override machines really should be turning the spotlight back onto the flaws of human judgment, and not on the flaws of machine judgment. That said, there are all kinds of places right now where we have machines doing things algorithmically based on legislation (traffic ticket violations, for instance, by automatic cameras and so forth) that may not take into account a grace period.

Consider the traffic-light scenario. The light has turned, and you’ve got a second or two of human reaction time – the moment when you, as a driver, decide whether or not to gun it to make the light. Now suppose there is a cop standing by the side of the road who will decide whether or not to ticket that driver, ideally without any idea of the gender, age or ethnicity of the person driving the vehicle. The decision is based on the behavior alone: the cop determines whether or not the person could have safely stopped, and tickets the driver or not accordingly.

We have all sorts of red light cameras right now where we have gone from legislation (the intent) to the reality of a programmer saying, “What definable algorithmic step-by-step instructions can I hard code?” We’ve gone from legislators to programmers making fundamental decisions about how we mete out justice or how we control a variety of other things in our lives. That’s where the problem is.
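The gap Sawyer describes, between a legislator’s intent and a programmer’s hard-coded rule, can be sketched in a few lines. This is a hypothetical red-light-camera check; the one-second grace period is an invented parameter, not taken from any real statute or device:

```python
# Hypothetical red-light camera logic. The grace period is precisely the
# kind of nuance a legislator might intend but a programmer must choose
# a hard number for.
GRACE_PERIOD_S = 1.0  # invented value: tolerated delay after the light turns red

def should_ticket(seconds_after_red, crossed_stop_line):
    """Return True if the camera issues a ticket.

    Without the grace period, any crossing after the light changes is
    ticketed; with it, a brief human reaction time is forgiven, roughly
    mirroring the judgment a cop at the roadside would exercise.
    """
    if not crossed_stop_line:
        return False
    return seconds_after_red > GRACE_PERIOD_S

print(should_ticket(0.4, True))  # False: within the grace period
print(should_ticket(2.5, True))  # True: clearly ran the light
```

Everything a roadside officer would weigh (could the driver have stopped safely?) has here been collapsed into one threshold constant chosen by a programmer, which is exactly the shift from legislators to programmers that Sawyer is pointing at.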

It’s not that machine judgment per se is flawed; it’s that we have programmers who have no training in matters of ethics or even, in many cases, in formal logic. They may have been taught programming, but that doesn’t mean they’ve been taught formal argumentation. We have programmers who are not interested in nuance and subtlety. They’ve been brought up in a programming environment that says 5 lines of code is better than 100. Yet our judicial system is exactly the opposite – a hundred lines of well-thought-out legislation that cover the contingencies are better than five.

“They’ve been brought up in a programming environment that says 5 lines of code is better than 100. Yet our judicial system is exactly the opposite – a hundred lines of well-thought-out legislation that cover the contingencies are better than five.” – Author Robert J. Sawyer

ANSWERS: How would you describe your overall sentiment towards AI development? Are you hopeful, concerned, fearful? And why do you feel the way you do?

SAWYER: I think all of those are good words. First, I am hopeful because, as somebody who is deeply interested in justice and civil rights, it is demonstrably true that we have failed in so many areas to expunge prejudice and simple human fatigue and error from our systems. We can do that when we actually have across-the-board artificial intelligence dealing with a lot of these issues.

I’m scared because we don’t know if our agenda and their agenda will align. Right now, there’s no hidden agenda in any computer that exists today, but at some point there will be machines that are self-aware and have an inner life. Our agendas and their agendas will overlap, but they won’t be completely congruent. We have things that we care about, and they will have things that they care about.

You asked earlier how we control, constrain and manage this sort of thing. Basically, we have to grow up in this area. Isaac Asimov, the great science fiction writer, proposed his Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence except where such protection would conflict with the First or Second Law. They seem to be great, reasonable, perfectly fine laws. Wrong.

They are precisely the credo that we have always tried to impose upon those we historically viewed as lesser beings than ourselves. These laws cannot be the opening gambit in our negotiation with other sentient life forms. “Guess what? You take our orders. Guess what? You never hurt us. Guess what? You’re my property.” We cannot use Asimov’s Laws.

We cannot set out to create new life on the basis that we’re going to control, enslave and manage it any more than two human beings can get together and say, “Let’s have a baby, but here are the rules. It will always obey our orders. It will be our property. It will never hurt us.” Guess what? It becomes an autonomous, self-directed entity. At some point it equals you in those capabilities, and in fact, as you age and decline, it eventually surpasses you in those capabilities.

We come back to territory humanity has dealt with before, yet we treat it as de novo – completely new – as soon as machines enter the equation. No, we’ve dealt with this in the notion of parenthood. We’ve dealt with this in realizing that there is no ethical circumstance in which you can have a slave. We don’t allow slavery, and we don’t allow the servitude or bondage of children by their parents.

Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.