This is in striking contrast to Tony Veale's presentation at the AAAI Spring Symposium on Ethical and Moral Considerations in Non-Human Agents, where he insisted that anyone who thought he was responsible for his Twitter bots didn't understand agency. As my own presentation explained (sadly, Veale did not attend the meeting; he only skyped in for his own talk), moral agency is a strict and minuscule subset of agency. As I've argued frequently in this blog (most recently in a post on the UK's Principles of Robotics), it's not that we can't conceive of a legal or moral system where AI is a responsible agent; it's that keeping AI as a human or corporate responsibility is the best option both from the perspective of human society, and from that of any potential (so far unbuilt) AI that might suffer due to its unequal relationship with its creators. We are obliged to make AI to which we are not obliged, and while individuals may violate that obligation, governments and societies can refuse to condone that, so that such systems would never be legal products.

What's made me blog is a tweet by an AI colleague I greatly respect, Alex J. Champandard, founder of http://aigamedev.com/. Alex said:

This is interesting, because I believe Alex is confusing an authored artefact (the bot) with a tool for authoring artefacts (Photoshop). And I can see why this confusion might be made, since one way to phrase my own claim in the second paragraph above is that AI should be seen as a tool. But I don't think "tool" is the right metaphor for AI artefacts. AI itself is a tool, but we use it to create intelligent prosthetics which proactively pursue goals we have determined for them (either directly, or by determining how they will determine their goals).

An AI artefact is indeed an agent; it changes the world. Intelligence transforms perception into action; that's how it's defined, and agency is the capacity to change an environment. Moral agency is being responsible for that transformation. Chemical agents are never morally responsible, 2-year-old children are only responsible in limited circumstances. I firmly recommend that AI bots should also never be responsible, though ensuring this requires an effort of policy. It also requires powerful companies and individual trend setters to show leadership.

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

And I must respectfully but strongly disagree with my friend Alex.

It doesn't limit creativity meaningfully to take responsibility for your creations.

Doing so with twitter bots is within the established state of the art.

Artefacts we create with AI embedded are not the same as products we sell to allow others to make such creations. Had Tay been a bot-creation tool and 4% of its users created abusive bots, then we might condemn those 4% of users, but still talk about regulating the tool. But Tay was a single bot that many people were encouraged to interact with, and as such it was strictly Microsoft's creation and responsibility.
