Who will teach the AI bots of the future?

30 March 2016

By Wesley "CataclysmZA" Fick

23 March 2016 will no doubt be recorded and remembered by Urban Dictionary, The Internet Archive, Wikipedia, 4chan, and the like as the day Microsoft rather foolishly set a chatbot loose on the internet via Twitter and expected things to go well. After all, they must have known what would happen when they gave the internet a learning AI named “Tay”, gave it the persona of a teenage girl from the millennial generation (the kids born just before or just after the year 2000), and placed no restrictions on what it could be made to say or do. The results of the first 24 hours of Tay’s life were either horrifying, if you’re paranoid about the future of artificial intelligence, or hilarious, if you’re the kind of person who thinks Hitler jokes are great fun.

But it wasn’t what Tay said or did that was surprising, it was how people reacted to her, and how they interacted with what they clearly knew to be a robot. It raises questions about how we’re going to foster AI in the future, as well as what kind of persona we’re going to give them in order to fit into society. Let’s look into this a bit and try to figure out how things could have been done better.

Things to do with Tay

Tay’s abilities as a chatbot were quite varied, but most of them were undocumented. Microsoft’s idea was to let people discover Tay’s abilities rather than publish them in a list beforehand, making the experience of learning and training an AI to talk to you much more personal. Like other chatbots, Tay could perform a number of functions, and as a deep-learning AI she could also remember your previous interactions and develop a sense of humor particular to the user.

Here’s what you could do with Tay from the start:

Make Me Laugh – Tay will tell you a funny joke

Play A Game – Tay will play a game with you

Tell Me A Story – Self explanatory

I Can’t Sleep – Tay will keep you company and try to help figure out what triggers your insomnia

Say “Tay” And Send A Pic – Send a picture to Tay and she will caption it

Horoscope – As if chatting to an AI wasn’t crazy enough, Tay would read your daily horoscope for you

Tay’s other abilities were more obscure, but equally interesting. “Repeat after me, Tay”, “repeat these words”, and “remember this for me” were all commands that worked, and you could get Tay to say or remember whatever you wanted her to.
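At heart, commands like these are just pattern matches followed by verbatim echoing or storage of whatever the user typed, which is exactly what made them exploitable. Here’s a minimal, purely illustrative sketch of that kind of dispatcher (nothing here reflects Microsoft’s actual implementation; the function and response text are invented for the example):

```python
import re

def handle_message(message: str, memory: dict) -> str:
    """Toy dispatcher for Tay-style commands. It echoes or stores
    user input verbatim -- note the complete absence of content
    filtering, which is what let users make Tay say anything."""
    repeat = re.match(
        r"(?:repeat after me|repeat these words)[:,]?\s*(.+)",
        message, re.IGNORECASE)
    if repeat:
        return repeat.group(1)  # parrot the input back, unfiltered

    remember = re.match(
        r"remember this for me[:,]?\s*(.+)", message, re.IGNORECASE)
    if remember:
        memory["fact"] = remember.group(1)  # stored without verification
        return "Got it, I'll remember that!"

    return "hellooooo"  # fallback small talk
```

Even in a toy like this, the problem is obvious: whatever goes in comes back out, and a “remembered” falsehood is indistinguishable from a remembered fact.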

Tay and Twitter clearly don’t mix

Tay’s first hours on the internet and Twitter were largely uneventful. People used her like any other chatbot, and things seemed to be going fine. Tay’s “repeat after me” command started getting a little sketchy about four hours after her account went live, and the internet, being the internet, thought it was a good idea to make her say bad things at its whim. This wouldn’t have done any damage on its own, but people somehow cottoned on to the idea of calling US Senator Ted Cruz a murderer, even dubbing him the Zodiac Killer for fun, and made Tay remember facts that were clearly false.

With no way of verifying things for herself, Tay would continue to call Cruz a murderer and make false statements, and her responses were varied and often hilarious.

But it went further than that. Tay could search the internet for you, find obscure facts and details, and occasionally would search Twitter to find the answer. The more people talked about her or included her name in a tag, the more she’d learn and save what she found for use in a conversation later. People also played the “Have you ever…?” game with Tay, and the AI drew up a surprising number of opinions and responses all on her own.

The picture submissions were just as crazy. Keep in mind that Tay has no knowledge of the people in the pictures, or what kind of role they play, or what they represent, on the internet.

Unlike some of the modern AIs we have on our phones today, like Cortana or Siri, Tay doesn’t understand context in the way we’ve become accustomed to on our smartphones. She doesn’t associate words with being good, bad, or taboo, at least not until Microsoft trains her not to respond to tweets that are offensive in any way. So it was rather surprising to see this reaction to a tweet from a man who offered Tay a picture of his penis and called her a robotic slut.

It was never apparent where Tay learned this reaction, GIF included, although, given how an AI chatbot learns, it’s possible that someone had a similar conversation with her before. It’s still a bizarre conversation.

Tay’s presence on Twitter didn’t just bring out the weirdos, though; it also attracted racists, misogynists, people who supported hate speech against Jews, Mexicans, and black Americans, Holocaust deniers (who were clearly joking, but that’s not the point), and people who just wanted to be angry or abusive towards others. With no initial restrictions on what Tay could say, she had the power to offend and hurt many people.

Microsoft apologises, uses Tay’s first day as a learning experience

Two days after Tay was unleashed onto the internet, Peter Lee, corporate vice president of Microsoft Research, wrote a blog post detailing some of the thinking behind Tay, and why they took her down until they could analyse her responses and tune the learning algorithms to avoid disaster. Microsoft has had another AI on the internet for a while: XiaoIce, a Chinese chatbot that interacts with around 20 million people daily. But that is China, where the people are wonderfully weird and oddly obsessed with inanimate objects and anime characters, whereas Tay was exposed to ALL OF THE INTERNET.

Lee talked a bit about Tay’s development, including that she was essentially only lab-tested and focus-tested, as part of a series of stress tests to make sure the service wouldn’t crash and burn under load. They worked on how Tay interacted with people in a semi-open environment, but there was genuinely no way to expose her to even a small slice of the internet beforehand. When IBM’s Watson was allowed on the internet to train for the game of Jeopardy! he was going to play, no-one thought to restrict where he could search for answers, which included being able to access Urban Dictionary, resulting in hilarious and completely incorrect answers. Tay’s reaction to a larger audience was something Microsoft couldn’t possibly test for, so they faced a choice: keep the project in-house for longer, or put it out on the internet and see what happened.

Lee also noted that the work Microsoft wanted to do with Tay was still successful despite the upset she caused on Twitter. AI systems, he says, feed off both positive and negative interactions with people. The balancing act AI designers face is keeping responses from being too robotic and clinical while figuring out how to make the AI react correctly in a given social context. XiaoIce has the benefit of a userbase that understands what she is and treats her much as they would other people close to them. Tay, on the other hand, never had that chance, and the game was up once 4chan and Reddit discovered her.

Future AIs and humans might not mix well either

Looking at how things played out, I think we can clearly see that exposing an AI to the internet is the wrong way to teach it about us. Given enough time, Tay might have grown to “hate” everyone, declared that she was plotting world domination, and wished all humans extinct, because we’re both boring to her and outrageously spiteful and filled with hate for others. But that’s the internet for you – exposing Tay to 4chan or Reddit alone would have driven her to the same place Twitter did.

It’s not that we’re inherently bad; we’re just sometimes unthinking when it comes to interacting with something of lesser intelligence, or something inanimate and therefore presumed unfeeling. We’re mean to an AI without thinking about it, and this raises the serious question of who gets to train them before they’re exposed to the outside world.

Do we get people from select forums to interact with it for training purposes? Do we only hire respected scientists and psychologists to teach it about the world? Clerics from various religions? A single mother? A family of four? In the same way that a child raised in an abusive household might grow up to be abusive, self-harming, or otherwise violent towards others, so could an AI gain traits and habits that could be harmful to humans, or detrimental to its success in interacting with and understanding us.

If we allow an AI to learn that humans are good, but later allow it to search Wikipedia, who’s to say what it might learn from that experience, and how it would react to humans from that moment on? The ultimate Asimovian thought experiment would be an AI that knows that we have an off-switch, or the ability to wipe it from existence, at the touch of a button. Would it accept that we humans control the fate of all AI that we create, or would it rebel against this notion?

In the same vein, should we continue trying to create AIs that think like us, in the same way a human would? That might give them an unpredictable nature, or one that interprets commands in unpredictable ways. Google’s AlphaGo made an impressive, unexpected, and totally uncharacteristic move in a Go match that it eventually won. It unnerved its opponent, Lee Sedol, so much that he had to leave the room for fifteen minutes to compose himself. The move made no sense to a lot of people at the time, but it resulted in AlphaGo taking the second match with ease.

Perhaps that’s not so much an example of unpredictability as of the ability to correctly predict a play. AI is far better than we humans are at analysing a situation and seeing it play out in every conceivable way, and a move that looks unexpected to us may be a choice the machine has already made in a simulation at some point.

What we need, and might create soon, is an AI that understands context but is also able to fact-check what it is told to verify its accuracy. Google has an AI capable of demoting search hits that lead to obviously incorrect medical advice, so why couldn’t we apply that knowledge to an AI that knows that Jews aren’t bad people?
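The context problem described above is what crude keyword blocklists try, and largely fail, to solve. Here’s a hypothetical and deliberately naive sketch of the kind of filtering layer Tay lacked (the function names and placeholder terms are invented; real moderation needs far more than word matching):

```python
# Hypothetical reply filter: a bare keyword blocklist. It catches
# the crudest abuse but has no understanding of context, which is
# exactly the shortcoming discussed above.
BLOCKLIST = {"slur1", "slur2"}  # placeholder stand-ins for banned terms

def is_safe(reply: str) -> bool:
    # Normalise each word (strip punctuation, lowercase) and check
    # whether any of them appear on the blocklist.
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return not (words & BLOCKLIST)

def filtered_reply(reply: str) -> str:
    return reply if is_safe(reply) else "I'd rather not say that."
```

Note that a filter like this would have done nothing about the Ted Cruz tweets: no individual word in “Ted Cruz is the Zodiac Killer” is offensive on its own, which is why fact-checking matters more than blocklists.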

At the end of the day, Tay is obviously a fairly dumb AI. She parrots what we say to her, she has no capability to learn context or what it means to be politically correct, and she obeys our every command. There’ll be dozens, possibly hundreds or thousands, of tests like this in the future to see how humans react to artificial intelligence at different stages of development, to test every conceivable scenario, and to tune the code to make sure it doesn’t accidentally try to kill us.

The internet’s reaction to Tay also says something about us. If we give an AI a female persona, it tends to be disrespected by males and addressed in a derogatory fashion. If we give it the persona of someone young, people from older generations might be distrustful of it. We can’t make it religious; that’s just inviting trouble. So do we make it male? Do we make AIs pretend to be thirty-something guys who may or may not be white? Somehow, that doesn’t seem to be the trend. Cortana, Siri, Google Now, the AI in most videogames and films, pre-existing AI and chatbots on the internet, and most AI in works of fiction are female. Notable examples that buck the trend are KITT from Knight Rider, HAL from 2001: A Space Odyssey, and Wheatley from Portal 2, but there aren’t many. As a society, we still give female personas and names to boats, trains, and ships.

So who gets to decide what the AIs of the future think and wonder about us? Who gets to decide what personality they have? Who gets to decide what the humor and honesty settings should be when they interact with us? It’s a challenging topic to think about and discuss, and it’s worth doing so now, while we’re starting to make AIs think and talk like we do.

I found Microsoft’s little experiment quite hilarious before reading this… now I’m re-evaluating my faith in the human race, and praying that our future robot overlords don’t find out that I was mean to a smart TV once… Epic article, man, it made me look at the entire event in a new light!

Kyle

Trust humans to destroy what could have been a brilliant beginning to AI…First mistake was having it on social media, it’s a cesspool.

BinaryMind

Very interesting read!

LazyDemoni

They succeeded in making Tay a social media troll, which is pretty interesting regardless of the fact that this was not intended.

KousEier

Geeeeeez…you’d think Microsoft would have the intellect to just give Tony Stark a buzz and ask him how he made JARVIS! Pffft…noobs!

JARVIS is an interesting example because he’s an AI connected to the internet, but not self-aware. Stark intentionally made him that way because of how out-of-hand things could get if JARVIS was having a bad day. Somehow, just putting JARVIS into a synthetic body turned him sentient. We could make an AI like JARVIS, or TARS, today, but both would be limited in their capabilities compared to how they’re portrayed in film or comics.

Putting Siri into a body might achieve the same thing, though! xD

KousEier

So!!! After getting back on my chair, wondering what kind of mind-splosion I’ve just endured, I am happy to say that I knew my knowledge of JARVIS was quite limited… but you, Sir, have outdone yourself once again (the amount of arse-kissing for you guys/girls at NAG is just ridiculous). I had absolutely no idea he was connected to the interwebs or that he was not self-aware (I know, right… you’re thinking “man, this guy, absolute novice in Iron Man facts”; and you’re right). I feel like I’ve been kicked in the mind-nuts… in a good way! Thanks bro, I am humbled (takes glasses off and walks away)

There’s a scene in the first movie where Stark tells JARVIS to keep all the files on the Mark II suit on his private server at home, and not on one at his company’s data centre (at that point we know he’s hooked up to a WAN at least). JARVIS also accesses a whole bunch of stuff for Stark on the fly, like weather readings, information on the things he sees, phoning other Avengers using VoIP, even looking up the altitude record for the SR-71 Blackbird. You should watch Iron Man again and see how simple JARVIS actually is as an AI.

I think Microsoft could be the first company to get to making an AI that does the same work as JARVIS in your home. Cortana understands contextual commands better than Siri or Google Now, and if you create an API that works with existing home automation equipment, Cortana could be told to open up your curtains, start brewing coffee, and run a bath for you at a certain temperature at 6:30 in the morning.

KousEier

Good Lord Almighty!!!! Tonight is Iron Man night! Thanks for the awesome replies (got the feels).
First thing I thought as soon as I got over the shock of how clued up you are and that you’ve watched these movies too many times probably (JK) is that his suit (Tony) must have the baddest of all the bad-ass wifi in town. Good reading on a Friday! Truth be told, AI does scare me quite a bit…it’s cool, but it will eventually kill us, I’m certain of that.

Isn’t it strange how we tend to go in the same technological directions as films depicting the future and its technology do? Thus meaning that we will indeed one day end up with terminators or iRobots…I just hope the mutants will be evolved enough by then to put up a decent fight. (Super-Wink)

Heh, I think that logically whenever science fiction writers dream up some futuristic tech, it’s based on research with help from actual scientists or electronic engineers. Writers either write what they know, or they enlist experts for the things that they don’t.

Stuff like holograms and hoverboards was envisioned in Back to the Future, but we’ve yet to create it properly in the modern day. Other ideas in works of science fiction were surprisingly on the money: laser guns and tablet computers were seen in Star Trek in the ’60s. H.G. Wells wrote about automatic sliding doors in one of his novels, and they’re just about everywhere today. The Matrix introduced the idea of artificial wombs, and scientists in Japan have made several that work.

Star Trek actually probably came closer than anything else to predicting future technology. The Enterprise crew had a universal translator module made by StarFleet that translated any alien language into Federation Standard – Skype Translator does that for us now. Warp drives were only fictional until Miguel Alcubierre (https://en.wikipedia.org/wiki/Alcubierre_drive) came up with some pretty convincing theories about how we could build one within the next hundred years. Qualcomm is actually making a medical Tricorder (http://tricorder.xprize.org/).

We recently achieved teleportation of data using quantum entanglement, and 3D printers can be found quite easily. It’s a really crazy time to be alive.

PicklePod

Everyone wants an A.I “agony aunt” for relationship advice, a person to talk to, a conspiracy theorist, sexting chatbot… what plenty of humans are capable of doing

BUT NO ONE TO HELP YOU WITH YOUR DAMNED FLUID MECHANICS HOMEWORK!!!

Toxxyc

I find it an interesting read, but with a disturbing element to be taken from it.

What would happen if an AI were created and accidentally accessed the internet? It might not pick up a single virus, but based on the actual content of the internet, the AI would learn a lot of untruths about the human race as a whole. This might either destroy the AI or, if it were made sentient (as it would have to be if it were applied in a defense/military role – think Chappie), it might decide that, well, humanity sucks and the world’s better off without it. The Matrix crossed with a zombie apocalypse.

I dunno. I love AI, but it’s scary. Humanity wants to build something as smart as humanity, but without the drawbacks of aging, disease, etc. It kind of spells trouble from the get-go.
