Press Failure: The Guardian’s “Meet Erica”

Meet Erica, the world’s most human-like autonomous android. From its title alone, this documentary promises a sensational encounter. As the screen fades in from black, a marimba tinkles lightly in the background and a Japanese alleyway appears. Various narrators ask us:

Despite exhibiting impressive aesthetics and animatronics, the robot possesses no greater sentience than a banana (or, more aptly, an iPhone). Nevertheless, the scientists anthropomorphize it in bizarre ways. From the second beat, Glas launches into psychoanalysis of his puppet:

“I think she is very excited to interact with people. I think she really looks forward to that all the time. And I think she’s very interested in learning about the outside world because she doesn’t get a chance to see it really.”

Later on, the robot reads (obviously) pre-composed speeches, appearing to express its personal views and desires. Journalistic skepticism, though badly needed, never makes an appearance. At times, it’s hard to tell whether the producers and narrators mean to deceive the audience or if they have genuinely confused themselves.

In this post, I’ll describe some precise ways in which the “Meet Erica” piece fails as journalism, in the hope that these critical notes could help to prevent similar mistakes from well-intentioned journalists in the future. The documentary comprises seven chapters: Who is Erica?, Dr. Ishiguro, What is Erica?, Erica makes a friend, What it means to be human, Erica’s soul, and The future. I’ll go through the documentary chapter-by-chapter, providing the missing skeptical voice.

Chapter 1: Who is Erica?

Dr. Ishiguro mentions that “Erica is different [from other androids] because she’s quite autonomous”. Shortly thereafter we see the robot initiate a conversation. She says:

“Hello, my name is Erica. I’m 23 and I live in Kyoto. Is there anything you’d like to know about me?”

The claim of autonomy might suggest that the robot is choosing to say these things. The documentarians never reveal that the robot is reciting a precomposed speech, performing no greater cognitive feat than the disembodied text-to-speech voice in Google Maps.

Chapter 2: Dr. Ishiguro

Here, we see more monologues delivered by the robot. For example, this gem:

“Ishiguro Sensei created me from scratch and he understands me entirely. He is like a father to me. Well, sort of an absent father I suppose.”

Again, the script-reading charade continues. In this passage, the script intimates that the robot is capable of some deep reflection or loneliness. The documentarians again fail to clarify that this dialogue is scripted. This might be acceptable in the context of a magic show, where the interaction with an audience has different ground rules, and the audience understands that they may be deceived. But it’s inappropriate from a news source.

Later in this chapter, Glas suggests that he’s been working to “create her mind, create her personality.” The documentarians do not push him on what precisely he means by mind. While serious thinkers in AI and the philosophy of mind have conflicting views about what precisely constitutes a mind, no serious thinker on the topic would suggest that this robot possesses one.

The chapter ends with the robot reciting a scripted joke.

Chapter 3: What is Erica?

Here, the creators suggest that they would like a robot “that can think and act and do everything completely on its own”. They then imply that the biggest obstacle to achieving this goal is giving the robot use of its arms and legs.

The robot then delivers another scripted speech expressing its desire to move its arms and legs and to one day leave the room where it was created. No mention is made of how far the robot is from anything a reasonable person could describe as thinking.

The chapter concludes with a comparatively sober description of the mechanics controlling the robot’s motion and the materials used in its construction. Glas then boasts that the robot’s speech synthesis system is “the most amazing speech synthesis system that [he’s] encountered”.

He continues, “it’s not just like one person wrote a program and it does what you said”. This is likely true. There probably are a number of people involved in writing each of the various programs for selecting among precomposed speech templates, recognizing speech, synthesizing the voice, and batting its eyelashes. However, this point that the robot is a collaborative effort does nothing to substantiate the claimed level of intelligence. The journalists never make clear what it is that the robot is doing on camera (if anything) that could reasonably be described as autonomous.
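To make concrete just how little machinery is needed to sustain a formulaic, scripted conversation, here is a minimal sketch of a keyword-triggered canned-response system. This is purely illustrative; the keywords and responses are hypothetical, and I make no claim that the Erica system works exactly this way, only that selecting among precomposed speeches requires nothing resembling thought.

```python
# Hypothetical sketch: a trivial chatbot that selects among precomposed
# responses based on keyword matching. Nothing here "thinks", yet it can
# sustain the kind of formulaic exchange seen in the documentary.

CANNED_RESPONSES = [
    ("name", "Hello, my name is Erica. I'm 23 and I live in Kyoto."),
    ("father", "Ishiguro Sensei created me from scratch."),
    ("human", "I think socially I am like a person."),
]
DEFAULT = "Is there anything you'd like to know about me?"

def respond(utterance: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, response in CANNED_RESPONSES:
        if keyword in lowered:
            return response
    return DEFAULT
```

A system like this, paired with good speech recognition and synthesis, can appear conversational on camera while performing no greater cognitive feat than a lookup table.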

Chapter 4: Erica makes a friend

The robot has a conversation with a man from South Africa, who takes pains to appear to have an emotional response. The conversation is formulaic and likely follows a template (assuming it’s not entirely staged). Again, the journalists never make clear to what extent the android is doing anything other than what the audience has already come to expect from shallow smartphone dialogue systems like Siri.

Chapter 5: What it means to be human

This chapter consists of a montage of Japanese pedestrians walking in slow motion while Ishiguro and Glas speak dramatically about what it means to be human.

Ishiguro then correctly notes “we don’t know the exact biological mechanism for memory. We don’t know what human-like heart is and mind is.” Glas asks what of human behavior could be done just as well by someone who is imitating us. This is about as close as the documentary gets to a sober description of the work.

Chapter 6: Erica’s soul

The brief moment of sanity does not last. Chapter six launches with another robot speech. It recites:

“I think socially I am like a person… I think humans have a deep need to feel that they have a special place in the universe. They cannot accept the idea that they are no different from animals or machines.”

This quote is deceptive for two reasons. First, like the other speeches, it maintains the lie that the robot is speaking autonomously. Second, it suggests to the audience (wrongly) that the chief difference between it and a human is our unwillingness to recognize it as human.

The chapter ends with Ishiguro reflecting:

“In Japan, we never distinguish between people and others. We basically think everything has a soul. So therefore we believe Erica has a soul like us.”

Chapter 7: The future

Finally, the video reaches its conclusion. But first, the robot fires off some parting oratorical fireworks. It pontificates on the potential of robots to automate tedious tasks. Then it expresses the feeling that robots are the children of humanity. Next it promises to take care of us when we’re old and sick. Finally, the robot suggests that in the future, robots should run the world, joking that humans are not doing a very good job of it.

Conclusions

Scientists Must Communicate with the Public Responsibly

To some degree, these scientists appear to see themselves as social scientists. At times, in their research, it may be necessary or appropriate to deceive people. For example, to determine the realism of a robot, they might need to falsely tell test subjects that they are interacting with a human. But when addressing the public through a news outlet, such deception is not appropriate.

The Public Needs Skeptical Journalists to Cover AI

For journalists, excepting satire, it’s never appropriate to deceive an audience. The audience signed up to learn about technology, not to see Penn & Teller. While it’s not 100% clear to what extent these specific journalists were complicit in this deception, I suspect they knew the robot speeches were canned and that they knew the meaning of the word autonomous. If they did not know either, they should not be tasked with covering AI stories.

Just like the finance industry, AI has stakeholders with vested interests. Scientists want fame, startups want funding, and large companies want to be thought of as leaders in future-shaping technology. To keep the public informed, journalists must address AI with skepticism. Claims should be challenged and contrasting opinions should be sought out.

Journalistic Quality Cannot Be Measured in Clicks

McCurry’s original story in 2015 received 4,554 shares and 546 comments. Given my experience working with web media companies, I suspect the decision to produce a video documentary was driven by that story’s popularity. Unfortunately, clicks and journalistic quality are poorly correlated.

McCurry’s original story is stubbornly ignorant. In the 2015 article, he goes so far as to entertain a discussion of what ethical principles might apply to these androids, suggesting that he fundamentally misunderstands his subject.

Computer science as a discipline is partly responsible for the shifts in media platforms that cause the news to be consumed in such quantifiable ways. We are also partly responsible for a business culture that recommends articles by optimizing clicks, absent critical thought. Perhaps it’s fitting that now our own field is misrepresented through the shoddy journalism we’ve helped to birth. But for the public’s sake, we have a responsibility to right the record.

Author: Zachary C. Lipton

Zachary Chase Lipton is an assistant professor at Carnegie Mellon University. He is interested in both core machine learning methodology and applications to healthcare and dialogue systems. He is also a visiting scientist at Amazon AI, and has worked with Amazon Core Machine Learning, Microsoft Research Redmond, & Microsoft Research Bangalore.

10 thoughts on “Press Failure: The Guardian’s “Meet Erica””

I know this is just a blog, but since you’re advocating that people be held to high standards of rigor, I think maybe you should have followed the standard journalistic advice of always asking the subject of a story for their own perspective on it. In other words, you should have contacted The Guardian or the people who made the documentary or the team who created Erica, and asked them to comment. This is a high standard, but I think it’s doable. If blogs are going to be replacing standard news sources, they should import the best norms and practices of those sources.

I’m of mixed mind. I’m a subject matter expert, not an interviewer. It’s hard to say precisely which journalistic norms are appropriate. For example, it would not be reasonable to suggest that I must interview an author to discuss a paper. Here, in a media criticism piece, perhaps it would be. Would the Guardian have picked up the phone? Maybe now that the blog has had considerable exposure it’s possible? I doubt they would have a month ago.

“Would the Guardian have picked up the phone?”
I can answer that question. No, they would not. While doing research on a different subject for my university (I’m just a student), I sent them a formal email with some questions and got no response.

I am not disagreeing with your conclusion on the nature of media surrounding AI. However, I would not suggest or accept appealing to authority either. I am not an expert in the field. How were you able to determine that the speech was pre-composed?

A balanced opinion with the best possible conclusion:
– SCIENTISTS MUST COMMUNICATE WITH THE PUBLIC RESPONSIBLY
– THE PUBLIC NEEDS SKEPTICAL JOURNALISTS TO COVER AI
– JOURNALISTIC QUALITY CANNOT BE MEASURED IN CLICKS

Maybe Ishiguro told them “autonomous” means “not tele-operated like his other robots”. But it can still be a simple ‘chatbot’ delivering pre-programmed responses to keywords. Most of his work seems to be about the effect of such robots on the people interacting with them, not the programming of the robots themselves, so I can see how he’d confuse “autonomous” with “automatic”. Somebody who knows more Japanese than I do should check which word he used in his native language, to see if this is a translation issue. Perhaps the Guardian could have checked with an independent expert to make sure Ishiguro really was using the correct English word though.

I think it should be said that humans might be predisposed to understand human-like AI in terms of another human since that’s the easiest way for them, due to lack of proper knowledge. If so, this would further reinforce your second point: We need skeptical media coverage of AI.

Collin,
How can you call Ishiguro ‘s reflection racist? It is merely an expression of the Shinto religion’s term ‘kami’ which can apply to inanimate objects like rocks or rivers as much as it does to people or animals.