Children are growing up with technology that blurs the line between animate and inanimate objects. How does this interaction affect kids’ development?

By Stefania Druga and Randi Williams

The line between machines and living things is blurring. Today, artificial intelligence (AI) is embedded in all kinds of technology, from robots to social networks. This affects the youngest among us as we see the emergence of an “Internet of Toys.” That trend is what prompted us to explore the impact of these “smart,” interconnected playthings on children. We’ll present our paper, “Hey Google, is it OK if I eat you?: Initial Explorations in Child-Agent Interaction,” at the Interaction Design and Children conference at Stanford University on June 27. This blog post provides a preview of our findings.

Already, the “Internet of Toys” is raising privacy and security concerns. Take Mattel’s Aristotle, for instance. This bot, which is like an Amazon Echo for kids, can record children’s video and audio and has an uninterrupted connection to the Internet. Despite the intimate link Aristotle has with young children, Mattel has said that it will not conduct research into how the device affects kids’ development. In February of this year, another smart toy, the interactive Cayla doll, was taken off the market in Germany because its Bluetooth connection made it vulnerable to hacking.

Three key questions

Beyond ethical and security issues, the emergence of these devices raises serious questions about how children’s interactions with smart toys may influence their perceptions of intelligence, cognitive development, and social behavior. So, our long-term research objectives are motivated by the following questions:

How could exposure to, or interaction with, these smart bots affect children?

What are the short- and long-term cognitive and civic implications?

What design considerations could we propose to address the ethical concerns surrounding these issues?

Our research presentation at the Interaction Design and Children conference is an early pilot that begins to explore such questions. The pilot consisted of a playtest study in the Lifelong Kindergarten space at the MIT Media Lab. Twenty-seven children, aged four to ten, interacted with a series of intelligent devices and toys. We observed as they played with the smart toys and asked them questions about trust, intelligence, personality, engagement, and whether they saw the devices as social entities.

Children and their parents, along with study helper Mariana Tamashiro (at right) during the playtest session in the Media Lab's Lifelong Kindergarten group.

Credit: Stefania Druga and Randi Williams

Initial observations

Each of us brought different perspectives to the study, and sometimes that led to different responses. So, at this point, we’ll share our separate observations of how the children reacted during the playtest.

Randi: The biggest surprise for me was that the things that impressed me, as an engineer, were often not as important to the children. I expected everyone’s favorite toy to be Cozmo because of its technical abilities (despite its small size) and its expressive interactions. However, many children, especially the younger ones, were oblivious to Cozmo's most advanced features and had a hard time understanding Cozmo's expressions. They also thought that Alexa’s voice was nicer than Google Home’s because of the energy in the former's voice. Yet, from my perspective, Google Home had the more human-like voice. To be clear, Google Home and Alexa are not designed as toys for young children. However, many devices that are designed as toys treat the quality of their text-to-speech as an afterthought, and our study suggests that it really matters to kids. It is important for the people creating these technologies to understand how children perceive these devices.

Stefania: I liked how the younger children probed the nature of the different devices by asking the conversational agents to eat an apple or to open the door. The kids would also use genders interchangeably when referring to the devices. For example, when Gus and Larby (6 and 9 years old) were asked about Cozmo’s gender (because both boys alternated between “he” and “she”), Gus said, “I don’t really know which one it is.” “It’s a boy,” Larby said, “maybe because of the name. But, then again, you could use a boy name for a girl and a girl name for a boy.” Later in the conversation, they concluded that Cozmo “is a bobcat with eyes,” and they expressed surprise at the device’s expressions. “He has feelings,” Larby said. “He can do this with his little shaft and he can move his eyes like a person—confused eyes, angry eyes, happy eyes.”

Gus (6 years old) playing with Anki's Cozmo. Stefania asked, "Why is it important for the toy to have expressions?" Gus responded, "Because then they have...then they have a mind."

Credit: Stefania Druga

A lot of the participants were quick to raise questions that the devices couldn’t answer, like: “Do you have any arms?”; “What question do you have for me?”; or “Do you have a boyfriend?” Both of us were surprised to see that most of the older children (6-10 years old) thought that the agents were more intelligent than they themselves were, even though the devices’ current capabilities are still fairly limited. They supported this opinion by saying that the devices were more intelligent because they had access to more information. Younger children (4-6 years old) weren’t as sure that the agents were more intelligent. We expect that children's perception of the devices' intelligence will change in future playtests, as participants learn not only how the agents make sense of the world around them, but also how they, as users, can program the devices and influence their decisions.

We also loved that the older children we interviewed expressed the desire to design and program their own AI "friends." We’re curious to explore how young people could help us researchers perceive future applications of AI and machine learning in new ways—as new modalities for expression, interaction, and creativity—and not only as tools meant to improve our productivity.

Many of the children, such as 7-year-old Viella, attributed feelings and personality to the agents: "She is like a robot but more capable of feelings and giving answers," Viella said, adding that she'd like to play with the agent again “as soon as possible.”

How we organized the study

Over a period of six hours, we ran two separate sessions—one for a younger age range (four to six) and one for an older range (six to ten). In each session, participants were randomly divided into four groups. We then assigned each group to a station where they could play with one of these devices, or "agents": Amazon Alexa, Julie Chatbot, Google Home, Tina the T. rex, My Friend Cayla, and Cozmo. Each station had enough devices for participants to interact alone or in pairs. At the stations, researchers introduced the agent and then invited participants to engage with it.

After playing with the first agent for 15 minutes, participants rotated to the next station to interact with a different device. Each session of structured play was followed by a questionnaire, in the form of a game, that assessed each child’s perceptions of the agent. We also interviewed three boys and two girls to delve into their reasoning, selecting children who'd played with various devices and displayed different interaction behaviors. Between interviews, participants could free-play with the agents. They clearly enjoyed that free-play experience and expressed the desire to do it again.
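The rotation described above follows a simple round-robin pattern. As a rough sketch only (participant names, group sizes, and the four-station subset are our own placeholders, not the study's actual assignments), the scheduling logic could look like this:

```python
import random

# Hypothetical sketch of a station-rotation schedule; station and
# participant names are placeholders, not the study's real assignments.
stations = ["Alexa", "Julie Chatbot", "Google Home", "Cozmo"]

def make_rotation(participants, stations, rounds):
    """Randomly split participants into len(stations) groups, then move
    each group to the next station every round."""
    participants = list(participants)
    random.shuffle(participants)
    n = len(stations)
    groups = [participants[i::n] for i in range(n)]
    schedule = []
    for r in range(rounds):
        # In round r, group g plays at station (g + r) mod n.
        schedule.append({stations[(g + r) % n]: group
                         for g, group in enumerate(groups)})
    return schedule

kids = [f"child{i}" for i in range(1, 13)]
plan = make_rotation(kids, stations, rounds=4)
```

With four rounds, every group visits every station exactly once, and every child is assigned to exactly one station per round.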

Why we collaborated

As we mentioned above, we come from different research areas: Randi is in the Personal Robots group, which studies human-robot interaction and seeks to build technology that can socially engage with people. Stefania is in the Lifelong Kindergarten group, which has a long tradition of developing creative learning activities and platforms for children, such as Scratch. We wanted to combine these missions and skills to explore how kids engage with “humanized” intelligent agents and how their interactions could be made playful. We also share an interest in how children might tinker with and create their own intelligent toys.

This video shows activities built with Ergo Jr. Robot in Scratch to teach kids how they could program and train a robot. (Credit: Stefania Druga)

Learning from others

We are inspired by Sherry Turkle’s book, The Second Self (first published in 1984), in which she argued that the intelligence of computers encourages children to revise their ideas about animacy and thinking. In her research, Turkle observed that children attributed intent and emotion to objects with which they could engage socially and psychologically. Prior studies had shown that children build relationships with these objects the same way they build relationships with people, and that they consider electronic toys, even comparatively simple hand-held computers like the Speak & Spell, to be ontologically different from other objects.

The big difference today is that these agents are more complex and widespread, becoming an integral part of children’s lives. To our knowledge, our study is the first that aims to reach a deeper understanding of how children perceive these modern intelligent agents through direct interaction. Some other recent studies have investigated certain psychological and social states that children would attribute to humanoid robots.

What’s next?

Stefania: I am organizing a series of workshops in public and private schools near the Media Lab in Cambridge, Massachusetts, for kids to learn to design and program their own intelligent toys. Together with Eesh Chilukalapalli, a Lifelong Kindergarten intern, I created a series of Scratch Extensions that enable kids to program modular robots to recognize and classify images or numbers, record a movement and play it back, or draw in collaboration with the user.
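To give a sense of the "record a movement and play it back" activity, here is a minimal, simulated sketch. The class and method names are illustrative, not the actual Scratch Extension API, and poses are plain lists of joint angles rather than real servo commands:

```python
# Minimal simulation of a "record a movement and play it back" activity.
# A real extension would drive physical servos; here everything is
# in-memory, and all names are illustrative assumptions.
class MovementRecorder:
    def __init__(self):
        self.frames = []  # (timestamp_seconds, joint_angles) pairs

    def record(self, timestamp, angles):
        """Store one pose sampled at the given time."""
        self.frames.append((timestamp, list(angles)))

    def playback(self):
        """Yield (time_offset, angles) with the original relative timing;
        a real robot would wait out each offset, then command its servos."""
        if not self.frames:
            return
        t0 = self.frames[0][0]
        for t, angles in self.frames:
            yield t - t0, angles

# Record three poses, then replay them from time zero.
rec = MovementRecorder()
for t, pose in [(2.0, [10, 20]), (2.5, [15, 25]), (3.0, [20, 30])]:
    rec.record(t, pose)
replayed = list(rec.playback())
```

The design point for kids is the same one the extensions aim at: a movement is just data that can be captured, inspected, and replayed.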

Randi: I’m planning for the next related study to be about long-term interaction with robots and other such devices. When a child has played with an AI for a long time and developed a relationship with it, how might that change their opinions about the way its mind and “emotions” function? I’m currently working on a preschool-oriented programming (POP) toolkit to introduce preschoolers to programming, robotics, and machine learning by allowing them to build and program their own robots. The platform consists of a mobile phone, LEGO blocks, LEGO WeDo motors and sensors, and a graphical programming language based on Scratch Jr. The PopBot blocks appeal to an assortment of kids’ interests, allowing them to use sensors, run motors, change LED colors, change the robot face, play music, and trigger robot animations.

In researching how kids could create their own AI devices in the future, we’re inspired by the philosophy of learning through tinkering and making—as seen in this photo of Scratchers taking part in the Light Play workshop at this year's Scratch Day @ MIT.

Credit: John Werner Photography

As we both look to the future, we believe that researchers and device makers should pay more attention to how children interact with smart toys. While adults envision the future of AI as focused on self-driving cars, personal assistants, and robot maids, children are more open to the imaginative possibilities. Their flexibility to see and interact with AI agents as entirely new entities is inspiring us to imagine and create novel forms of interaction. This could also become a new modality for expression, and for exploring not only the device’s nature but also our own.
