Researchers in the Personal Robots Group at the MIT Media Lab, led by Cynthia Breazeal, PhD, have developed a powerful new “socially assistive” robot called Tega that senses the affective (emotional/feeling) state of a learner, and based on those cues, creates a personalized motivational strategy.

But what are the implications for the future of education … and society? (To be addressed in questions below.)

A furry, brightly colored robot, Tega is the latest in a line of smartphone-based, socially assistive robots developed in the MIT Media Lab. In a nutshell: Tega is fun, effective, and personalized — unlike many human teachers.

Breazeal and team say Tega was developed specifically to enable long-term educational interactions with children. It uses an Android device to process movement, perception, and thinking, and can respond appropriately to individual children’s behaviors — contrasting with (mostly boring) conventional education, with its impersonal large class sizes, lack of individual attention, and proclivity for pouring children into a rigid one-size-fits-all mold.

The preschool classroom pilot

In the evaluation study, the child sat across the table from the robot. The tablet was placed on the table between them. The smartphone collecting data via Affdex sat beside the robot. The child wore headphones that were connected to the tablet and to the robot. (credit: Goren Gordon et al./AAAI)

Testing the setup in a preschool classroom, the researchers showed that the system can learn and improve itself in response to the unique characteristics of the students it worked with. It proved to be more effective at increasing students’ positive attitude towards the robot and activity than a non-personalized robot assistant.

The researchers piloted the system with 38 students aged three to five in a Boston-area school last year. Each student worked individually with Tega for 15 minutes per session over the course of eight weeks.

The students in the trial learned Spanish vocabulary from a tablet computer loaded with a custom-made learning game. Tega served not as a teacher but as a peer learner, encouraging students, providing hints when necessary and even sharing in students’ annoyance or boredom when appropriate. (Teaching vocabulary is an ineffective method for teaching languages, but lends itself to a controlled experiment.)

Personalizing responses

The system began by mirroring the emotional response of students — getting excited when they were excited, and acting distracted when they lost focus — which educational theory suggests is a successful approach. However, it went further and tracked the impact of each of these cues on the student.

Over time, it learned how the cues influenced a student’s engagement, happiness, and learning successes. As the sessions continued, it ceased to simply mirror the child’s mood and began to personalize its responses in a way that would optimize each student’s experience and achievement.
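The article does not publish the algorithm, but the behavior it describes — trying motivational cues, tracking each cue’s effect on the student, and gradually favoring the most effective ones — is essentially a multi-armed bandit problem. Here is a minimal, hypothetical sketch in Python; the action names and the epsilon-greedy rule are illustrative assumptions, not the team’s actual implementation:

```python
import random

# Illustrative sketch (not the authors' code): pick which motivational
# action the robot takes next, using a simple epsilon-greedy bandit whose
# reward is the child's observed affective response to that action.

ACTIONS = ["mirror_emotion", "verbal_encouragement", "hint", "celebrate"]

class AffectivePersonalizer:
    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}  # running reward estimate
        self.count = {a: 0 for a in ACTIONS}

    def choose(self):
        # Mostly exploit the cue that has worked best; sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[a])

    def update(self, action, reward):
        # Incremental mean of the rewards observed for this action.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]
```

Early on, such a learner behaves almost randomly (which, with a “mirror” action in the set, looks like the mirroring baseline); as reward estimates accumulate for one particular child, its choices become personalized — matching the trajectory the researchers describe over the eight weeks.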

Over the eight weeks, the personalization continued to increase. Compared with a control group that received only a mirroring reaction, students with the personalized response were (not surprisingly) more engaged by the activity, the researchers found.

“We know that learning from peers is an important way that children learn not only skills and knowledge, but also attitudes and approaches to learning such as curiosity and resilience to challenge,” says Breazeal, an associate professor of media arts and sciences and director of the Personal Robots Group at the MIT Media Lab.

“What is so fascinating is that children appear to interact with Tega as a peer-like companion in a way that opens up new opportunities to develop next-generation learning technologies that not only address the cognitive aspects of learning, like learning vocabulary, but the social and affective aspects of learning as well.”

The experiment served as a proof of concept for the idea of personalized educational assistive robots and also for the feasibility of using such robots in a real classroom. The system, which is almost entirely wireless and easy to set up and operate behind a divider in an active classroom, caused very little disruption and was thoroughly embraced by the student participants and by teachers.

“It was amazing to see,” said Goren Gordon, a visiting AI researcher who runs the Curiosity Lab at Tel Aviv University. “After a while, the students started hugging it, touching it, mimicking the expressions it was making, and playing independently with almost no intervention or encouragement.”

The study showed the personalization process was still progressing at the end of the eight weeks, suggesting more time would be needed to arrive at an optimal interaction style. The researchers plan to improve upon and test the system in a variety of settings, including with students with learning disabilities, for whom one-on-one interaction and assistance is particularly critical and hard to come by.

“A child who is more curious is able to persevere through frustration, can learn with others and will be a more successful lifelong learner,” Breazeal says. “The development of next-generation learning technologies that can support the cognitive, social, and emotive aspects of learning in a highly personalized way is thrilling.”

The team reported its results at the 30th Association for the Advancement of Artificial Intelligence (AAAI) Conference in Phoenix, Arizona, in February. The work is supported by a five-year, $10 million Expeditions in Computing award from the National Science Foundation (NSF), which supports long-term, multi-institutional research in areas with the potential for disruptive impact.

Probing the uncertain future of socially assistive robots

Speaking of disruption, exactly where is this technology leading us? Don’t get me wrong — I totally love Tega and I want a Chinese version right now for studying Mandarin. But …

Since socially assistive robots are apparently more fun (and presumably more effective at teaching) than humans, what are the possible effects on a child’s development, especially if implemented widely? Is it safe to exclude human teachers from learning?

Could socially assistive robots lead to depersonalization — preferring robots and computers to human contact, especially contact with relatively aversive human teachers (who won’t have the patience or training to continually give students positive feedback like peer robots, and who will make demands on students, or punish them for not following instructions)? Will it cause future children to become functionally autistic?

Could affective (emotion- and feelings-based) feedback from robots (or other devices) eventually lead to entraining a child’s (or adult’s) mind and eventually expose us to more powerful control by individualized, affective-based advertising, entertainment, media, and political and religious doctrine — and by charismatic psychopathic leaders*? It may be a small step from “assistive” to “controlling.”

Will such hypothetical entrainment (or “entertrainment”) — enhanced by immersive (VR/AR) technologies and powerful social media — make us more likely to become an (even more) passive society that is highly influenced (or even controlled) by increasingly intelligent machines that are also more fun and useful than people … or controlled (in proprietary versions) by those technologies’ makers, programmers, owners, or investors?

Such future enhanced assistive robots may have the ability to assume advanced animal, humanoid, alien, and other charismatic forms that could morph in real time and perform astounding, advanced theatrical events and even become an invisible part of our milieu — eventually becoming the dynamic real-time embodiment of an evolving, omnipresent/all-powerful intelligence that develops into superintelligence. Where does that take us?

* No reference to current political candidates implied. :)

MIT Media Lab Personal Robots Group | Tega: A Social Robot

MIT Media Lab Personal Robots Group | Learning a second language with a social assistive robot

Abstract of Affective Personalization of a Social Robot Tutor for Children’s Second Language Skills

Though substantial research has been dedicated towards using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student’s affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children’s valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot’s affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms for a duration of two months. We saw that children learned new words from the repeated tutoring sessions, the affective policy personalized to students over the duration of the study, and students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence, as compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
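The abstract says valence and engagement were “combined into a reward signal” for the affective reinforcement learner, without giving the formula. One plausible combination is a simple weighted sum; in this hedged sketch the equal weights, and the assumption that both signals arrive pre-scaled from the facial-expression analyzer, are my illustrations, not the authors’ published values:

```python
# Hypothetical sketch of the reward combination described in the abstract:
# valence and engagement (e.g., from automated facial-expression analysis,
# each assumed scaled to [-1, 1]) are merged into one scalar reward that a
# reinforcement learner can optimize. Weights are illustrative assumptions.

def affective_reward(valence: float, engagement: float,
                     w_valence: float = 0.5, w_engagement: float = 0.5) -> float:
    """Weighted combination of affective signals into a single reward."""
    return w_valence * valence + w_engagement * engagement

# Example: a smiling, attentive child yields a high reward signal.
r = affective_reward(valence=0.8, engagement=0.9)
```

Because the reward is a single scalar, any standard policy-learning method can consume it directly; the choice of weights encodes whether the robot should prioritize keeping the child happy or keeping the child on task.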

A magnificent toy, but nothing more. Using it could help with education and with correcting many problems, but it obviously cannot replace a human instructor in a classroom.
An artificial system capable of replacing a human instructor would have to be subjective — capable of having its own subjective experience.
Only subjective systems can behave on their own without any need for further programming.
Is it possible to design them? Yes, and such a design is much simpler than designing surrogate system after surrogate system.

“…it obviously cannot replace a human instructor in a classroom.”
Maybe not yet.
But, considering the poor quality of teaching (and, unfortunately, even teachers as people, not just as educators), there is a lot of low-hanging fruit for robotics/AI to pick.
It’s been known for a long time that teaching large groups of children (or adults, for that matter) is not the most effective way to teach, just the most cost efficient.
Robots/AI will eventually produce a more cost efficient AND more effective way of educating by
*personalisation of the teaching – a personal teacher matched to the student, rather than the student having to adapt to the teacher (yes, I know that we also learn things from having to adapt like this)
*applying best practice – human teachers are of varying (and, IMHO, generally low) quality – you get what you pay for. Each robot/AI could apply best practices sourced from the best human teachers and other sources
*at some point, they’ll be cheaper than human teachers, eventually a lot cheaper.

It will be interesting to see how long it takes for us to find the medieval church/19th century factory concepts of “teaching” and “classroom” hilarious. I’m guessing about 5 years, when “deep education” (to coin a phrase) and robot workers are ubiquitous, and there are no “jobs” (another funny word) to prepare for. Generation Z, the Matrix/Borges Tower of Babel generation with its total access to/immersion in self-reinforcing mixed-reality miasma will have taken over and the future dissolves into a blur…..

I’d be surprised if it were that quick, 5 years. But, by then I think it will be very clear to many that we will soon be able to “educate” our-selves/children more efficaciously, though I doubt whether most of us will have also realised by then that the last thing we want to do to our children is educate them in the traditional sense, i.e. that our ideas of “educate” need to radically change for the good of our children. So, not in time for my early-teens son, but maybe my grand-kids. I also doubt whether states will, like in many other areas, be able to adapt quickly or be inclined to give up their control here and elsewhere to systems/individuals that can react more quickly.

The main problem that I see as an educator is the student/teacher ratio and the value that pupils place on education, which is often directly related to the socioeconomic environment where the children live. I find myself working in a classroom with two or three dozen students, and try as I may, I cannot give the attention that an AI personal tutor would give to every child. But that shift from human to AI teacher will happen only when machine learning progresses to the point of understanding and expressing accurate human emotions at the appropriate time, using all aspects of language. In other words, AI needs to become a personal friend/supervisor capable of having a relationship with a student like a human teacher has.

Bingo! To call this a teacher may be a misnomer. The individual attention makes it more of a *tutor*, which almost always provides a superior learning experience. Admittedly, this is a good step toward such a thing, but it is only a first step. Throw in a decade or two of faster machines with better programming, and it will probably start to make “factory” instruction look more like “cruel and unusual.”

That said, I think very highly of some teachers of my own; they facilitated personal growth, inspiration, and asked the right questions. That last may be the most important of all: how else can you find the answers that work best for you? I doubt students would talk with this puppet about extracurricular things that matter a lot, nor is it likely to sense the subtle clues that point to problems at home, another important social function almost unique to teachers.

Gordon… Scary stuff when you consider there is, or could be, alien technology that may be millions of years ahead of us. It may also be friendly or unfriendly, as the case may be. In such a situation our technology would be totally useless.
Just my opinion, Larry.

As an educator (substitute teacher) who works with elementary school kids every day, I’m honestly convinced that advanced AI will one day replace teachers. But this socially assistive robot technology is not even close to being advanced enough to replace human educators. Once we develop an AGI that is basically like a person/friend, with the capacity to give the same response that a biological person gives, then we’d have in our hands a tutor with the capability to supervise children and to give personalized education. On that day, human educators will become obsolete. Given the advances in machine learning, my gross estimate is that that type of AGI will be a reality somewhere around 2030; I might be wrong in my estimate, but I don’t think by much.

Yes, as I stated: “…when future assistive robots, enhanced with deep learning and virtually omniscient access to information…” — not currently. I don’t know of any evidence to support any specific development time for such technology. As for “the same response that a biological person gives”: I’m more interested in machines that far exceed the narrow skills of human educators, who seem more focused on shaping minds to conform with convention and the past rather than helping create the geniuses needed to keep up with superintelligence. It takes genius to create genius, and geniuses don’t have any interest in babysitting kids.

Amara, understood, although I did not claim to know when AGI will arrive, since I used the words “gross estimate.” Nonetheless, we can extrapolate approximate developmental timelines as Ray Kurzweil does with IT. We do know that machine learning is happening at an exponential pace, and very recently Google’s AlphaGo beat the world’s champion Go player from South Korea 4–1, 10 years earlier than people thought possible. Also, my experience working with kids tells me that a future AI will need to be able to praise or reprimand a child at least as adequately and timely as a biological teacher. By “the same response that a biological person gives” I meant to understand and to express accurate human emotions at the appropriate time, using all aspects of language. Finally, I didn’t understand what you meant by “geniuses don’t have any interest in babysitting kids,” since there are geniuses that showed, at least virtually, interest in, among other things, babysitting kids, i.e., Bill Nye “The Science Guy” or, before him, Jim Henson.

Wait a minute! I always love to agree with you, Amara, but as an engineer, maybe you didn’t take enough of the offered Psychology electives to learn about this famous Swiss Psychologist (or maybe you just forgot about him):

From Wikipedia

“Jean Piaget (French: [ʒɑ̃ pjaʒɛ]; 9 August 1896 – 16 September 1980) was a clinical psychologist known for his pioneering work in child development. Piaget’s theory of cognitive development and epistemological view are together called “genetic epistemology”.

Piaget placed great importance on the education of children.”

This genius learned a lot with kids, but they must have thought of him as this old, genial, babysitter.

On the other hand, wisely used, such ‘bots could make compassionate, well-informed, sane humans out of such people. Which I’m hoping they can do to me. (I suspect this will require further development of ethics as a science; for clarification of what this means and why it’s important, I recommend Sam Harris’ book “The Moral Landscape” in the strongest possible language.)

I don’t see it as a bad thing. We probably would put a lot less strain on our ecosystem and environment by consuming and polluting less via the current means of travel. Even the space we consume could be far less. Some people might find it frightening I see it more as liberating.

In a few years, education will probably all be in the home, via the Internet or some other suitable way. At that point there would be a need to see the pupil’s response on the computer he or she is using. All teaching, or most of it, will probably be by AI or a similar system. Big Brother will probably be a long-forgotten thought? Just my opinion, Larry.

Just so everybody is aware: Apple just recently acquired a company that was looking at emotional facial expressions to determine mood and emotional content. So I would stay away from any further Apple products in the future, and definitely keep them out of the hands of children.

They are probably discounting the novelty effect and how the child will respond to the device after a few weeks when they get bored with it, particularly older children.

Having a live camera feed in a classroom that feeds to external systems will disturb many parents, therefore the entire concept may be blocked on privacy grounds.

I have a very low opinion of the traditional education system and have taken steps to ensure my own children have a 5:1 student-to-tutor ratio, but I do not think that machine intelligence is anywhere near being an adequate replacement for humans in this area. Children can actually employ rather sophisticated strategies to subvert learning activities if they are not interested in the subject for whatever reason, and you can’t rely on them signalling that they are bored, either, because they know that will elicit an intervention; they will just “throw spanners in the works” to sabotage the process.

I have a rule that excludes sending my kids to a standard school, or even child care: “Never put people in charge of your kids if their IQ is lower than the child’s.” This also applies to systems and machines. I can’t articulate it fully in logical terms, but the concept of the transmission of knowledge and wisdom from older to younger humans does seem very real to me, and so does its importance. There is a synergy there that goes way beyond good curriculum and motivation, etc.

I’m really glad you said this, DSM: “Never put people in charge of your kids if their IQ is lower than the child’s.”

It wasn’t really good for my nine-year-old self to learn that I knew more about dinosaurs than my 4th grade teacher. When I told her that there were no cave men alive in the time of the dinosaurs, she took me out into the hallway and snarled, “You’re not going to make a fool out of me!” Then she shook me the way a terrier shakes a rat. I heard the seams rip in my shirt.

Hmm. I wonder if it is better to be dumb or to play dumb? I saw a show last night claiming it is possible there are alien AIs that may react unfavorably to our AI entities. Probably just conjecture? Certainly hope so. Larry

Wish I’d seen that, Larry. Was it a cable network? They repeat their shows from week to week.

Do you remember my conjecture that the answer to the Fermi Paradox is that the aliens are here, walking among us, studying us the way our anthropologists would study the !Kung, or the headhunters of Irian Jaya?

Gordon, it was on the Science Channel, if I remember right. At the same time they showed pictures of ruined structures on Mercury, Venus, and Mars, intimating how this tied in to their demise. Larry.

Gordon, check out if you have the Science Channel. Look up How the Universe Works, under “Venus, Earth’s Evil Twin.” Comes on in the next day, I think.
This should be the same episode I saw before, I think. Comes on at 5 PM.
Larry.

But let’s think about alien AI entities for a minute, Larry. Aliens as aggressive as the Klingons, or the aliens from the movie “Predator” or the Ridley Scott film “Alien,” might design some very aggressive AIs.

But the aliens of the film “ET” would have AI entities that would walk calmly among us, just observing.