A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

September 17, 2007

Playing (Serious) Tricks on the Mind

The Turing Test was proposed in 1950 by the famous English mathematician and cryptographer Alan Turing, as a pragmatic and innovative approach to testing a machine's capability to exhibit intelligence. Turing developed it to help him consider the question: Can machines think?

In the Turing Test, a human judge engages in a written, natural language conversation with two other parties, whom he or she cannot see. One is a human and the other a computer. If the judge cannot tell which is which, then the computer would be said to have passed the Turing Test.

For years the Artificial Intelligence (AI) community has worked hard on developing software that could pass the Turing Test. One early computer program was ELIZA, developed by Joseph Weizenbaum in the mid-1960s to behave like (perhaps parody is a better description) a psychiatrist conversing with a patient. ELIZA did little more than rephrase the patient's statements as questions posed back to him or her. For example, if a patient said, “I am feeling unhappy,” ELIZA would simply reply, “Why are you feeling unhappy?”
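The rephrasing trick ELIZA relied on is simple enough to sketch in a few lines. The rules below are hypothetical stand-ins, not Weizenbaum's original script: reflect first-person words into second-person ones and turn the statement back into a question.

```python
import re

# Hypothetical pronoun reflections (ELIZA's real script was far richer).
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    words = fragment.lower().rstrip(".!").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    """Rephrase 'I am X' as 'Why are you X?'; otherwise fall back to a generic prompt."""
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I am feeling unhappy."))  # -> Why are you feeling unhappy?
```

The whole illusion of a listening psychiatrist comes from pattern matching of this kind; there is no model of the patient at all.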

A more serious experiment on machine intelligence was IBM's Deep Blue supercomputer, which famously defeated then chess world champion Garry Kasparov in a six-game match in May of 1997.

I was reminded of the Turing Test recently, as I have been watching the huge progress we are making in social networks, virtual worlds and personal robots. Our objective in these applications can perhaps be viewed as the flip side of the Turing Test: we are leveraging technology to enable real people to infuse virtual objects (avatars, personal robots and the like) with intelligence, as opposed to leveraging technology to enable machines and software to behave as if they are intelligent.

The latter, which includes programs like Deep Blue and autonomous robots, continues to be an important and challenging AI objective. But the former class of applications, with people providing the intelligence to the virtual objects, although perhaps technically less challenging than classic AI, has proven to be very useful on many fronts, and is leading to a rich variety of innovative applications.

Let me give a few examples. In a panel at a recent conference, I expressed the opinion that meetings, learning and training are the killer apps of virtual worlds. Killer apps are applications that turn out to be both useful enough and within the reach of the technology at its present stage of development to break out as market-creators. I am convinced that meetings and learning will help unleash all kinds of innovative new applications and propel the success of virtual worlds in the marketplace.

There is something almost magical in finding yourself in a virtual room full of people’s avatars - even though the people are physically located all over the world. I have experienced this feeling particularly when participating in virtual world events, such as a press interview conducted in Second Life to which the public was invited, as well as an all-hands meeting with employees of IBM China, physically conducted by Chairman and CEO Sam Palmisano in Beijing while many of us in different parts of the world joined him through Second Life.

It is hard to convey to those who have not experienced it how quickly you forget that you are in a virtual world rather than a real one. You are fully immersed in the meeting or class, surrounded by avatars, each of which represents a real person. Everyone is interacting with you just as real people do, even though their representation might appear a bit weird at first. If your virtual location is recognizable to you - say, a classroom, auditorium or conference room - and the avatars around you are behaving realistically enough, then your mind quickly adjusts. You may as well be in a real room surrounded by real people.

While virtual worlds are still at a relatively primitive stage, they are beginning to take off for those applications for which the technology is adequate. Think of PCs and the Web in their early days, and of how much better they got over time. Something similar will surely happen with virtual worlds as the technology improves, as we get better tools and platforms, and as we learn much more about how to use these capabilities for meetings, learning and many other kinds of applications.

MIT's Media Lab is taking serious, technology-based mind tricks a few steps further through a variety of devices and physical objects that integrate into social networks and virtual worlds. A few weeks ago I briefly wrote about The Huggable, a project in the Media Lab's Personal Robots group.

The Huggable is a robotic communication avatar designed for social interactions, education, healthcare and other applications. It is essentially a very cute teddy bear, with a sensitive skin, embedded "intelligence" (i.e., hardware and software), wireless communications and the ability to see, hear, speak, touch and move. But what I find so compelling about The Huggable is that it can operate as an autonomous personal robot and as a semi-autonomous robotic avatar that is part of a human social network, providing a much richer set of interactions to the members of the network than is possible using PCs and similar devices.

There is a lot of activity in intelligent, autonomous, mobile robots for entertainment, education, protection and many other applications. Applications range from helping doctors and nurses perform their duties better (even remotely), to providing assistance to the elderly for improved mobility and strength (e.g., vacuuming, helping in getting out of bed, even companionship). Aging populations around the world are a big driver of personal robot products and applications.

The concept of semi-autonomous robots integrated into a social network is new, at least for me. But once you start thinking about potential applications, many come to mind, in areas as diverse as family communications, healthcare, education and entertainment.

For example, imagine faraway grandparents being able to interact with their young grandchildren who are holding and playing with The Huggable. You can talk, read stories and sing to them. You can (virtually) hug them. You can watch and listen to their reactions as well as sense the way they hold and touch the teddy-bear-like device. Imagine a similar scenario with soldiers stationed around the world, being able to interact with their young children in a far richer, more emotional and satisfying way than a phone conversation. Or imagine the help it could provide to children who are not getting enough nurturing and stimulation from their parents, by enabling family members, professionals or volunteers to get involved in their care.

We are only beginning to discover the power of Internet-based social networks and virtual worlds. Physical devices like The Huggable add a whole new class of possibilities. You are playing tricks with minds, but in a somewhat different way than the tricks involved in the Turing Test. You are no longer trying to convince the mind that the software programs, avatars or robots exhibit human-like intelligence. You are now trying to convince the mind that the human-controlled avatars and semi-autonomous robots are realistic stand-ins for the people you are interacting with, so they do not get in the way as you immerse yourself in the application in as natural a way as possible. This is a subtle, but very different kind of mind trick, one that I am sure will result in many important innovations.

Comments

People fail the Turing Test every day.

It isn't how smart you make the 'bot; it is how persuasive you make it. The early AI researchers made the common mistake of treating logical thinking (chess) as a sign of intelligence rather than as a sign of intelligent search in a solution space. To fool a person, engage their emotions intelligently.

The keys to intelligent avatars are:

1. Emotive intelligence (see the HumanML work on modeling a vector-scalar model for emotion engines) expressed as proximity-based acts.

2. Intelligent searching built into the avatar for synthesizing new behaviors based on existing templates.

But first, implement the impersonate() function for identity management and privilege acquisition when avatars enter new, non-local worlds (i.e., a server farm with a different 3D hosting framework than the one on which the avatar was developed).
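The commenter's impersonate() idea can be sketched as a simple policy intersection. Everything below is hypothetical - the types, the privilege names and the mapping rule - since no standard cross-world API of this kind existed; it only illustrates the shape of the problem: an avatar carries its home-world identity, and the non-local host decides which of its requested capabilities to grant.

```python
from dataclasses import dataclass, field

# Hypothetical types - nothing here corresponds to a real virtual-world API.
@dataclass
class AvatarIdentity:
    name: str
    home_world: str
    behaviors: list = field(default_factory=list)

@dataclass
class HostWorld:
    name: str
    framework: str                       # e.g. a different 3D hosting framework
    allowed_privileges: set = field(default_factory=set)

def impersonate(avatar: AvatarIdentity, host: HostWorld) -> set:
    """Map the avatar's home-world identity onto the privileges the
    non-local host is willing to grant (the commenter's impersonate())."""
    requested = {"move", "speak", "gesture"} | set(avatar.behaviors)
    # The host grants only the intersection of what the avatar requests
    # and what its own policy allows.
    return requested & host.allowed_privileges
```

An avatar built in one world and visiting another would thus keep only the behaviors the host's policy permits, which is the identity-management step the comment argues must come first.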