When you become a robot, everyone smiles at you. That’s what happened when I appeared in Technology Review’s office in Cambridge, Massachusetts, in the form of a four-foot-high wheeled robot with a small screen, speakers, camera, and microphones on top. The screen and speakers let my distant coworkers see and hear me at my desk in San Francisco, and the camera and microphones sent their images and voices back to me. While I sat at my computer, I could roll the robot into meetings or seek people out in their cubicles 3,000 miles away. But the smiles directed at the other me were rarely the kind I would want to receive in person. They were more like those directed at a cute but annoying toddler.

My robot body, a loan from Vgo Communications, could do some of the basic things I would do in person: move around the office to talk and listen, see and be seen. But it couldn’t do enough. In a group conversation, I would clumsily spin around attempting to take in the voices and body language outside my narrow range of vision. When I walked alongside people, I sometimes blundered into furniture, or neglected to turn when they did. Coworkers were tolerant at first, but they got frustrated with my mistakes—much as they would if I struggled to keep up or smacked into a chair during a professional conversation in person. And my embarrassment after such errors burned no less than if I had been there in the flesh.

Vgo’s machine and the other telepresence robots just appearing on the market are embarking on the first real-world test of what happens when humans attempt to use machines as their physical stand-ins. An office near you may well be part of this experiment, because, imperfect though these body doubles are, they address a need unmet by other communication tools. Increasingly we work at a distance from our colleagues, heavily reliant on phone calls, instant messages, videoconferencing, and e-mail. But face-to-face contact in the same office is still considered the gold standard for collaboration. That’s not just a gut feeling: research shows that trust is harder to build, and more brittle, when we use telecommunication technology than when we interact in person. Telepresence robots are the first technology that attempts to replicate in-person contact and the benefits of being physically near people.

Rather than humanoids, these are wheeled robotic stick figures—and yet they can be more human than any other technology in use today. The physical footprint they occupy is similar to a person’s, and users guiding the machines can try to act as they themselves would if they were there. That encourages people interacting with the robot to behave as if it were a person. Users on both sides report better recall of situations and meetings relative to videoconferencing or phone calls.

Hi, Robot: The Vgo machine lets the San Francisco–based reporter converse with a colleague as he walks down a corridor at Technology Review’s Cambridge office

Some of the earliest adopters are bosses who want to see and be seen by their underlings at any given time. “One of our clients is CEO of a 40-person tech company in Silicon Valley that has half its engineers in Russia,” says Trevor Blackwell, founder of Vgo’s competitor Anybots, which began selling its telepresence robot in late 2010. Similarly, one of Vgo’s first users is Neal Creighton, CEO of the Web startup RatePoint. Creighton and his sales manager are often on the road and need to keep their sales force of 25 people motivated. “It’s possible to roll around, be seen to be listening to the calls going on, and provide coaching based on what we hear,” says Creighton. “I’d estimate we see a 15 to 20 percent lift in results compared to not having the robot.” Another Vgo owner uses the robot to inspect goods rolling off production lines in China.

For a business tool, these robots are relatively cheap, especially when compared with the cost of travel or high-end videoconferencing systems, which can run into the tens of thousands of dollars. Anybots charges $15,000 for a bot. Vgo charges $5,000 up front plus a yearly support fee of $1,200—less than a business-class ticket from San Francisco to Beijing.

Unfortunately, having robots stand in for people “makes them subject to dangerously high expectations,” says Clifford Nass, a Stanford University professor of communications and computer science whose recent book The Man Who Lied to His Laptop explores how we relate to technology. “If they were built like TV sets on wheels it would be totally different.”

Almost all studies of how humans interact with robots have looked at autonomous, intelligent robots capable of helping people in their homes. But the lessons of that research are equally applicable to telepresence robots, Nass argues, because they are meant to simulate a human’s roles and behaviors. One such lesson is that humans expect robots to act like humans—to follow human rules about social space and body language. When the robots fail at that, people feel insulted and annoyed. Other experiments have shown that robots—and people—that move jerkily or react slowly are routinely judged to be of inferior intelligence. Such failings in a telepresence robot may reflect on the person controlling the machine, not the offending electronics, and could harm the very professional relationships these machines are supposed to help. “We give other people credit and blame for the body they have, and in this case your body is the robot,” says Nass. “It’s meant to be you, and so it better act like you.”

Practice at using a telepresence robot does cut down on the blunders. But despite their pilots’ best efforts, they will inevitably move and act strangely at times because of their less-than-human senses and range of motion. Their operators will also have to shoulder the burden of technical glitches. I found the fallout of a poor connection caused by a flaky wireless router, for example, to be nothing like that of a dropped phone or video call. In one meeting, as the audio connection faltered and my voice broke into digital static, I could see the annoyance spread on my colleagues’ faces. When the connection dropped entirely, I was embarrassed that my body had become their problem, stranded in the middle of the room. When I logged back in, I was being borne across the office in someone’s arms like a child.

A logical response from the builders of these robots is to keep fine-tuning them. Willow Garage, a company in Menlo Park, California, that is developing its own telepresence robot, found in field studies with local companies that bad driving of the machines has more serious consequences than scratched paintwork. “The pilot tends to feel very embarrassed and laugh,” says Leila Takayama, a researcher at Willow Garage. “That makes people feel that the pilot is goofing around and not taking them seriously.” The company is experimenting with algorithms that take over and steer a person around obstacles, a strategy that Anybots also uses. It’s possible to imagine other technical features that could act as social prosthetics; for example, a robot could be programmed to automatically respect personal space.

Upgrading telepresence robots to make them slicker at navigating the office may actually make the problem worse, though. Nass cites the “uncanny valley” hypothesis, which holds that technologies that too closely imitate human form and behavior can elicit more negative emotional responses than less realistic ones. In other words, adding features that improve these robots’ ability to act human may make them harder to tolerate. To get their machines out of the uncanny valley, engineers will need to make them either incredibly human-seeming or less so than they are now. The former won’t happen any time soon, and the latter seems undesirable for telepresence robots whose reason for being is to fill in for people.

If these robots are to introduce an era of effortless interaction with machines, the changes may have to come from us, not them. “As these machines appear in the workplace, we will see completely new social norms forming around them,” says Takayama. We humans may have to learn to judge people represented by electronic bodies differently from those we can see in the flesh.

Tom Simonite is Technology Review’s IT Editor for Software and Hardware.