The Next Great Interface

If you want to understand how augmented reality (AR) might be used in robotics, you only need to download Iron Man 2 and watch the scene where the visor of the film’s robotic uber-exoskeleton projects background information about the people within its view. (We’re guessing the data came from matching the subjects’ facial structure with personal profiles stored in a database.) Alternatively, check out the AR.Drone flying video game, in which players use their iPhones or other mobile devices to control small helicopter-like drones, essentially flying robots via their smartphones. Cameras on the aircraft send images back to the iPhone, and the game application lets users interactively conduct an air battle, shooting virtual missiles at each other’s aircraft and producing virtual explosions superimposed on the phone’s onscreen video display.

AR, of course, has been around for years. In essence, the technology consists of real life (or an image of real life, such as a camera image) overlaid with one or more types of virtual items. You could define it as a mechanism that adds aspects of the virtual world to the real world. More precisely, it adds virtual sensory input, whether visual, auditory, or haptic (touch/force), to the user’s experience of the real world. In an often-cited 1997 paper, “A Survey of Augmented Reality,” Ronald Azuma defined AR as having three characteristics: it combines the real and virtual; it is interactive in real time; and it registers (aligns) the real and the virtual in three dimensions.

However you define it, AR has become a ubiquitous presence of late. And that’s especially true in the case of televised sports events, where it’s used to help viewers see and better understand nuances of the competition. In a televised football game, for example, AR provides the yellow first-down line that shows up onscreen as if painted on the field. Likewise, in auto racing, information about the vehicles and drivers appears on the screen as a kind of cartoon caption hovering above each moving car.

Mobile communication devices such as iPhones have given rise to still more AR applications. For instance, a person can point the device’s camera at a street scene and bring up text on the display in the appropriate places, thus relaying information about the restaurants, museums, or other sites of interest within the camera’s view.

AR as an Interface

More recently, AR is finding uses within the field of robotics, often as a way to facilitate interaction between human and robot. For example, AR can give a human operator information about and insight into a robot’s perception of its environment, as well as previews of its intended motions. (Think of a virtual line that indicates the robot’s intended path through an office cubicle maze.) Additionally, AR makes it easy to convey data about the robot’s internal state, such as battery charge, motor speed, or server status. Displaying such information atop a robot’s camera-eye view allows the operator to stay focused on the machine’s movements.
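The status-overlay idea can be sketched in a few lines. This is a minimal illustration, not any real robot’s API: the RobotStatus fields and the corner anchor positions are assumptions chosen for the example.

```python
# Sketch: compose a heads-up status overlay for a robot's camera view.
# The status fields and screen anchors are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RobotStatus:
    battery_pct: float
    motor_rpm: int
    server_ok: bool

def status_overlay(status, frame_w=640, frame_h=480):
    """Return (x, y, text) triples to draw atop the camera frame,
    pinned near the corners so they don't obscure the robot's view."""
    margin = 10
    return [
        (margin, margin, f"BAT {status.battery_pct:.0f}%"),
        (margin, frame_h - margin, f"MOTOR {status.motor_rpm} rpm"),
        (frame_w - margin, margin, "SRV OK" if status.server_ok else "SRV DOWN"),
    ]

overlay = status_overlay(RobotStatus(battery_pct=87, motor_rpm=1200, server_ok=True))
```

A rendering library would then draw each text item at its (x, y) position over the live video frame.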

At least one study (“Augmented Reality Used for a Remote Robot Control,” Albeanu et al., 2008) shows how AR could be used to boost a robot’s performance when controlled by a remote human operator via the Internet. The operator received the robot’s two-dimensional camera view of items lying on a surface. The operator then applied three-dimensional wire-form models to the items in view and sent the data back to the robot. Using the models, the robot was able to grasp the items efficiently and complete tasks more quickly than it could by operating autonomously.
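The annotate-and-grasp loop from the Albeanu study can be sketched as follows. This is a loose, hypothetical rendering of the idea, not the paper’s implementation: the box model, its fields, and the grasp heuristic are all illustrative assumptions.

```python
# Sketch: the operator fits a simple wire-frame model (here, a box) to an
# item in the 2-D camera view and sends its pose back; the robot derives a
# grasp point from the model. All names and the heuristic are assumptions.

from dataclasses import dataclass

@dataclass
class BoxModel:
    x: float   # centre of the box on the work surface, metres
    y: float
    w: float   # box dimensions, metres
    d: float
    h: float

def grasp_point(model: BoxModel):
    """Naive heuristic: grasp the box at the centre of its top face."""
    return (model.x, model.y, model.h)

# Operator fits a 5 x 5 x 10 cm box model to an item seen at (0.30, 0.15) m.
fitted = BoxModel(x=0.30, y=0.15, w=0.05, d=0.05, h=0.10)
print(grasp_point(fitted))  # -> (0.3, 0.15, 0.1)
```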

AR- and Robot-Assisted Surgery

Robot-assisted surgery allows a surgeon to control a surgical robot, often using a real-time video view of the surgery site. When the surgical instruments and the structures to be treated are visible on video, the process is relatively straightforward. When they are not, however, augmented reality can provide much-needed guidance to the surgeon.

One robot-assisted surgical technique, for example, uses haptic (touch/force) AR to guide a surgeon in preparing knee joints to receive titanium implants, a procedure used instead of the more traumatic full-knee replacement. The assistive system employs a robotic arm from Barrett Technology Inc., based in Cambridge, Mass., programmed to allow the surgeon to remove the diseased bone unimpeded. As the surgical instrument reaches the edge of the planned cavity for the implant, the arm creates a virtual force field that the surgeon can use as a guide to forming a cavity of the correct size and shape to ensure a good fit with the implant.
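The virtual force field behaves much like a one-sided spring: free motion inside the planned cavity, a restoring force past its edge. Here is a one-dimensional sketch of that behavior; the stiffness value and the 1-D geometry are illustrative assumptions, not Barrett’s implementation.

```python
# Sketch of a haptic "virtual force field" at a cavity boundary:
# inside the planned cavity the tool moves freely; past the boundary
# the arm pushes back with a spring-like force. Values are illustrative.

def boundary_force(tool_pos, boundary, stiffness=2000.0):
    """Return the restoring force (N) applied to the surgeon's tool.

    tool_pos, boundary: positions along one axis, in metres; positions
    past `boundary` are outside the planned cavity.
    """
    penetration = tool_pos - boundary
    if penetration <= 0.0:
        return 0.0                    # inside the cavity: free motion
    return -stiffness * penetration   # outside: spring force pushing back

print(boundary_force(0.009, 0.010))   # inside the cavity: no force
print(boundary_force(0.012, 0.010))   # 2 mm past the edge: about -4 N
```

In a real system this force would be rendered by the robot arm’s actuators many hundreds of times per second, so the boundary feels like a solid wall.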

New robot-assisted procedures will undoubtedly make use of AR. As the authors of one study, “Design and Development of an Augmented Reality Robotic System for Large Tumor Ablation” (Yang et al., 2009), noted, “With the advancement in computer-based medicine, accurate diagnosis and precise pre-operative plans are available. However, effective and consistent intra-operative execution remains a challenge.”

The study investigated a robotic system using augmented reality in radio-frequency ablation (destruction) of liver tumors. At each location where it is inserted, an RF probe can ablate a certain volume of tissue. To treat a tumor of significant size, the surgeon must insert the probe correctly in a series of locations to produce overlapping ablation volumes. A robotic needle-insertion device was developed, as well as an AR system that projects a view of the liver and the tumor onto the patient’s abdomen, helping the surgeon visualize the required insertion locations. The paper concluded, “This complements the visual capability of the surgeons. Coupled with [a] robot-assisted surgical system, [an] operation can be executed with more consistency according to the surgical plan.”
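A back-of-envelope calculation shows why planning overlapping ablation volumes matters. This sketch is not the paper’s planning algorithm, just an illustrative estimate along one axis, assuming spherical ablation volumes placed with a fixed overlap.

```python
# Sketch: estimate how many probe insertions are needed along one axis of
# a tumor, if each insertion ablates a sphere of a given radius and
# adjacent spheres must overlap. An illustrative estimate, not a planner.

import math

def insertions_along_axis(tumor_length_mm, ablation_radius_mm, overlap_mm=5.0):
    """Place spheres of diameter 2r so neighbours overlap by overlap_mm."""
    spacing = 2 * ablation_radius_mm - overlap_mm
    return max(1, math.ceil(tumor_length_mm / spacing))

# A 60 mm tumor, a probe that ablates a 15 mm-radius sphere:
print(insertions_along_axis(60, 15))  # -> 3 insertions along this axis
```

Repeating the estimate along all three axes quickly multiplies into many precisely placed insertions, which is exactly where the projected AR view and robotic needle placement help.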

AR in Robot Development and Collaboration

Even in instances where AR is not part of the final robotic system, it can help streamline the development, testing, and debugging process, as shown by such studies as “Augmented reality visualisation for mobile robot developers” (Collett, thesis 2007, University of Auckland) and “Augmented Reality for Robot Development and Experimentation” (Stilman et al., 2005). For example, AR can provide designers with a robot’s-eye view of the environment superimposed on a real image, in order to demonstrate how well the robot’s sensor input reflects the real world. The Stilman paper states that the use of AR “helps resolve ambiguities regarding the source of experimental failures by precisely identifying the locations of the obstacles and the robot” during testing.

Similarly, AR can facilitate the testing of mobile robots by creating a partially simulated test environment, with either a virtual environment and a real robot, or a real environment and a simulated robot. This so-called semi-simulation technique allows a limitless variety of test cases, while preventing collision damage to the actual robot or anything in its path.
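One way to picture semi-simulation: a real robot drives in an empty room while virtual obstacles are injected into its range-sensor readings, so collision behavior can be tested without risking hardware. The sketch below uses a deliberately simple one-ray model; the fusion rule and names are assumptions for illustration.

```python
# Sketch of semi-simulation: fuse a real range reading with ranges to
# virtual obstacles along the same sensor ray. The nearest hit, real or
# virtual, wins, so virtual obstacles "appear" in the robot's scan.

def augmented_range(real_range_m, virtual_obstacle_ranges_m):
    """Return the range the robot's controller should see on this ray."""
    return min([real_range_m] + list(virtual_obstacle_ranges_m))

# Real sensor sees a wall 4.0 m away; a virtual crate is injected at 1.5 m.
print(augmented_range(4.0, [1.5, 6.0]))  # -> 1.5
```

A full implementation would apply this per ray of a laser scan and also handle the reverse case, a simulated robot sensing a real environment.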

AR can also assist researchers seeking to enhance collaboration between robots and humans, where the human does what humans are good at (making judgments and decisions, dealing with unfamiliar situations) and the robot does what robots are good at (repetitive tasks, or venturing where humans can’t). With the appropriate systems and software, a robot may propose an action, such as a path through obstacles, and the human evaluates the proposed action and either tells the robot to go ahead or suggests an alternative.
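The propose-review-execute loop can be sketched as a small protocol. Everything here, the function names and the toy waypoint paths, is hypothetical, meant only to show the division of labor the article describes.

```python
# Sketch of a human-robot negotiation loop: the robot proposes a path,
# the human either approves it (None) or supplies an alternative, and
# the agreed path is what gets executed. Names are illustrative.

def negotiate_path(propose, review):
    """Return the path to execute after human review."""
    proposed = propose()
    decision = review(proposed)   # None = approve; otherwise an alternative
    return proposed if decision is None else decision

# Toy run: robot proposes a straight route; human reroutes around waypoint B.
robot_plan = lambda: ["A", "B", "C"]
human_review = lambda path: ["A", "D", "C"] if "B" in path else None
print(negotiate_path(robot_plan, human_review))  # -> ['A', 'D', 'C']
```

In an AR system, `review` would be the human inspecting the proposed path drawn over the camera view and dragging waypoints to adjust it.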

AR-Robotics Challenges and Needs

New applications for AR in the robotics world will continue to be developed, limited only by the imagination. In any system where people control robots, for example, AR-facilitated communication and collaboration between human and robot could significantly improve performance and aid in error correction and avoidance.

Nevertheless, a perennial challenge for AR is registration of the virtual image on the real-world image. Even small misalignments or delays in movement can seriously degrade the usefulness of an AR view. Improved hardware is needed, such as 3-D cameras and other sensors that locate and orient the robot in space and detect features of the environment in three dimensions. Improved image-registration software will enable continued development, and improved display options, whether screen-based or head-mounted, will help make the AR view more realistic.
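A simple pinhole-camera calculation illustrates why registration is so sensitive. Here, a small error in the estimated camera pose visibly shifts where a virtual overlay lands on screen; the focal length and geometry are illustrative assumptions.

```python
# Sketch: with a pinhole camera model, a 1 cm error in the estimated pose
# of a point 2 m away shifts its projected overlay by several pixels.
# Focal length and coordinates are illustrative.

def project(point_xyz, focal_px=800.0):
    """Pinhole projection of a camera-frame 3-D point (metres) to pixel
    offsets from the image centre."""
    x, y, z = point_xyz
    return (focal_px * x / z, focal_px * y / z)

true_px = project((0.10, 0.0, 2.0))   # where the overlay should land
est_px = project((0.11, 0.0, 2.0))    # same point, 1 cm pose error
print(est_px[0] - true_px[0])         # overlay is offset by ~4 pixels
```

A 4-pixel offset is enough to make an annotation point at the wrong object, which is why accurate sensing and fast image registration matter so much.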

AR-assisted testing and debugging will offer economical ways to complete these aspects of the robot development process. Semi-simulation software tools and associated sensors and displays will be needed.

While the AR currently used most often is visual, in applications relating to robots, novel haptics applications can provide additional capability, enabled by sensors, transducers, and actuators that give and receive touch and force feedback.

The Many Flavors of Augmented Reality

Here are just a few of the ways people can experience AR:

Viewing on a computer monitor an image of a real environment with virtual objects superimposed on the real-world image.

Viewing the real world through a head-mounted display with a transparent visor through which the user can see the real world. Images of virtual entities are projected onto the visor so they align with the appropriate locations in the real world.

Viewing the real world directly, with virtual images projected on environmental surfaces.