Forget the keyboard, mouse, and display as you take this photo tour through the development of surface computing. The desk is the computer.

SURFACE COMPUTING 2008

At the Rio All-Suite Hotel & Casino in Las Vegas, bar customers can use the surfaces of their cocktail tables to chat and flirt with customers at other tables, design and order drinks, and play interactive games with groups of friends. In several AT&T stores, visitors can place a mobile phone model on a table to automatically bring up information about the phone’s features, or place two phones side by side for an instant comparison. And at the Innoventions Dream Home at Disneyland’s Tomorrowland, the kitchen counter recognizes ingredients placed on it and suggests recipes for using them. This is surface computing, circa 2008. The technology, also called tabletop computing, seems to have come out of nowhere, but it has actually been brewing in research laboratories for the past 15 years.

PHOTO: MICROSOFT

MICROSOFT SURFACE

Microsoft made headlines last year when it announced Microsoft Surface, the platform used at the Rio, in the AT&T stores, and at Disneyland. Surface is a bottom-projected display that can recognize multiple simultaneous touches, as well as objects tagged with special bar codes, using cameras beneath the tabletop. Currently, Microsoft sells Surface directly to commercial partners. In the prototype application shown, Surface recognizes that a user has placed a camera phone on top of it and, over a Bluetooth connection, pulls the photos from the phone. The images seem to spill onto the table. The user can resize or crop the photos and then print them locally or order prints from an online photo service.

PHOTO: PERCEPTIVE PIXEL

MAGIC WALL

CNN made the news when it paid more than US $100 000 earlier this year for its first Magic Wall, also called the Multi-Touch Collaboration Wall, from Perceptive Pixel. The news channel used the device throughout the election season. This implementation of surface computing owes its magic to a technique introduced in 2005 by Jefferson Han, a researcher at New York University. His “frustrated total internal reflection” technique senses multiple simultaneous touches using a panel of acrylic, illuminated from the side by LEDs and observed by a video camera. Once the light from the LEDs enters the acrylic panel, it strikes the surface at too shallow an angle to escape and is trapped by total internal reflection. When a finger presses against the acrylic’s surface, however, it “frustrates” the reflection, and light scatters out at the points of contact, creating bright spots that are easy for the camera to detect.
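The trapping condition follows from Snell’s law: light stays inside the acrylic whenever it meets the surface beyond the critical angle. A quick sketch (the refractive index is a typical value for acrylic, not a figure from the article):

```python
import math

# Typical refractive indices (assumed values, not from the article)
N_ACRYLIC = 1.49  # acrylic (PMMA)
N_AIR = 1.00

# Snell's law: total internal reflection occurs when light meets the
# acrylic-air boundary at an angle (measured from the normal) greater
# than the critical angle, so the LED light stays trapped in the panel
# until a touching finger frustrates the reflection.
critical_angle = math.degrees(math.asin(N_AIR / N_ACRYLIC))
print(f"Critical angle for acrylic-air: {critical_angle:.1f} degrees")
```

Light from the edge-mounted LEDs travels nearly parallel to the surface, well beyond this angle, which is why none of it escapes until something touches the panel.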

PHOTO: PIERRE WELLNER

DIGITALDESK

While the theory of surface computing has been around for a long time, the first actual implementation dates back to 1993, when Pierre Wellner, then a researcher at Xerox EuroPARC, introduced the DigitalDesk. By mounting a projector and a video camera above his office desk, Wellner created a hybrid physical/virtual system. For example, he could use his fingertip to “draw” a mark on a piece of paper: the camera observed the motion of Wellner’s finger, and the projector then displayed colored lines along the path his finger had traced, creating the illusion that he was drawing on the paper in ink.

DIAMONDTOUCH

The introduction of DiamondTouch in 2001, by researchers Paul Dietz and Darren Leigh, then with Mitsubishi Electric Research Laboratories in Cambridge, Mass., reinvigorated the field of surface-computing research. The DiamondTouch table was the first surface computer to combine multitouch with user identification. DiamondTouch’s display comes from a top-mounted projector; the table itself contains a grid of antennas. Users sit on chairs with conductive pads connected to the table. When a user touches the DiamondTouch surface, his or her body couples the signal from the transmitters (the antennas in the table) to the receiver (the chair pad), so the table knows not only where a touch occurred but also who made it. The DiamondTouch surface can recognize multiple contact points, such as multiple fingers or hands, from as many as four simultaneous users. (The iPhone, which accepts input from two contact points, later introduced a limited form of multitouch to the consumer market.) The “Lazy Susan” software pictured was developed with a specialized development tool called DiamondSpin.
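The row-and-column antenna scheme can be illustrated with a toy model: because each user has a separate receiver, the signal strengths reported for that receiver attribute every touch to a person, and a touch shows up as a peak in both the row and column profiles. The sketch below is a hypothetical illustration; the function names and values are assumptions, not the actual DiamondTouch software.

```python
# Toy model of DiamondTouch-style sensing: one user's chair receiver
# reports a coupled signal strength for every row and column antenna.

def locate_touch(row_signals, col_signals, threshold=0.5):
    """Return (row, col) of the strongest touch, or None if no antenna
    pair shows coupling above the threshold."""
    r = max(range(len(row_signals)), key=lambda i: row_signals[i])
    c = max(range(len(col_signals)), key=lambda j: col_signals[j])
    if row_signals[r] < threshold or col_signals[c] < threshold:
        return None  # this user is not touching the table
    return (r, c)

# Simulated readings for one user: a finger near row 3, column 5
rows = [0.1, 0.1, 0.2, 0.9, 0.2, 0.1, 0.1, 0.1]
cols = [0.1, 0.1, 0.1, 0.2, 0.3, 0.8, 0.2, 0.1]
print(locate_touch(rows, cols))  # (3, 5)
```

Running the same readout per receiver is what lets the real table separate up to four users’ simultaneous touches.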

PHOTO: DAN MAYNES-AMINZADE

THE ACTUATED WORKBENCH

The Actuated Workbench, developed at the MIT Media Lab in 2002 by students Gian Pangaro, Dan Maynes-Aminzade, and their professor Hiroshi Ishii, can produce physical, rather than merely virtual, output. Their table contains an array of electromagnets that can move pucks on the table’s surface. Such a system could be used for a variety of entertainment applications (your online friend’s chess move is reflected automatically on your physical chessboard, for example) or for education (to illustrate planetary orbits, say).
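The chessboard example hints at how such a magnet array might be driven: at each step, energize the electromagnet adjacent to the puck in the direction of its goal. A hypothetical sketch (the grid model and function names are assumptions, not the lab’s actual control code):

```python
# Hypothetical control loop for an Actuated Workbench-style magnet grid:
# pull the puck one grid cell at a time toward a target position by
# energizing the neighboring electromagnet that lies toward the goal.

def step_toward(puck, target):
    """Return the grid cell of the magnet to energize next."""
    px, py = puck
    tx, ty = target
    dx = (tx > px) - (tx < px)  # sign of the horizontal offset: -1, 0, or 1
    dy = (ty > py) - (ty < py)  # sign of the vertical offset
    return (px + dx, py + dy)

# Mirror a remote chess move: drag the puck from e2 (4, 1) to e4 (4, 3)
pos, goal = (4, 1), (4, 3)
while pos != goal:
    pos = step_toward(pos, goal)  # energize this magnet, puck follows
print(pos)  # (4, 3)
```

The real system interpolates smoothly by varying the strength of several magnets at once; this sketch only captures the cell-by-cell idea.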

PHOTO: YASUAKI KAKEHI

LUMISIGHT

The Lumisight Table, developed in 2004 by Mitsunori Matsushita, Makoto Iida, and Takeshi Ohguro, of NTT Corporation and the University of Tokyo, addressed a key challenge of tabletop computing: users seated in different places around the table view the display from different angles. Lumisight’s base contains a camera (for detecting fingers and objects on the surface of the table) and four projectors, each projecting an image that is “right side up” for one of the table’s sides. Orthogonal layers of an optical film called Lumisty cover the surface of the table. This translucent material lets light pass through only at certain angles, ensuring that only one projector’s image is visible from each of the four sides.

PHOTO: PHILIPS RESEARCH

ENTERTAIBLE

In 2006, Philips announced that it was developing a product called the Entertaible, a touch-sensitive surface aimed at the consumer market that would let users play a variety of board games using special game pieces. That device has yet to reach the market, and the company has stopped promoting the project.

PHOTO: HAO JIANG, DANIEL WIGDOR, CLIFTON FORLINES, AND CHIA SHEN

UBITABLE

Users can interact with surface computers through auxiliary devices, such as laptops, phones, and PDAs. The display on the auxiliary device can convey private or sensitive content to a single user, while group-appropriate content can appear on the tabletop display. Chia Shen and her colleagues at Mitsubishi Electric Research Laboratories, in Cambridge, Mass., have explored auxiliary interactions with surface computers in their UbiTable project, in which two people with laptops collaborate over a tabletop display. Recently, Shen expanded the UbiTable into an interactive room called the WeSpace. People can share data on their laptops with other people in the room, using both a table and a large display wall. Here, three Harvard University astrophysicists discuss radio and IR spectrum images using the WeSpace.

PHOTO: MEREDITH RINGEL MORRIS

LA MESA DE CLASIFICACIÓN

Author Meredith Ringel Morris and her colleagues have developed a tabletop system for foreign language education called La Mesa de Clasificación. This system allows teachers to alter displayed content so as to encourage reticent group members to contribute more to the activity. For example, teachers can move interactive components, like vocabulary-word tiles, closer to certain sides of the table, or display real-time visualizations that reflect how often each group member has interacted with the surface.

PHOTO: PANASONIC

LIFE WALL

Surface computing won’t stop at the tabletop. Any surface—floors, kitchen countertops, car dashboards, or walls, like the Life Wall concept demonstrated by Panasonic in this photo—is potentially an interactive display.

About the Author

Meredith Ringel Morris is a researcher in the Adaptive Systems and Interaction Group at Microsoft Research, where she studies human-computer interaction and computer-supported cooperative work. She is also an affiliate assistant professor of computer science and engineering at the University of Washington. Morris received her Ph.D. in computer science from Stanford University; her dissertation introduced interaction techniques and user-interface designs for computer-augmented tabletops. She may be contacted at merrie@microsoft.com.