Created: 16 August 2010

In recent years, surface computing has picked up a lot of steam. "Surface computing" is a broad term promoted by proponents of large-screen interactive display technologies. Several such technologies have been developed, including Perceptive Pixel from Jeff Han (NYU), reacTIVision from the Reactable team, Touchlib and OpenTouch from the NUI group, and Microsoft Surface from Microsoft Research.

Although not all of these systems are open or have an openly available SDK, reacTIVision and OpenTouch are both open source solutions built on similar architectures. Both use a client/server design: the server is a computer vision engine that detects fingertips or objects placed on the surface, and the clients are the interactive applications that employ the multi-touch and object-sensing interactions. The server communicates with the clients using the Tangible User Interface Object (TUIO) protocol.

However, as it turns out, TUIO is a very simple UDP-based protocol, and Flash cannot open UDP sockets. So if you wish to create responsive applications using Adobe Flash Professional or ActionScript 3.0, you need a bridge that reads this UDP socket and relays the data over a TCP connection. Since TUIO is based on the OpenSound Control (OSC) protocol, we make use of flosc (Flash OSC), a small Java server that converts OSC into a TCP-based XML socket, which can be read easily by the XMLSocket class in Flash. If this doesn't make much sense to you, just remember that you need to run flosc so that your applications can receive the multi-touch events pushed by the computer vision engine.
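To make the bridge concrete, here is a minimal sketch of the client-side parsing of the XML packets flosc relays. I'm using TypeScript as a stand-in for the equivalent ActionScript 3 logic; the OSCPACKET/MESSAGE/ARGUMENT shape follows flosc's output as I recall it, and the regex-based parsing is an illustrative simplification of what an XMLSocket client would do:

```typescript
// Parse a flosc-style XML packet into OSC messages. This is a sketch,
// not a complete flosc client: no socket handling, no type coercion.

interface OscArgument { type: string; value: string; }
interface OscMessage { name: string; args: OscArgument[]; }

function parseFloscPacket(xml: string): OscMessage[] {
  const messages: OscMessage[] = [];
  const msgRe = /<MESSAGE NAME="([^"]+)">([\s\S]*?)<\/MESSAGE>/g;
  const argRe = /<ARGUMENT TYPE="([^"]+)" VALUE="([^"]*)"\s*\/>/g;
  let m: RegExpExecArray | null;
  while ((m = msgRe.exec(xml)) !== null) {
    const args: OscArgument[] = [];
    let a: RegExpExecArray | null;
    while ((a = argRe.exec(m[2])) !== null) {
      args.push({ type: a[1], value: a[2] });
    }
    messages.push({ name: m[1], args });
  }
  return messages;
}

// Example: a TUIO object "set" message, as flosc might relay it.
const sample =
  '<OSCPACKET TIME="0" ADDRESS="127.0.0.1" PORT="3333">' +
  '<MESSAGE NAME="/tuio/2Dobj">' +
  '<ARGUMENT TYPE="s" VALUE="set"/>' +   // command
  '<ARGUMENT TYPE="i" VALUE="12"/>' +    // session ID
  '<ARGUMENT TYPE="f" VALUE="0.25"/>' +  // normalized x
  '<ARGUMENT TYPE="f" VALUE="0.75"/>' +  // normalized y
  '</MESSAGE></OSCPACKET>';
```

In a real Flash client you would feed each XMLSocket frame into a parser like this and then hand the decoded TUIO messages to your event-dispatching layer.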

It's that simple to set up your own surface computing platform based on Flash. It's quite easy to create small applications with this design, and most of the applications that you find online are based on this architecture. At Synlab at Georgia Tech, we have two large (55-inch diagonal) reacTIVision setups where many student enthusiasts create interactive surface applications on different platforms. With a diverse pool of designers and programmers working at our lab, we have seen certain UI and programming design guidelines evolve over the last year that may be useful to everyone. Our surface is quite large, so multiple users can work on it at once. Also, since we primarily use reacTIVision at Synlab, our projects lean more toward object sensing than toward touch gestures. Here are some of the guidelines:

Do not simply adapt your mouse-click based application to a touch-based application.

To provide a natural user interface, your application should recognize gestures, which are composed of multiple touch events.

Gestures have a dynamic nature. Users may begin with one gesture and end up with another. Try to choose the gestures that suit your application best.

Object recognition can be effectively utilized in conjunction with touch sensing and gesture recognition.

It is necessary to create UI components that allow for concurrent use by multiple users.

UI components must be able to listen dynamically to different custom events, depending on the current interaction.
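The concurrency guideline above can be sketched as a component that tracks every active touch by its session ID instead of assuming a single cursor. TypeScript stands in for ActionScript 3 here, and the class and method names are my own illustrative assumptions, not part of any TUIO library:

```typescript
// A button-like component that supports concurrent users: each touch is
// keyed by its TUIO session ID, so one user lifting a finger does not
// disturb another user's interaction with the same component.

interface TouchPoint { sessionId: number; x: number; y: number; }

class MultiTouchButton {
  private active = new Map<number, TouchPoint>();

  touchDown(p: TouchPoint): void { this.active.set(p.sessionId, p); }

  touchMove(p: TouchPoint): void {
    if (this.active.has(p.sessionId)) this.active.set(p.sessionId, p);
  }

  touchUp(sessionId: number): void { this.active.delete(sessionId); }

  // The component stays "pressed" as long as any user is touching it.
  get pressed(): boolean { return this.active.size > 0; }
}
```

The same per-session bookkeeping generalizes to sliders, draggable items, and any other component two users might grab at once.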

We can see that a gesture recognition framework would come in handy in our quest for making scalable interactive applications that harness multi-touch events (from multiple concurrent users). One simple model is to create a small library of touch, multi-touch, object sensing, and basic gesture events that are dispatched by manager classes.
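One way to sketch this manager-class model: raw touch data flows into a manager, which recognizes a basic gesture and dispatches a higher-level event to registered listeners. Again TypeScript stands in for ActionScript 3, and the event name and threshold are assumptions of mine rather than anything from our library:

```typescript
// A tiny gesture manager in the AS3 EventDispatcher style: clients
// register listeners by event name; the manager turns low-level touch
// measurements into a "pinch" gesture event.

type Listener = (detail: { scale: number }) => void;

class GestureManager {
  private listeners = new Map<string, Listener[]>();

  addEventListener(type: string, fn: Listener): void {
    const fns = this.listeners.get(type) ?? [];
    fns.push(fn);
    this.listeners.set(type, fns);
  }

  // Feed two successive distances between a pair of touch points;
  // dispatch "pinch" when the distance changes noticeably (threshold
  // of 5% is an arbitrary illustrative choice).
  update(prevDist: number, currDist: number): void {
    const scale = currDist / prevDist;
    if (Math.abs(scale - 1) > 0.05) {
      for (const fn of this.listeners.get("pinch") ?? []) fn({ scale });
    }
  }
}
```

The point of the design is that application sprites never touch raw TUIO data; they only subscribe to the gesture events they care about.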

Finally, as an example to wrap up this discussion and to help you get started, let me briefly discuss a simple application that recognizes when two objects on the surface move closer together or farther apart, and performs some action based on this tangible interaction. The two objects can be real-world objects, such as mobile phones, playing cards, notebooks, or coffee mugs placed on the surface.

The code in this project can help you create interactions between these fiducial-tagged objects as they are sensed on the surface. You can view the source code for this example project at http://www.manvesh.net/projects/ttt/tuioPatterns/srcview/. The project contains a singleton TUIO parser class, a PatternEventsManager class that dispatches custom pattern events, and the example test Sprite class, which uses one of these pattern events (ProximityEvent) to do something cool. Here is the functionality of each class:

sTUIO – This is a singleton class that reads the XML socket and parses the TUIO protocol messages. It dispatches ObjEvent, which carries data analogous to the TUIO messages: the current list of objects placed on the surface, the list of objects removed, and the spatial details of each object.

PatternEventsManager – This class is instantiated with the IDs of the objects placed on the surface. It listens to the ObjEvents and dispatches specific pattern events such as ZoomEvent or ProximityEvent. ProximityEvent, for example, contains information about the distance between two objects, and ZoomEvent contains ratios describing the change in that distance over consecutive frames.

TUIO_Objects – This is the test class that shows how to use the above two classes to utilize the patterns.
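To illustrate the kind of computation behind these pattern events: the class and event names above come from the project, but the implementation below is my own sketch (in TypeScript rather than ActionScript 3), with made-up coordinates. A ProximityEvent would carry the distance between two objects, and a ZoomEvent the ratio of that distance across consecutive frames:

```typescript
// Sketch of the geometry a PatternEventsManager might compute from the
// object positions reported by sTUIO (coordinates normalized to 0..1).

interface SurfaceObject { id: number; x: number; y: number; }

// Distance between two tracked objects — the payload of a ProximityEvent.
function distance(a: SurfaceObject, b: SurfaceObject): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Ratio of distances across frames — the payload of a ZoomEvent.
// Ratio > 1 means the objects moved apart; < 1 means they moved closer.
function zoomRatio(prevDist: number, currDist: number): number {
  return currDist / prevDist;
}

// Two frames of illustrative data: object 2 moves away from object 1.
const frame1 = { a: { id: 1, x: 0.2, y: 0.2 }, b: { id: 2, x: 0.5, y: 0.6 } };
const frame2 = { a: { id: 1, x: 0.2, y: 0.2 }, b: { id: 2, x: 0.8, y: 1.0 } };

const d1 = distance(frame1.a, frame1.b); // ≈ 0.5
const d2 = distance(frame2.a, frame2.b); // ≈ 1.0
```

Here zoomRatio(d1, d2) is about 2, which the test Sprite could interpret as the two coffee mugs moving apart and, say, zoom its content accordingly.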

At Synlab, we are building a growing library of patterns and gestures that you can use to define interactions and build exciting applications. If you are interested in contributing, e-mail me at manvesh.vyas@gatech.edu.