The Future Interfaces Group is an interdisciplinary research lab within the Human-Computer Interaction Institute at Carnegie Mellon University. We create new sensing and interface technologies that foster powerful and delightful interactions between humans and computers. These efforts often lie in emerging use modalities, such as wearable computing, augmented reality, smart environments and gestural interfaces.

Desktopography (2017)

Systems for providing mixed physical-virtual interaction on desktop surfaces have been proposed for decades, though no such systems have achieved widespread use. One major factor contributing to this lack of acceptance may be that these systems are not designed for the variety and complexity of actual work surfaces, which are often in flux and cluttered with physical objects. In this project, we use an elicitation study and interviews to synthesize a list of ten interactive behaviors that desk-bound digital interfaces should implement to support responsive cohabitation with physical objects. As a proof of concept, we implemented these interactive behaviors in a working augmented desk system, demonstrating their immediate feasibility.

Synthetic Sensors (2017)

The promise of smart environments and the Internet of Things (IoT) relies on robust sensing of diverse environmental facets. In this work, we explore the notion of general-purpose sensing, wherein a single, highly capable sensor can indirectly monitor a large context, without direct instrumentation of objects. Further, through what we call Synthetic Sensors, we can virtualize raw sensor data into actionable feeds, whilst simultaneously mitigating immediate privacy issues. We deployed our system across multiple environments over many months; the results show the versatility, accuracy and potential of this approach.
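
To make the virtualization step concrete, here is a minimal sketch of how a raw sensor window might be collapsed into a featurized, privacy-mitigating feed. The band count, threshold, and the "faucet running" feed are illustrative assumptions, not the deployed pipeline.

```python
import numpy as np

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a raw sensor window into coarse spectral-band features,
    so only summaries (never raw audio) leave the sensor."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return np.array([band.mean() for band in np.array_split(spectrum, 8)])

def faucet_running(features: np.ndarray, threshold: float = 5.0) -> bool:
    """A toy virtual sensor: broadband energy suggests running water."""
    return float(features.mean()) > threshold

window = 0.5 * np.random.randn(4000)   # stand-in for one window of raw vibration data
print(faucet_running(featurize(window)))
```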

Electrick (2017)

Electrick is a low-cost and versatile sensing technique that enables touch input on a wide variety of objects and surfaces, whether small or large, flat or irregular. This is achieved by using electric field tomography in concert with an electrically conductive material, which can be easily and cheaply added to objects and surfaces. We show that our technique is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding – enabling a wide range of possible uses and outputs. Published at CHI 2017.
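
As rough intuition for the sensing principle (not the tomographic reconstruction used in the actual system), the sketch below localizes a touch by weighting perimeter electrode positions by how strongly their readings deviate from a no-touch baseline; the electrode layout and values are hypothetical.

```python
import numpy as np

# Hypothetical: 8 electrodes around the perimeter of a conductive surface.
electrode_xy = np.array([(np.cos(a), np.sin(a))
                         for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])

def locate_touch(baseline: np.ndarray, touched: np.ndarray) -> np.ndarray:
    """Weighted-centroid localization: electrodes whose readings change the
    most are assumed closest to the touch (a stand-in for full tomography)."""
    delta = np.abs(touched - baseline)
    weights = delta / delta.sum()
    return weights @ electrode_xy                 # (x, y) estimate

baseline = np.ones(8)
touched = baseline.copy()
touched[2] -= 0.3                                 # simulated shunting near electrode 2
print(locate_touch(baseline, touched))
```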

Deus EM Machina (2017)

Homes, offices and many other environments will be increasingly saturated with connected, computational appliances, forming the “Internet of Things” (IoT). At present, most of these devices rely on mechanical inputs, webpages, or smartphone apps for control. We propose an approach where users simply tap a smartphone to an appliance to discover and rapidly utilize contextual functionality. To achieve this, our prototype smartphone recognizes physical contact with uninstrumented appliances, and summons appliance-specific interfaces and contextually relevant functionality. Published at CHI 2017.
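
A minimal sketch of the tap-to-use idea, assuming appliances are identified by matching sensed electromagnetic spectra against stored fingerprints; the fingerprints, appliance names, and interfaces here are invented placeholders.

```python
import numpy as np

# Hypothetical, normalized EM fingerprints for two known appliances.
fingerprints = {name: v / np.linalg.norm(v) for name, v in {
    "thermostat": np.array([0.1, 0.7, 0.2]),
    "coffee_maker": np.array([0.6, 0.1, 0.3]),
}.items()}
interfaces = {
    "thermostat": "open thermostat control panel",
    "coffee_maker": "open brew-strength controls",
}

def on_contact(em_spectrum: np.ndarray) -> str:
    """Nearest-neighbor match of the sensed EM spectrum, then summon the
    matching appliance-specific interface."""
    q = em_spectrum / np.linalg.norm(em_spectrum)
    name = min(fingerprints, key=lambda k: np.linalg.norm(fingerprints[k] - q))
    return interfaces[name]

print(on_contact(np.array([0.12, 0.69, 0.19])))   # -> thermostat controls
```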

DIRECT (2016)

Many research systems have demonstrated that depth cameras, combined with projectors for output, can turn nearly any reasonably flat surface into an ad hoc, touch-sensitive display. However, even with the latest generation of depth cameras, it has been difficult to obtain sufficient sensing fidelity across a table-sized surface to get much beyond a proof-of-concept demonstration. In this research, we present DIRECT, a novel touch-tracking algorithm that merges depth and infrared imagery captured by a commodity sensor. Our results show that our technique boosts touch detection accuracy by 15% and reduces positional error by 55% compared to the next best-performing technique in the literature.
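
The sketch below illustrates one plausible, much-simplified fusion rule in the spirit of DIRECT: depth flags pixels hovering just above a modeled background surface, and bright infrared pixels (fingers reflect IR strongly) confirm them. The thresholds and images are synthetic assumptions, not the published algorithm.

```python
import numpy as np

def detect_touches(depth, ir, background, touch_mm=(3, 15), ir_thresh=120):
    """Toy depth+IR fusion: depth flags pixels hovering just above the surface,
    and bright IR pixels confirm them."""
    height = background - depth                   # mm above the modeled surface
    near_surface = (height > touch_mm[0]) & (height < touch_mm[1])
    return near_surface & (ir > ir_thresh)        # candidate touch pixels

rng = np.random.default_rng(0)
background = np.full((240, 320), 800.0)           # flat table 800 mm from the sensor
depth = background.copy()
depth[100:105, 150:155] -= 8.0                    # a fingertip 8 mm above the table
ir = rng.integers(0, 80, (240, 320))
ir[100:105, 150:155] = 200                        # fingertip appears IR-bright
print(detect_touches(depth, ir, background).sum(), "touch pixels")
```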

CapCam (2016)

We present CapCam, a novel technique that enables smartphones (and similar devices) to establish quick, ad-hoc connections with a host touchscreen device, simply by pressing a device to the screen’s surface. Pairing data, used to bootstrap a conventional wireless connection, is transmitted optically to the phone’s rear camera. This approach utilizes the near-ubiquitous rear camera on smart devices, making it applicable to a wide range of devices, both new and old. CapCam also tracks phones’ physical positions on the host capacitive touchscreen without any instrumentation, enabling a wide range of targeted and spatial interactions.
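
To illustrate the optical channel, here is a toy on-off-keying encoder/decoder, where a bright screen frame under the phone encodes a 1 and a dark frame a 0. The real CapCam scheme is more sophisticated; the frame grouping and modulation here are assumptions.

```python
import numpy as np

def encode(bits, frames_per_bit=2):
    """Render pairing bits as full-screen flashes under the phone:
    bright frame = 1, dark frame = 0 (simple on-off keying)."""
    return [np.full((8, 8, 3), 255 if b else 0, dtype=np.uint8)
            for b in bits for _ in range(frames_per_bit)]

def decode(frames, frames_per_bit=2):
    """Recover the bits from mean camera-frame brightness."""
    levels = [float(f.mean()) for f in frames]
    return [int(np.mean(levels[i:i + frames_per_bit]) > 127)
            for i in range(0, len(levels), frames_per_bit)]

payload = [1, 0, 1, 1, 0, 0, 1, 0]                # e.g. one chunk of a pairing key
print(decode(encode(payload)) == payload)         # -> True
```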

ViBand (2016)

Smartwatches and wearables are unique in that they reside on the body, presenting great potential for always-available input and interaction. Additionally, their position on the wrist makes them ideal for capturing bio-acoustic signals. We developed a custom smartwatch kernel that boosts the sampling rate of a smartwatch’s existing accelerometer, enabling many new applications. For example, we can use bio-acoustic data to classify hand gestures such as flicks, claps, scratches, and taps. Bio-acoustic sensing can also detect the vibrations of grasped mechanical or motor-powered objects, enabling object recognition. Finally, we can generate structured vibrations using a transducer, and show that data can be transmitted through the human body.
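
A minimal sketch of the bio-acoustic gesture pipeline: band-averaged spectra of high-rate accelerometer windows, classified by nearest centroid. The synthetic "flick" and "clap" signals and all parameters are stand-ins for real sensor data.

```python
import numpy as np

def bioacoustic_features(accel, bands=16):
    """Band-averaged magnitude spectrum of one accelerometer window."""
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
    return np.array([b.mean() for b in np.array_split(spectrum, bands)])

def synth(gesture, rng, n=512):
    """Stand-in signals: a flick rings at low frequency; a clap is broadband."""
    if gesture == "flick":
        return np.sin(2 * np.pi * 5 * np.arange(n) / n) + 0.1 * rng.standard_normal(n)
    return rng.standard_normal(n)

rng = np.random.default_rng(1)
centroids = {g: np.mean([bioacoustic_features(synth(g, rng)) for _ in range(10)], axis=0)
             for g in ("flick", "clap")}

def classify(accel):
    """Nearest-centroid gesture recognition over the feature vectors."""
    f = bioacoustic_features(accel)
    return min(centroids, key=lambda g: np.linalg.norm(centroids[g] - f))

print(classify(synth("flick", rng)))              # -> flick
```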

AuraSense (2016)

AuraSense enables rich, around-device, smartwatch interactions using electric field sensing. To explore how this sensing approach could enhance smartwatch interactions, we considered different antenna configurations and how they could enable useful interaction modalities. We identified four configurations that can support six well-known modalities of particular interest and utility, including gestures above the watchface and touchscreen-like finger tracking on the skin. We quantify the feasibility of these input modalities in a series of user studies, which suggest that AuraSense can be low latency and robust across both users and environments.

Tomo 2 (2016)

Electrical Impedance Tomography (EIT) can be used to detect hand gestures using an instrumented smartwatch (see Tomo), demonstrating great promise for non-invasive, high accuracy recognition of gestures for interactive control. In Tomo 2, we introduce a new system that offers improved sampling speed and resolution. This, in turn, enables superior interior reconstruction and gesture recognition. More importantly, we use our new system as a vehicle for experimentation – we compare two EIT sensing methods and three different electrode resolutions. Results from in-depth empirical evaluations and a user study shed light on the future feasibility of EIT for sensing human input.
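
For intuition, the sketch below generates one EIT measurement frame using the common adjacent-drive scheme: current is injected across each neighboring electrode pair while voltages are read at every other adjacent pair. The hardware readout is faked here, and which drive and measurement scheme works best is exactly what our comparison investigates.

```python
import numpy as np

def eit_frame(measure, n=8):
    """One EIT frame with the adjacent-drive scheme: inject current across each
    neighboring electrode pair, read voltages at every other adjacent pair."""
    frame = []
    for drive in range(n):
        src, sink = drive, (drive + 1) % n
        for m in range(n):
            a, b = m, (m + 1) % n
            if len({a, b, src, sink}) < 4:
                continue                          # skip pairs touching the drive
            frame.append(measure(src, sink, a, b))
    return np.array(frame)

fake_readout = lambda s, k, a, b: float(np.hypot(a - s, b - k))   # hardware stand-in
print(len(eit_frame(fake_readout)), "measurements per frame")     # 8 electrodes -> 40
```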

SkinTrack (2016)

SkinTrack enables continuous touch tracking on the skin. It consists of a ring, which emits a continuous high frequency AC signal, and a sensing wristband with multiple electrodes. SkinTrack measures phase differences to compute a 2D finger touch coordinate. Our approach is compact, non-invasive, low-cost and low-powered. We envision the technology being integrated into future smartwatches, supporting rich touch interactions beyond the confines of the small touchscreen.

Zhang, Y., Zhou, J., Laput, G., Harrison, C. SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York.
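
For intuition about the phase-difference computation, here is a minimal sketch for one axis: each electrode's phase is read from the carrier's DFT bin, and the wrapped difference is mapped linearly to a skin coordinate. The carrier frequency, sample rate, and millimeter-per-radian scale are assumed values, not the calibrated system.

```python
import numpy as np

FS, F0 = 1_000_000, 80_000      # sample rate and ring carrier; assumed values

def phase_at(signal, freq=F0, fs=FS):
    """Phase of the carrier within one electrode's signal, read from its DFT bin."""
    return np.angle(np.fft.rfft(signal)[round(freq * len(signal) / fs)])

def finger_x(sig_a, sig_b, mm_per_rad=20.0):
    """Phase difference between two wristband electrodes, mapped linearly to a
    1D skin coordinate (the full system pairs two such axes for 2D tracking)."""
    dphi = np.angle(np.exp(1j * (phase_at(sig_a) - phase_at(sig_b))))
    return dphi * mm_per_rad

t = np.arange(1000) / FS
near = np.sin(2 * np.pi * F0 * t + 0.4)           # simulated electrode signals
far = np.sin(2 * np.pi * F0 * t + 0.1)
print(f"{finger_x(near, far):.1f} mm from the midline")
```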

SweepSense (2016)

We use speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies and measures the intensity of reflected sound to deduce information about the immediate environment, chiefly the materials and geometry of proximate surfaces. We offer several example uses, two of which we implemented as self-contained demos.
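
A toy version of the sensing loop, assuming a 48 kHz audio pipeline: emit a linear sweep through a near-inaudible band, then summarize the reflected intensity per frequency. The reflection is simulated here; on a device, the recorded microphone buffer would take its place.

```python
import numpy as np

FS, LO, HI, DUR = 48_000, 18_000, 22_000, 0.05    # 50 ms near-inaudible sweep

def make_sweep():
    """Linear chirp from LO to HI Hz."""
    t = np.arange(int(FS * DUR)) / FS
    return np.sin(2 * np.pi * (LO * t + (HI - LO) * t**2 / (2 * DUR)))

def reflection_profile(recorded):
    """Per-frequency intensity of the reflected sweep; shifts in this profile
    hint at the materials and geometry of nearby surfaces."""
    spectrum = np.abs(np.fft.rfft(recorded * np.hanning(len(recorded))))
    freqs = np.fft.rfftfreq(len(recorded), 1 / FS)
    band = (freqs >= LO) & (freqs <= HI)
    return freqs[band], spectrum[band]

sweep = make_sweep()
echo = 0.3 * sweep + 0.01 * np.random.randn(len(sweep))   # simulated reflection
freqs, profile = reflection_profile(echo)
print(f"peak reflection near {freqs[profile.argmax()]:.0f} Hz")
```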

FingerPose (2015)

FingerPose is a new method that estimates a finger's angle relative to the screen. Our approach works in tandem with conventional multitouch finger tracking, offering two additional analog degrees of freedom for a single touch point. We prototyped our solution on two platforms, a smartphone and a smartwatch, each fully self-contained and operating in real time.
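
One plausible way to recover such angles (a sketch for illustration, not necessarily the published method) is from the geometry of the capacitive touch blob: the principal axis gives yaw, and elongation serves as a pitch proxy, since a flatter finger produces a longer blob.

```python
import numpy as np

def finger_angles(cap_img):
    """Yaw from the touch blob's principal axis; elongation as a pitch proxy."""
    ys, xs = np.nonzero(cap_img > 0)
    w = cap_img[ys, xs].astype(float)
    mx, my = np.average(xs, weights=w), np.average(ys, weights=w)
    cov = np.cov(np.vstack([xs - mx, ys - my]), aweights=w)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, evals.argmax()]              # principal blob axis
    yaw = np.degrees(np.arctan2(major[1], major[0]))
    elongation = np.sqrt(evals.max() / max(evals.min(), 1e-9))
    return yaw, elongation

img = np.zeros((16, 16))
img[6:9, 3:13] = 1.0                              # elongated horizontal blob
print(finger_angles(img))                         # yaw near 0 deg, high elongation
```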

Gaze+Gesture (2015)

By fusing gaze and gesture into a unified and fluid interaction modality, we can enable rapid, precise and expressive free-space interactions that mirror natural use. Although both approaches are independently poor for pointing tasks, combining them can achieve pointing performance superior to either method alone. This opens new interaction opportunities for gaze and gesture systems alike.
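
The fusion principle reduces to a coarse-to-fine hand-off, sketched below with an illustrative gain: gaze warps the pointer to the region of interest, and relative hand motion refines it.

```python
import numpy as np

def fused_pointer(gaze_xy, hand_delta, gain=0.5):
    """Coarse-to-fine fusion: the cursor warps to the gaze point, then relative
    hand motion refines it (gain is an illustrative tuning constant)."""
    return np.asarray(gaze_xy, float) + gain * np.asarray(hand_delta, float)

print(fused_pointer(gaze_xy=(960, 540), hand_delta=(30, -12)))   # -> [975. 534.]
```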

EM-Sense (2015)

EM-Sense is a sensing technology that allows a smartwatch to recognize what object the user is touching. When the user operates an electrical or electro-mechanical object, its characteristic electromagnetic (EM) signals propagate through the user's body, where they are detected by the watch and used for on-touch object recognition.

Tomo (2015)

Tomo is a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.

3D-Printed Hair (2015)

3D-Printed Hair is a technique for 3D printing hair, fibers and bristles by exploiting the stringing phenomenon inherent in 3D printers that use fused deposition modeling. This technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
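
In G-code terms, a single strand can be produced roughly as follows: deposit a small blob, then make a fast travel move so the cooling filament strings into a fiber. The feed rates and extrusion amounts below are illustrative and untuned for any particular printer.

```python
def hair_gcode(x, y, length_mm=8.0, feed_fast=6000):
    """G-code for one strand: extrude a small anchor blob, then pull away fast
    so the molten filament strings into a fiber. Values are illustrative."""
    return "\n".join([
        "M83",                                    # relative extrusion mode
        f"G1 X{x:.2f} Y{y:.2f} F1200",            # travel to the anchor point
        "G1 E0.30 F120",                          # deposit a small molten blob
        f"G1 X{x + length_mm:.2f} Y{y:.2f} Z2.00 F{feed_fast}",
        # ^ rapid pull-away: stringing draws the blob into a bristle
    ])

print(hair_gcode(10.0, 10.0))
```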

Zensors (2015)

Zensors is a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments.
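
The core loop can be sketched as a confidence-gated hand-off: answer with the learned classifier when it is confident, otherwise ask the crowd and bank the human label for retraining. The components below are toy stand-ins for the real classifier and crowd API.

```python
import random

def zensor_answer(image, classifier, ask_crowd, threshold=0.9, labels=None):
    """Hybrid sensing loop: use the learned classifier when it is confident,
    otherwise fall back to crowd workers and keep their label for retraining."""
    label, confidence = classifier(image)
    if confidence >= threshold:
        return label
    crowd_label = ask_crowd(image)                # human-intelligence fallback
    if labels is not None:
        labels.append((image, crowd_label))       # grows the training set
    return crowd_label

# Toy stand-ins for the real components:
classifier = lambda img: ("dishes in sink", random.random())
ask_crowd = lambda img: "dishes in sink"
print(zensor_answer("frame_001.jpg", classifier, ask_crowd, labels=[]))
```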

Acoustruments (2015)

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring tangible functionality to handheld devices. The operational principles were inspired by wind instruments, which produce expressive musical output despite being simple in physical design. Through a structured exploration, we built an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones' existing audio functionality (the speaker and microphone).

Skin Buttons (2014)

Skin Buttons are tiny projectors, integrated into a smartwatch, that render icons on the user's skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these "skin buttons" can have high touch accuracy and recognizability, while being low cost and power-efficient.

Air+Touch (2014)

Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events.
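
One way to express the segmentation rule is a small event-pairing sketch (our invention for illustration, not the published state machine): a touch designates the target, and in-air motion after lift-off is attributed to that target.

```python
def air_touch_events(stream):
    """Pairing-rule sketch: a touch designates the target; in-air motion between
    that touch's lift-off and the next touch is treated as a gesture on it."""
    target, gesture, lifted = None, [], False
    for kind, data in stream:
        if kind == "touch_down":
            if target is not None and gesture:
                yield target, gesture             # emit the completed air gesture
            target, gesture, lifted = data, [], False
        elif kind == "touch_up":
            lifted = True
        elif kind == "air" and target is not None and lifted:
            gesture.append(data)
    if target is not None and gesture:
        yield target, gesture

stream = [("touch_down", "photo_3"), ("touch_up", None),
          ("air", (0, 1)), ("air", (0, 2)),       # circling gesture in the air
          ("touch_down", "photo_3"), ("touch_up", None)]
print(list(air_touch_events(stream)))             # -> [('photo_3', [(0, 1), (0, 2)])]
```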

Toffee (2014)

Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. This enables radial interactions in an area many times larger than a mobile device.
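
For a single sensor pair, the TDOA-to-bearing math is compact, as sketched below: cross-correlation gives the arrival-time difference, and the far-field relation sin(theta) = c*dt/d gives the angle. The in-surface sound speed and sensor spacing are assumed values.

```python
import numpy as np

C = 1000.0                                        # assumed in-surface sound speed, m/s

def tdoa(sig_a, sig_b, fs):
    """Delay between two acoustic pickups via cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    return (corr.argmax() - (len(sig_b) - 1)) / fs

def bearing(delay, spacing):
    """Far-field bearing from one sensor pair: sin(theta) = c*dt / d."""
    return np.degrees(np.arcsin(np.clip(C * delay / spacing, -1, 1)))

fs, d = 192_000, 0.1                              # sample rate; 10 cm sensor spacing
tap = np.zeros(2048)
tap[400:410] = 1.0                                # impulse-like tap waveform
delayed = np.roll(tap, 12)                        # arrives 12 samples later at sensor B
print(f"{bearing(tdoa(delayed, tap, fs), d):.1f} degrees")
```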

TouchTools (2014)

We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps.

Smartwatch 5DOF (2014)

We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices.

TapSense (2011)

TapSense enables touchscreens to identify how users are touching the screen: with a fingertip, knuckle, or nail, or even a passive stylus. By distinguishing between different parts of the hand, TapSense moves beyond simply counting the number of fingers on the screen, allowing a single touch point to trigger several distinct actions and making touch interaction faster and more expressive.
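
One plausible realization (a sketch, not the shipped classifier) keys on the acoustics of the impact: fingertip pads, knuckles, and nails excite the screen differently, so a spectral-brightness rule can separate them. All thresholds below are illustrative.

```python
import numpy as np

def spectral_centroid(impact, fs=44_100):
    """Brightness of the tap's acoustic impulse."""
    spec = np.abs(np.fft.rfft(impact * np.hanning(len(impact))))
    freqs = np.fft.rfftfreq(len(impact), 1 / fs)
    return float((freqs * spec).sum() / spec.sum())

def classify_tap(impact, fs=44_100):
    """Toy decision rule: soft fingertip pads ring low, knuckles mid,
    fingernails and hard styli high. Thresholds are illustrative."""
    c = spectral_centroid(impact, fs)
    if c < 2_000:
        return "pad"
    if c < 6_000:
        return "knuckle"
    return "nail/stylus"

t = np.arange(256) / 44_100
tap = np.sin(2 * np.pi * 9_000 * t) * np.exp(-t * 400)   # sharp, bright impulse
print(classify_tap(tap))                                  # -> nail/stylus
```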

TeslaTouch (2010)

TeslaTouch brings rich, dynamic physical feedback to otherwise flat, featureless touchscreens. The technology is based on the electrovibration principle, which can programmatically vary the electrostatic friction between fingers and a touch panel. When combined with an interactive graphical display, this approach enables touch experiences with rich textures and physical affordances.
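
As a sketch of the drive-signal side, the function below amplitude- and frequency-modulates a waveform as a function of finger position, so sliding across the "texture" alternates the perceived friction. Values are illustrative and not hardware-safe.

```python
import numpy as np

def friction_waveform(finger_x, fs=8_000, dur=0.05):
    """Drive signal for an electrovibration panel: perceived friction follows
    the modulation, so mapping finger position to amplitude and frequency
    renders a spatial texture of alternating 'grooves'."""
    t = np.arange(int(fs * dur)) / fs
    freq = 80 + 40 * np.sin(finger_x / 10.0)      # position-dependent pitch
    amp = 0.5 + 0.5 * ((finger_x // 25) % 2)      # alternating groove amplitude
    return amp * np.sin(2 * np.pi * freq * t)

print(friction_waveform(finger_x=30)[:4])         # first samples of the drive signal
```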