the story of a Master's Thesis in Interaction Design in Malmö, Sweden

Apple Trackpad (example)

When I talk with people about my “stacking” concept, the new Apple Trackpad comes up in conversation right away. It seems to be an obvious real-world example of stacking interfaces into a single device. I understand why people think of it: Apple’s daring design decision has provoked people to ask, “Do I really need a separate button?” The Apple Trackpad is almost a larger version of the SUI Cube I am now testing: a large button that is touch-sensitive on top! More investigation is necessary, though…

Apple Trackpad

The lower layer of interaction in the trackpad is the button level. When you press down, the result is the same as clicking a mouse or tapping the trackpad: a mouse-click event is triggered. Since a press of the trackpad and a tap on the trackpad are registered as the same event, it could be argued that the button level could be removed from the trackpad with no degradation of interaction. My thought is that the button level was kept in the design to assist on those few occasions when using only a tap makes an interaction more difficult than using a button. Some interactions are more efficient, and more manually feasible, through the addition of physicality than through a string of touch motions that virtually create the same result.

Two ways of dragging an icon across the desktop (a simplified example):

Move a finger to place the cursor on the icon, press down with the left finger, and use the right finger to drag the cursor across the screen.

Or, move a finger to place the cursor on the icon, double-tap but hold the finger down on the second tap, move the finger to drag the cursor across the screen, then tap again to release the double-tap-hold.
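The two methods above can be sketched as streams of primitive input events that an event interpreter collapses into the same high-level drag. This is a minimal illustration only; the event names are my own invention, not Apple’s actual driver API:

```python
# Hypothetical sketch: the two drag methods as primitive event streams.
# Event names ("move", "button_down", ...) are illustrative labels.

def drag_with_button(start, end):
    """Method 1: one finger points while another presses the button level."""
    return [
        ("move", start),        # place cursor on the icon
        ("button_down", None),  # press the physical click level
        ("move", end),          # drag the cursor with the other finger
        ("button_up", None),    # release to drop the icon
    ]

def drag_with_tap(start, end):
    """Method 2: double-tap-and-hold, done with a single finger."""
    return [
        ("move", start),      # place cursor on the icon
        ("tap", None),        # first tap of the double-tap
        ("hold_down", None),  # second tap, held down
        ("move", end),        # drag the cursor
        ("hold_up", None),    # lift the finger to release
    ]

def interpret(events):
    """Collapse a primitive event stream into high-level actions.
    A button press and a tap-and-hold both begin a drag; the
    interpreter does not care which mechanism produced it."""
    actions, cursor, drag_origin = [], None, None
    for kind, pos in events:
        if kind == "move":
            cursor = pos
        elif kind in ("button_down", "hold_down"):
            drag_origin = cursor
        elif kind in ("button_up", "hold_up"):
            actions.append(("drag", drag_origin, cursor))
            drag_origin = None
    return actions
```

Run through the interpreter, both sequences reduce to the same single drag action, which is the point: the button level duplicates what touch alone can already do.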

The button level of the trackpad provides access to a single mouse event: the click.

Apple Trackpad: button level

The upper level of the Trackpad provides a plethora of interactions that go beyond the assumed abilities of a typical trackpad. For full disclosure, I must admit that I do not own an Apple Trackpad and am only describing functionality from what I have witnessed in person or seen online.

All the typical interactions are possible: tap, double-tap, drag. Apple expands on these interactions, though, through the use of a multi-touch trackpad: tap with a second finger while the first finger is already touching for a “right-click”; scroll a webpage with two fingers; swipe four fingers up to hide all open applications; swipe four fingers down to show a small icon of each open application; pinch two fingers to zoom; rotate two fingers to rotate an image; hold your thumb down while moving a finger to perform a click-drag; press with two fingers for a “right-click”; swipe three fingers left or right to move back or forward while surfing the web; swipe four fingers left or right to open the Application Switcher… and on top of all that… you can customize all of these gestures through a preferences pane!
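That gesture set reads like a customizable table mapping touch patterns to commands, which is roughly what the preferences pane exposes. A hypothetical sketch of such a binding table (the gesture and command names are my own labels, not Apple’s internals):

```python
# Hypothetical gesture -> command bindings mirroring the list above.
# A preferences pane would amount to editing entries in this table.
DEFAULT_BINDINGS = {
    ("tap", 2): "right_click",            # second-finger tap (or two-finger press)
    ("scroll", 2): "scroll_page",         # two-finger scroll
    ("swipe_up", 4): "hide_applications",
    ("swipe_down", 4): "show_applications",
    ("pinch", 2): "zoom",
    ("rotate", 2): "rotate_image",
    ("swipe_left", 3): "browser_back",
    ("swipe_right", 3): "browser_forward",
    ("swipe_left", 4): "application_switcher",
}

def dispatch(gesture, finger_count, bindings=DEFAULT_BINDINGS):
    """Look up the command bound to a recognized gesture; unbound
    gestures fall through and do nothing."""
    return bindings.get((gesture, finger_count), "ignore")
```

Customizing a gesture is then just rebinding (or deleting) a key in the table, not adding a new interface.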

Apple Trackpad: touch level

The Apple Trackpad is not a SUI because it does not contain multiple interfaces. This was not an immediate verdict, though. If the definition of “multiple interfaces” were stretched thin, the answer to the multiple-interfaces question could almost be “yes”.

The top layer provides cursor movement control, cursor event commands, and access to a limited set of gesture commands.

The bottom layer provides cursor event commands, too.

Where the definition of “multiple interfaces” could be stretched, and where I think most people become convinced that it is a SUI, is in how gestures are incorporated into the overall functionality of the trackpad. Without the gestures, the Apple Trackpad would, without a doubt, not be a SUI because of the absence of opposed interfaces: the button and the touch-sensitivity both control the same interface. With the inclusion of gestures in the top layer of interactions, the line that separates the interfaces becomes blurred, and the definition of “interface” becomes uncertain.

From my inexperienced understanding of the Apple Trackpad gestures, they are simply shortcuts or “quick keys” for functionality that is attainable by other means. A gesture gives the user a quicker way of performing a function, either by letting the user maintain hand position rather than moving the arm to press a button, or by eliminating multiple cursor movements and click commands. Because of this, I do not consider the gestures to be an interface. I admit, though, that more investigation is necessary…

True, but there is an important difference as I see it, even if subtle.

In the case where you use two hands, one for pointing and one for clicking, or just one hand with the thumb doing the clicking and the index finger doing the pointing, the Apple trackpad is NOT experienced any differently from a conventional trackpad, and is thus not a SUI, because the user’s actions simply mimic a conventional trackpad with separate buttons.

However, with single-finger operation, the button action and the pointing action merge into one single action. In daily use, this makes the interaction far more natural. Whatever my finger happens to be hovering over, if I want to affect it, I just push. Just like in the real world.

It is hard to come up with a good quick analogy, but consider this example:

Say you want to slide a piece of paper across your desk. In real life you just press your finger down and slide it around. Simple and straightforward. This is the way the Apple trackpad is experienced in daily use.

However, if the real world worked like a classic trackpad with separate buttons, it would work like this: although you would be able to slide your finger over the piece of paper, the only way to make it move would be to push the handle of your desk drawer with your other hand.

The technological difference is small, but the difference in experience is very important in this case.

Information

SUIs are multiple inputs, combined or stacked on top of each other, which give you enhanced usability while still allowing each input to be used independently. This site is the story of my Master's Thesis, which aims to prove this framework.