Designing for Interaction

Saffer, Dan. Designing for Interaction, Second Edition: Creating Innovative Applications and Devices. New Riders, Berkeley.


Interaction modeling is a good way to identify and locate usability issues with the use of a tool. Modeling techniques are prescriptive in that they aim to capture what users will likely do, not descriptive of what users actually did.

Most methods—descriptive or prescriptive—fail to incorporate the relationship between user actions and cognitive processes. Models of cognitive processing, for example, might attempt to explain how or why a particular task is mentally difficult, yet the difficulty does not directly relate to observable user actions. Conversely, descriptive techniques such as path metrics, click-stream analysis, and breadcrumb tracking take little or no account of the cognitive processes that lead to those user actions.

The relationship between actions and cognitive processes is important because it explains user behavior and translates into supportive arguments for good design solutions. Both prescriptive and descriptive techniques are necessary for characterizing the relationship between cognitive processing and user action (the cog-action relationship). Collecting and reporting user data without presenting cog-action relationships results in poor problem definitions. In fact, practitioners often present no relationship between observed user behavior and the claim that there is a problem.

Further, the relationship between an identified problem and the solution to fix it is often not provided.

These criteria should be expressed as assumptions about user processes and behaviors.

Creating the PIM, the process you would like the user to follow, specifies success criteria. Interaction models are a great tool for this endeavor and have enjoyed many published successes (for a good review of case studies, see Fischer). There are three important things to remember about PIMs: the PIM is created by the designer; it should be based on task requirements, not functional requirements; and it exists in the system, not in the head of the user. Interaction models are typically quantitative frameworks requiring statistical validation. The idea is to establish the PIM as a type of hypothesis or intended goal of development; the method presented here is a structured approach for handling that hypothesis based on observation and theory.

Here is a simple example for retrieving a phone number from a contact database (see figure). [Figure: PIM for a phone number retrieval interface of a contact database application] PIM entities are the decision (diamond), cognitive process (rounded box), action (square box), and system signal (circle).

The model ends with a special action entity represented by a rhombus shape. The PIM starts with a decision ("get?"). It also cues the modeler to encode a cognitive state, either a decision or a mental process, on either side of an action.

If multiple outcomes of a decision are necessary, consider using sub-decisions. For example, you might have a decision structure that looks like Figure 2. The granularity of the model detail can be determined based on the needs and constraints of the system. Perhaps a higher-level model of the contact number example would suffice. Models can be high level and need not articulate procedural, physical actions. Frequently on projects, the PIM has been loosely established and exists in some unarticulated form.
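As a concrete (if informal) sketch, a PIM like the one described above can be encoded as a small typed graph: nodes carry the entity kind from the notation (decision, cognitive process, action, system signal) and edges carry transitions. The node names below are illustrative assumptions, not taken from the actual figure:

```python
from dataclasses import dataclass, field

# Entity kinds from the PIM notation: decision (diamond), cognitive
# process (rounded box), action (square box), system signal (circle).
KINDS = {"decision", "process", "action", "signal"}

@dataclass
class Node:
    name: str
    kind: str  # one of KINDS

@dataclass
class PIM:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, dst, label) triples

    def add(self, name, kind):
        assert kind in KINDS
        self.nodes[name] = Node(name, kind)

    def link(self, src, dst, label=""):
        self.edges.append((src, dst, label))

    def path(self, start):
        """Follow the first outgoing edge from each node (the happy path)."""
        seq, cur = [start], start
        while True:
            nxt = [dst for src, dst, _ in self.edges if src == cur]
            if not nxt:
                return seq
            cur = nxt[0]
            seq.append(cur)

# Hypothetical contact-database example (step names are invented).
pim = PIM()
pim.add("get number?", "decision")
pim.add("recall contact name", "process")
pim.add("type name in search box", "action")
pim.add("results displayed", "signal")
pim.add("read number", "action")
for a, b in [("get number?", "recall contact name"),
             ("recall contact name", "type name in search box"),
             ("type name in search box", "results displayed"),
             ("results displayed", "read number")]:
    pim.link(a, b)

print(pim.path("get number?"))
```

A structure like this keeps the decision/process/action/signal distinction explicit, which is the property the diagram notation is meant to enforce.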

The PIM can be difficult to construct from such varied sources. However, completing it makes assumptions about preferred interaction explicit and testable. Clearly defined, testable assumptions are a necessity for this line of work. User state-trace analysis: the method results in many interesting metrics and affordances. Collecting data on the UIM is somewhat similar to state-trace analysis yet differs in important ways.

The UIM is collected under actual conditions, or as close as possible to actual conditions, and is then compared to the PIM. Rather than trying to perform traditional state-trace analysis, user state-trace analysis focuses on the goal of the method. Here, we wish to capture qualitative behavioral data while observing users as they transition from cognitive states to action states.
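Mechanically, that comparison is a sequence-alignment problem: line up the observed trace against the preferred one and report where they diverge. A minimal sketch using Python's standard difflib, with invented step names standing in for coded states:

```python
from difflib import SequenceMatcher

# Preferred path from the PIM and the observed path coded from a UIM.
# Step names are illustrative only.
pim_path = ["decide to get number", "recall contact name",
            "type name in search box", "read number"]
uim_path = ["decide to get number", "recall contact name",
            "scroll full contact list", "scroll full contact list",
            "read number"]

def departures(pim, uim):
    """Return the spans where the observed trace departs from the PIM."""
    sm = SequenceMatcher(a=pim, b=uim)
    out = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            out.append((tag, pim[i1:i2], uim[j1:j2]))
    return out

for tag, expected, observed in departures(pim_path, uim_path):
    print(tag, "| expected:", expected, "| observed:", observed)
```

Here the alignment reports one "replace" span: the user scrolled the full list twice instead of typing into the search box, which is exactly the kind of departure the method asks you to explain.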

Creating the UIM: User state-trace analysis is a type of coding that allows a researcher to trace the path of behavioral and cognitive states the user exhibits while completing a task. This type of analysis has some caveats. First, real-time coding (i.e., coding the session as it happens) is difficult. The user might transition into states that are not well defined in terms of the task. The best practice is to videotape the study session and review it directly afterwards.

Expect upward of several times the video session length to complete a full-blown, accurate state-trace. Although a trace can be completed in an hour or less, plenty of extra time is spent determining salient user actions, arguing interpretations, and refining the complete trace model. As is the case in most endeavors, involving more decision makers equates to more time spent. If the task seems daunting, however, try restricting your level of trace detail to high-level cognitive processes and actions and using the trace as an exploratory tool.

This approach will drastically speed up the process. When you start a trace diagram, it is a good idea to use the PIM as a reference point for your coding. Have a printout of the PIM on a data collection sheet in order to take notes on top of it. Above all else, be honest while collecting data. Be aware that there is a tendency to establish an entire process-action-process relationship before writing anything down.

Instead, try to first recognize and label a few states and actions on your data collection sheet. The representation in Figure 4 was obtained from a usability study of an interaction modeling tool.
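One lightweight way to keep the coding honest is to record each observation as a timestamped (state, kind) tuple as it is labeled, and derive structure only afterwards. The states, kinds, and times below are invented for illustration:

```python
# Each coded observation is (timestamp_seconds, state_name, kind).
# Kinds mirror the PIM notation: decision, process, action, signal.
uim_trace = [
    (0, "get number?", "decision"),
    (4, "recall contact name", "process"),
    (9, "scroll full contact list", "action"),   # departs from the PIM here
    (21, "scroll full contact list", "action"),
    (30, "read number", "action"),
]

def collapse(trace):
    """Merge consecutive repeats of the same state into one visit."""
    out = []
    for _, state, kind in trace:
        if not out or out[-1] != (state, kind):
            out.append((state, kind))
    return out

def time_in_state(trace, end_time):
    """Rough dwell time per state, derived from consecutive timestamps."""
    times = {}
    for (t, state, _), (t2, _, _) in zip(trace, trace[1:] + [(end_time, None, None)]):
        times[state] = times.get(state, 0) + (t2 - t)
    return times

print(collapse(uim_trace))
print(time_in_state(uim_trace, end_time=35))
```

Deriving the collapsed path and per-state dwell times after the fact, rather than while observing, matches the advice above: label a few states first, interpret later.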

An analyst was asked to review a transcribed user study and assign models of decision making to various text passages. Obtaining measurements from a user state-trace can result in a valuable dataset that reveals interesting patterns and trends. User state-trace analysis, however, is not a means of drawing inferences from these patterns nor is it a method of interpretation. A user state-trace reveals how the user performed the tasks, not why. The architecture of processes and actions exhibited by the user is generated by a cognitive mechanism the user engaged to deal with the task they were given.

A better understanding of the underlying problem-solving and decision-making mechanisms will explain observed actions. These tend to fall into four basic classes (Lovett). An example of a rule-based mechanism is the satisficing model of decision making (Lovett; see figure).

In the following example, the satisficing model is recruited to interpret departures observed between a PIM and the recorded UIM:

[Figure: The satisficing model of decision making] Example scenario: Once a character is added, the application allows the user to count the frequency of appearance for a listed character. The PIM for adding a character to the database is illustrated in Figure 7.

The story file has already been loaded into the application. [Figure 6: The Story Teller application interface with add character dialog] [Figure 7: PIM of adding a character to the database] [Figure 8: UIM obtained from a usability study session]

The UIM shows large departures. We might be tempted to explain this as a problem with labeling in the interface or poor task clarification during the study. We could instead employ the satisficing model to explain the departures for a richer interpretation:

The satisficing model is an example of a rule-based model, yet it assumes that the user is affected by cognitive bias. Common examples of cognitive-bias models are anchoring and adjustment, selective perception, and underestimating uncertainty. A good resource for the above models, and a more detailed list, can be found at the Wikipedia entries for decision making and problem solving. It is a good practice to diagram several candidate cognitive-bias models before attempting to use them for explaining a specific departure.

The diagramming allows you to get specific about exactly how the model explains the observed departure between the PIM and UIM. Conclusion: The interaction-modeling technique provided here is useful in establishing usability success criteria and in uncovering usability issues. Major disparities observed between the PIM and UIM work as evidence to support the claim that there is a viable usability issue.

The use of cognitive decision and problem-solving models (PDMs) helps interpret and explain why the disparities exist. The essential components of a viable usability claim, behavioral evidence, and theory-driven interpretations will inform the creation and rationale for good user interface design solutions.

What Are Foundations? Let us consider three time-related foundations of Interaction Design. Pace: Interaction design is the creation of a narrative, one that changes with each individual experience of it, but still within constraints. Metaphor: Metaphor is a literary device which uses one well-understood object or concept to represent (with qualification) another concept which would be much more difficult to explain otherwise.

Abstraction: Working in tandem with metaphor, abstraction relates more to the physical and mental activity that is necessary for an interaction to take place. Negative space: All good design disciplines have a form of negative space. So what is the negative of interaction? Intersection in Interaction: Unlike form-creating design disciplines, interaction design is very intricate in that it requires other design disciplines in order to communicate its whole.

Debunking common misconceptions about PDFs. Misconception: PDFs are only good for prototyping page-based applications. Creating links: Drag around the area you want to have a link to create a rectangle. This is the area in which the link is active. In the Create Link dialog box, choose the settings you want for the link appearance. To set the link action to link to another page, select Go To A Page View, click Next, set the page number and view magnification you want in the current document or in another document, and then click Set Link.

Creating Image Rollovers: To create a rollover, you will need to have the image(s) you will be using already created and ready to insert into your PDF. Using the Button tool, drag across the area where you want the rollover to appear.

You have now created a button. While the Button tool is still selected, double-click the button you just created to open the button dialog box. In the button dialog box, click the Appearance tab. If needed, deselect Border Color and Fill Color. Next, click the Options tab and choose Icon Only from the Layout menu.

Choose Push from the Behavior menu, and then choose Rollover from the State list. Click Choose Icon, and then click Browse. Navigate to the location of the image file you want to use and then click OK to accept the previewed image as a button. Select the Hand tool and move the pointer across the button. The image field you defined appears as the pointer rolls over the button.

Drag or double-click to select the area on the page where you want the movie to appear. The Add Movie dialog box appears. Click OK, and then double-click the movie clip with the Hand tool to play the movie. Adding Forms: A PDF form contains interactive form fields that the user can fill in using their computer.

On the Forms toolbar, select the form field type that you want to add to your PDF. Drag the cross-hair pointer to create a form field of the required size, and the Properties dialog box for that form field will appear. Reference: Garrick Chow, Adobe Acrobat 7 (Lynda.com). Though there are other prototyping tools out there, here are the main reasons I lean toward PowerPoint. Basic Interactivity: To begin, create a simple PowerPoint mockup, each slide depicting a separate screen in your site or application.

Distributing Complexity in the Design Process: Producing a simple design is challenging. The following examples suggest how design decisions can be resolved through analysis and user evaluation. Unique vs. Redundant:

In software design, redundancy is inexpensive, but in product or environmental design, surface area and space are bound by more costly physical limitations. Therefore, the decision of whether to place a functional control in a single, fixed location versus multiple access points is critical. Providing multiple control access points can make things easier — for example, some cars offer redundant steering wheel controls for ongoing tasks such as changing radio volume and channel selection.

Task analysis methods, which can measure the frequency, sequence, and relationship amongst tasks, are essential for informing such decisions. Dedicated vs. Multi-Function: In many cases, a single control will be asked to serve multiple functions.

Typical clock-radios have buttons that change based on mode. Compact clock-radios with multiple sets of incremental controls dedicated to different purposes are complex-looking, and often lead to errors of commission when users select the wrong type of control. Anecdotally, Sony has a clever design with a dedicated toggle for daylight savings mode.

Seemingly unnecessary relative to its usage, it is placed discreetly enough not to interfere with everyday use. Fixed vs. Adaptive: Both digital and mechanical interfaces have the capability to conditionally hide or reveal controls and information. Interfaces that utilize this capability to support the user are referred to as adaptive. For example, a cardiac monitor may display several detailed biomedical parameters during normal cardiac functioning, but displace those in favor of limited but high-visibility critical information during alarm conditions.

Adaptive interfaces are among the most challenging to design properly as, by definition, they proactively alter the familiar. Ironically, the act of simplifying the user interface to support immediate conditions may complicate the experience for the user who must re-orient him or herself to the system.

Achieving a balance between disruption and benefit requires careful consideration, analysis, and testing. Eye tracking is a useful tool for assessing adaptive interfaces, enabling measurement of where users look during the re-orientation step when interfaces change, and determining appropriate display designs. This article presents a three-part method of interaction modeling where: (1) a prescriptive, preferred interaction model (PIM) is created; (2) a descriptive user-interaction model (UIM), derived from an actual user study session, is created; and (3) a model of problem solving and decision making (PDM) is used to interpret disparities between the first two models. Preferred Interaction Model (PIM): A usability study design establishes success criteria.

There are three important things to remember about PIMs: the PIM is created by the designer; the PIM should be based on task requirements, not functional requirements; and the PIM exists in the system, not in the head of the user. Interaction models are typically quantitative frameworks requiring statistical validation.

[Figure 2: Nesting decisions can allow a complex outcome (choice A, B, or C, rather than just choice A or B)]


The user decides that if situation X arises, then they will take action Y. Rule-based models are often engaged when the interface adheres to a dialog-driven, procedural task design. Examples are grocery store checkout systems and operating system setup wizards. The user has been in this situation before: last time they did X and it resulted in a desirable outcome, so they will do X again. Experience-based models are often engaged while performing tasks on systems users have familiarity with.

In usability studies, however, participants are frequently recruited based on the criterion of limited or no experience with a system. The user has seen a situation that appears to have all the same elements as the current situation. In the former situation, X resulted in a positive outcome, so they will do the analogous version of X here. Pattern-based models take surrogate roles for missing experience-based models.

The mechanism that handles the pattern-to-experience-based replacement can itself be expressed as a model. Expert decision making is often categorized as intuition-based. Employing the satisficing model to explain the departures between the PIM and the recorded UIM yields a richer interpretation: there is a problem with the current interface.

Large disparities between the PIM and state-trace data (i.e., the UIM) are evidence of that problem. The user may be adhering to a satisficing model of decision making. Therefore, the user continually adds the same character as a new entry to the database because the text field that allows the user to enter a character is the first available entry point to the data input process. The text field also signals the user to recruit an experience-based model of problem solving. If the list has the name of the character and a clearly identifiable number representing the current number of instances next to it, the user will be allowed to select the character name in the list box and press submit.

This will fix the problem of re-adding the same character, in addition to allowing the user to include character aliases in the count. Common examples of cognitive-bias models are: Anchoring and adjustment: Initial information presented to the user affects decisions more than information communicated to the user later in the task.

Selective perception: Users disregard information that they feel is not salient to completing the task. Underestimating uncertainty: Users tend to feel they can always recover from faulty decisions in an application.
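The satisficing mechanism discussed above (take the first acceptable option encountered rather than comparing all options for the best one) can be sketched in a few lines. The entry points and scores below are invented for illustration and do not come from the actual study:

```python
# Satisficing: scan options in encounter order and accept the first one
# that clears a threshold, instead of optimizing over all options.
def satisfice(options, acceptable):
    """Return the first option judged acceptable, in encounter order."""
    for option in options:
        if acceptable(option):
            return option
    return None

# Hypothetical entry points as a user encounters them in an
# add-character dialog, scored by how plausibly each affords the goal.
entry_points = [
    ("text field: type character name", 0.6),
    ("list box: select existing character", 0.9),
    ("menu: count frequency", 0.8),
]

threshold = 0.5
choice = satisfice(entry_points, lambda o: o[1] >= threshold)
print(choice)
```

Even though the list box would score higher, the text field "wins" simply because it is encountered first and is good enough, which is the shape of the departure the interpretation above describes.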

Select the text you want to hyperlink. Be sure to select the text itself, not just the box around the text. Click on the screen you want the hyperlink to lead to. Click OK. The text is now hyperlinked. View the PowerPoint as a slideshow to see it in action. Right-click on the image or button.

Elographics, backed by Siemens, created Accutouch, the first true touchscreen device. Accutouch was basically a curved glass sensor that became increasingly refined over the next decade. VIDEOPLACE (which could be a wall or a desk) was a system of projectors, video cameras, and other hardware that enabled users to interact using a rich set of gestures without the use of special gloves, mice, or styli.

The sense of presence was such that users actually jumped back when their silhouette touched that of other users. Courtesy Matthias Weiss. The Flexible Machine Interface combined finger pressure with simple image processing to create some very basic picture drawing and other graphical manipulation. Outside academia, touchscreens made their way to the public first (as most new technology does) in commercial and industrial use, particularly in point-of-sale (POS) devices in restaurants, bars, and retail environments.

[Figure: A POS touchscreen] According to the National Restaurant Association, touchscreen POS systems pay for themselves in savings to the establishment.

The Hewlett-Packard was probably the first computer sold for personal use that incorporated touch. Users could touch the screen to position the cursor or select on-screen buttons, but the touch targets (see later in this chapter) were fairly primitive, allowing for only approximate positioning. Courtesy Hewlett-Packard. [Figure: A diagram of the Digital Desk system]

More than a decade before Apple released the iPhone (and other handset manufacturers such as LG, Sony Ericsson, and Nokia released similar touchscreen phones as well), IBM and Bell South launched Simon, a touchscreen mobile phone.

It was ahead of its time and never caught on, but it demonstrated that a mobile touchscreen could be manufactured and sold. In the late s and the early s, touchscreens began to make their way into wide public use via retail kiosks, public information displays, airport check-in services, transportation ticketing systems, and new ATMs.

[Figure: Simon, released in , was the first mobile touchscreen device. It suffered from some serious design flaws, such as not being able to show more than a few keyboard keys simultaneously, but it was a decade ahead of its time. Courtesy IBM.]

[Figure: Tactile Manipulation on a Desktop Display]

[Figure: Courtesy JetBlue and Antenna Design]

A player controlled the game via a special glove that, as the player gestured physically, would be mimicked by a digital hand on-screen. The mids have simply seen the arrival of gestural interfaces for the mass market.

In , Nintendo released its Wii gaming system. In , to much acclaim, Apple launched its iPhone and iPod Touch, which were the first touchscreen devices to receive widespread media attention and television advertising demonstrating their touchscreen capabilities.

In , handset manufacturers such as LG, Sony Ericsson, and Nokia released their own touchscreen mobile devices. Also in , Microsoft launched MS Surface, a large, table-like touchscreen that is used in commercial spaces for gaming and retail display. And Jeff Han now manufactures his giant touchscreens for government agencies and large media companies such as CNN.

What gives? Why have trips to the bathroom become visits to interaction design labs? There are a few possible reasons. The first is that bathrooms are places where bacteria live, and people have grown increasingly hesitant to touch things there. Thus, gestures allow us to interact with the environment without requiring much physical contact. The second reason is simply one of maintenance and conservation.

Bathrooms require a lot of upkeep, and having toilets flush automatically, paper towel dispensers release only a small amount, and sinks shut off automatically saves both maintenance and resources. The third reason is one of human nature.

As Arthur C. Clarke observed, any sufficiently advanced technology is indistinguishable from magic. And we may need magic in our bathrooms—they can be places of anxiety and discomfort. So, perhaps it makes perfect sense to have magic in our bathrooms. Whatever the reason, public restrooms are the new labs for interactive gestures. The future (see Chapter 8) should be interesting.

The Mechanics of Touchscreens and Gestural Controllers: Even though forms of gestural devices can vary wildly—from massive touchscreens to invisible overlays onto environments—every device or environment that employs gestures to control it has at least three general parts: a sensor, a comparator, and an actuator. These three parts can be a single physical component or, more typically, multiple components of any gestural system, such as a motion detector (a sensor), a computer (the comparator), and a motor (the actuator).
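As a rough sketch (not any particular product's implementation), the sensor/comparator/actuator split maps naturally onto a read-judge-act loop. The threshold and the simulated proximity readings below are invented for illustration:

```python
# A gestural system reduced to its three parts: a sensor produces
# readings, a comparator judges them against the previous state, and
# an actuator carries out the resulting command.
class Comparator:
    def __init__(self, threshold):
        self.threshold = threshold
        self.previous = None

    def judge(self, reading):
        """Compare the current reading to the previous one and decide."""
        command = None
        if self.previous is not None and abs(reading - self.previous) > self.threshold:
            command = "actuate"
        self.previous = reading
        return command

def actuator(command, log):
    # Stand-in for a motor: open the door, flush, dispense a towel, etc.
    if command == "actuate":
        log.append("motor on")

# Simulated proximity-sensor readings in centimeters (invented values):
readings = [100, 100, 99, 60, 58, 100]
comp, log = Comparator(threshold=20), []
for r in readings:
    actuator(comp.judge(r), log)
print(log)
```

Small jitters (100 to 99) fall below the threshold and are ignored; only the large changes (someone approaching, then leaving) reach the actuator, which is the judgment role the comparator plays in the description above.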

[Figure: The basic components of any gestural system] A sensor is typically an electrical or electronic component whose job is to detect changes in the environment. These changes can be any number of things, depending on the type of sensor, of which there are many. Pressure: To detect whether something is being pressed or stood on. This is often mechanical in nature. Light: To detect the presence of light sources (also called a photodetector).

This is used mostly in environments, especially in lighting systems.

Proximity: To detect the presence of an object in space. This can be done in any number of ways, from infrared sensors to motion and acoustic sensors. Acoustic: To detect the presence of sound. Typically, this is done with small microphones. Tilt: To detect angle, slope, and elevation. Tilt sensors generate an artificial horizon and then measure the incline with respect to that horizon. Motion: To detect movement and speed. Some common sensors use microwave or ultrasonic pulses that measure when a pulse bounces off a moving object (which is how radar guns catch you speeding).

Introducing Interactive Gestures

Orientation: To detect position and direction. These are often used in navigation systems currently, but position within environments could become increasingly important and would need to be captured by cameras, triangulating proximity sensors, or even GPSes in the case of large-scale use.

It is crucially important to calibrate the sensitivity of the sensor or the moderation of the comparator. A sensor that is too sensitive will trigger too often and, perhaps, too rapidly for humans to react to. A sensor that is too dull will not respond quickly enough, and the system will seem sluggish or nonresponsive. The size (for touchscreens) or coverage area of the sensor is also very important, as it determines what kinds of gestures (broad or small, one or two hands, etc.) the system can support.
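One common form of that moderation is a refractory (debounce) window in the comparator: triggers that arrive too soon after an accepted one are ignored, so the actuator never fires faster than a person can react. This is a generic sketch, not a specific device's algorithm, and the event times are invented:

```python
# Moderate a too-sensitive sensor by enforcing a minimum gap between
# accepted trigger events.
def moderate(events, refractory):
    """Keep only events separated by at least `refractory` seconds."""
    accepted, last = [], None
    for t in events:
        if last is None or t - last >= refractory:
            accepted.append(t)
            last = t
    return accepted

raw = [0.00, 0.05, 0.07, 1.20, 1.22, 3.50]  # jittery trigger times (seconds)
print(moderate(raw, refractory=0.5))
```

Tuning the refractory value is exactly the sensitivity calibration described above: too small and the system triggers too often, too large and it seems sluggish.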

Often in more complex systems, multiple sensors will work together to allow for more nuanced movement and complicated gesture combinations. To have 3D gestures, multiple sensors are a necessity to get the correct depth. Many Apple products (including Apple laptops) have accelerometers to detect speed and motion built into them, as do Wii controllers.

For orientation within an environment or sophisticated detection of angles, other sensors need to be deployed.


Touch events are a combination of the sensor and the comparator, but the technology the system uses to detect a touch varies. In most modern touchscreen interfaces, the sensor is a touch-responsive glass panel that usually employs one of three technologies: resistive, surface wave, or capacitive. Resistive systems are made up of two layers. When a user touches the top layer, the two layers press together, triggering a touch event.

Because of how resistive systems work, they require pressure (and can measure it well), but multitouch does not work very well, if at all. Surface wave systems generate ultrasonic waves. When a user touches the screen, a portion of the wave is absorbed, and that registers as a touch event. Capacitive sensor panels are coated with a material that stores electrical charge; a touch draws a small amount of charge to the point of contact, which registers as a touch event. Another method, particularly for large displays, incorporates infrared beams that skim the flat surface of a screen in a grid-like matrix.

Sensitivity is determined by how close the beams are to each other. Infrared light can also be used in frustrated total internal reflection (FTIR) and diffused illumination (DI) systems, which use infrared cameras to determine touch events, especially multitouch events.
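To make the beam-spacing point concrete, here is a toy model of the grid approach: a touch is reported wherever a broken row beam crosses a broken column beam, and the beam spacing directly sets the positional resolution. Indices and spacing are invented for illustration:

```python
# Toy model of an infrared-grid touchscreen: beams cross the screen in
# rows and columns; a touch is located at the intersections of broken
# beams. Beam spacing determines sensitivity (positional resolution).
def locate_touches(broken_rows, broken_cols, spacing_mm):
    """Return candidate (x, y) positions in mm for broken beam pairs."""
    return [(c * spacing_mm, r * spacing_mm)
            for r in broken_rows for c in broken_cols]

def centroid(points):
    """Average the candidate points into a single touch position."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A fingertip wide enough to interrupt two adjacent beams on each axis:
touches = locate_touches(broken_rows=[4, 5], broken_cols=[9, 10], spacing_mm=5)
print(touches)
print(centroid(touches))
```

Halving the spacing doubles the number of beams a finger interrupts and halves the position error, which is the sensitivity trade-off the text describes.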

In an FTIR system, infrared light is shined into the edge of a sheet of clear acrylic or Plexiglas. The light reflects around the inside of the Plexiglas unhindered until a touch frustrates the reflection, scattering light toward a camera, which registers a touch event. There are two different kinds of DI systems: rear DI and front DI. With rear DI, infrared lights shine upward at a clear surface (glass, acrylic, Plexiglas, etc.). A diffuser (a device that diffuses, spreads out, or scatters light in some manner to create a soft light) is placed on top of or underneath the touch surface. When an object, such as a finger, touches the surface (creating what are known as blobs), it reflects more light than the diffuser or objects in the background, and the camera below senses this extra light, creating a touch event.

Depending on the diffuser, rear DI can also detect objects hovering over the surface. Front DI is when the infrared light is projected on top of the surface from above. As with rear DI, a diffuser used to soften the light is placed on the top or bottom of the surface. When an object, such as a finger, touches the surface, it makes a shadow in the position of the object that the camera positioned below detects and uses to determine a touch event.

Once a sensor detects its target, it passes the information on to what is known in systems theory as a comparator. The comparator compares the current state to the previous state or the goal of the system and then makes a judgment. For many gestural interfaces, the comparator is a microprocessor running software, which decides what to do about the data coming into the system via the sensor.

Only with a microprocessor can you design a system with much nuance, one that can make more sophisticated decisions. Those decisions get passed on to an actuator in the form of a command. Actuators can be analog or mechanical, similar to the way the machinery of The Clapper turns lights on; or they can be digital, similar to the way tilting the iPhone changes its screen orientation from portrait to landscape. With mechanical systems, the actuator is frequently a small electric motor that powers a physical object, such as a motor to open an automatic door.

It is software that determines what happens when a user touches the screen or extends an arm. These behaviors need to be designed. Designing Interactive Gestures: The Basics. The design of any product or service should start with the needs of those who will use it, tempered by the constraints of the environment, technology, resources, and organizational goals, such as business objectives. The needs of users can range from simple (I want to turn on a light) to very complex (I want to fall in love).

Most human experience lies between those two poles, I think. The first question that anyone designing a gestural interface should ask is whether the product should have a gestural interface at all. There are several reasons not to have a gestural interface. Heavy data input: Although some users adapt to touchscreen keyboards easily, a keyboard is decidedly faster for most people to use when they are entering text or numbers. Reliance on the visual: Many gestural interfaces use visual feedback alone to indicate that an action has taken place (such as a button being pressed).

In addition, most touchscreens (and many gestural systems in general) rely entirely on visual displays with little to no haptic affordances or feedback. There is often no physical feeling that a button has been pressed, for instance. If your users are visually impaired (as most adults over a certain age are), a gestural interface may not be appropriate. The inverse is also true: the keyboard on the iPhone, for instance, is entirely too small and delicate to be used by anyone whose fingers are large or otherwise not nimble.

Designers need to take into account the probable environment of use and determine what, if any, kind of gesture will work in that environment. There are, of course, many reasons to use a gestural interface. Everything that a noninteractive gesture can be used for—communication, manipulating objects, using a tool, making music, and so on—can also be done using an interactive gesture. Gestural interfaces are particularly good for: More natural interactions: Human beings are physical creatures; we like to interact directly with objects.

Interactive gestures allow users to interact naturally with digital objects in a physical way, like we do with physical objects. This benefit allows for gestural interfaces to be put in places where a traditional computer configuration would be impractical or out of place, such as in retail stores, museums, airports, and other public spaces.

[Figure: Touchscreens installed in the back seats of New York City taxicabs. Although clunky, they allow for the display of interactive maps and contextual information that passengers might find useful, such as a Zagat restaurant guide.]

More flexibility
As opposed to fixed, physical buttons, a touchscreen, like all digital displays, can change at will, allowing for many different configurations depending on functionality requirements.

Thus, a very small screen, such as those on most consumer electronics devices or appliances, can change buttons as needed. This can create usability issues (see later in this chapter), but the ability to have many controls in a small space can be a huge asset for designers.

And with nontouchscreen gestures, the sky is the limit, space-wise. One small sensor, which can be nearly invisible, can detect enough input to control the system. No physical controls or even a screen are required.

More nuance
Keyboards, mice, trackballs, styli, and other input devices, although excellent for many situations, are simply not able to convey as much subtlety as the human body.

A raised eyebrow, a wagging finger, or crossed arms can deliver a wealth of meaning in addition to controlling a tool. Gestural systems have not begun to completely tap the wide emotional palette of humans that they can, and likely will, eventually exploit.

More fun
You can design a game in which users press a button and an on-screen avatar swings a tennis racket. But it is simply more entertaining, for both players and observers, to mimic swinging a tennis racket physically and see the action mirrored on-screen.

Gestural systems encourage play and exploration of a system by providing a more hands-on (sometimes literally hands-on) experience. Once the decision has been made to have a gestural interface, the next question to answer is what kind of gestural interface it will be: a touchscreen or a free-form gestural system. As I write this, particularly with devices and appliances, the answer will often be fairly easy: a touchscreen. In the future, as an increasing variety of sensors are built into devices and environments, this may change, but for now touchscreens are the new standard for gestural interfaces.

Discoverable
Being discoverable can be a major issue for gestural interfaces. How can you tell whether a screen is touchable? How can you tell whether an environment is interactive? Before we can interact with a gestural system, we have to know one is there and how to begin to interact with it, which is where affordances come into play. An affordance is one or more properties of an object that give some indication of how to interact with that object or a feature on it.

A button, because of how it moves, has an affordance of pushing. Without the tiny diagrams on a toilet paper dispenser, for example, there would be no affordances to let you know how to get the toilet paper out. Gestural interfaces need to be discoverable so that they can be used.

Users are also now suspicious of gestural interfaces, and often an attraction affordance needs to be employed (see Chapter 7).

Responsive
Responsiveness is incredibly important. When engaged with a gestural interface, users want to know that the system has heard and understood any commands given to it. This is where feedback comes in.

Every action a human directs toward a gestural interface, no matter how slight, should be accompanied by some acknowledgment of the action, whenever possible and as rapidly as possible (100 ms or less is ideal, as it will feel instantaneous). At the same time, not every stray input deserves a response: imagine if The Clapper picked up every slight sound and turned the lights on and off, on and off, over and over again!

But not having near-immediate feedback can cause errors, some of them potentially serious. Without feedback, users may repeat the action they just performed, such as pushing a button again. Obviously, this can cause problems, such as accidentally buying an item twice or, if the button was connected to dangerous machinery, injury or death. If a response to an action is going to take significant time (more than one second), feedback is required that lets the user know the system has heard the request and is doing something about it.
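The timing guidance above can be summarized as a simple feedback policy. This is an illustrative sketch, not from any particular toolkit; the constant names and exact thresholds (an instantaneous-feeling acknowledgment at or under roughly 100 ms, progress feedback past one second) are assumptions drawn from the rules of thumb in the text.

```python
# Sketch of a feedback policy based on the latency guidance above.
# Constant names and function name are hypothetical, for illustration only.

INSTANT_MS = 100    # at or below this, a response feels instantaneous
LONG_MS = 1000      # above this, show progress, not just an acknowledgment

def feedback_for(expected_latency_ms: float) -> str:
    """Pick the kind of feedback to show for an action, given how long
    the system expects to take before it can respond."""
    if expected_latency_ms <= INSTANT_MS:
        return "inline"      # e.g., a button highlight; feels immediate
    if expected_latency_ms <= LONG_MS:
        return "acknowledge" # brief confirmation that input was received
    return "progress"        # progress bar or spinner for long operations
```

The point is that the choice of feedback is driven by expected latency, not by the action itself: the same button press warrants a simple highlight when the result is instant, but a progress indicator when the system must go off and work.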

Progress bars are an excellent example of responsive feedback.

Appropriate
Gestural systems need to be appropriate to the culture, situation, and context they are in. Certain gestures are offensive in certain cultures.

Meaningful
The coolest interactive gesture in the world is empty unless it has meaning for the person performing it; which is to say, unless the gestural system meets the needs of those who use it, it is not a good system.

Smart
The devices we use have to do for us the things that we as humans have trouble doing: rapid computation, having infallible memories, detecting complicated patterns, and so forth.

They have to be smart.

Clever
Likewise, the best products predict the needs of their users and then fulfill those needs in unexpectedly pleasing ways.

Adaptive targets are one way to do this with gestural interfaces. Another way to be clever is through interactive gestures that match well the action the user is trying to perform.

Playful
One area in which interactive gestures excel is being playful.

Through play, users will not only start to engage with your interface—by trying it out to see how it works—but they will also explore new features and variations on their gestures. Users need to feel relaxed to engage in play.

Errors need to be difficult to make so that there is no need to put warning messages all over the interface. The ability to undo mistakes is also crucial for fostering an environment for play.

Play stops if users feel trapped, powerless, or lost.

Pleasurable
Humans are more forgiving of mistakes in beautiful things. Gestural interfaces should be pleasurable to use; this engenders good feelings in their users.

Good
Gestural interfaces should have respect and compassion for those who will use them. It is very easy to remove human dignity with interactive gestures: for instance, by making people perform a gesture that makes them appear foolish in public, or by making it so difficult to perform a gesture that only the young and healthy can ever perform it.

Designers and developers need to be responsible for the choices they make in their designs and ask themselves whether a design is good for users, good for those indirectly affected, good for the culture, and good for the environment. The choices that are made with gestural interfaces need to be deliberate and forward-thinking.

Every time users perform an interactive gesture, they are indirectly placing their trust in those who created it to have done their job ethically.

The Attributes of Gestures

Although touchscreen gestural interfaces differ slightly from free-form gestural interfaces, most gestures have similar characteristics that can be detected and thus designed for. The more sophisticated the interface (and the more sensors it employs), the more of these attributes can be engaged:

Presence
This is the most basic of all attributes.

Something must be present to make a gesture in order to trigger an interaction. For some systems, especially in environments, a human being simply being present is enough to cause a reaction. For the simplest of touchscreens, the presence of a fingertip creates a touch event.

Duration
All gestures take place over time and can be done quickly or slowly. Is the user tapping a button or holding it down for a long period?

Flicking the screen or sliding along it? For some interfaces, especially simple ones, duration is less important. Interfaces using proximity sensors, for instance, care little about duration, only whether a human being is in the area.

Position
Where is the gesture being made? Some gestures also employ the z-axis of depth. For instance, a designer may want to put some gestures high in an environment so that children cannot engage in them.

Motion
Is the user moving from position to position or striking a pose in one place?

Is the motion fast or slow? Up and down, or side to side?

Pressure
Is the user pressing hard or gently on a touchscreen or pressure-sensitive device? This too has a wide range of sensitivity. You may want every slight touch to register, or only the firmest, or only an adult's weight (or only that of a child or pet). Pressure can also be approximated by detecting the increasing spread of a finger pad as it presses harder.

Size
Width and height can also be combined to measure size.

For example, touchscreens can determine whether a user is employing a stylus or a finger based on size (the tip of a stylus will be finer) and adjust themselves accordingly.

Orientation
What direction is the user or the device facing while the gesture is being made? For games and environments, this attribute is extremely important.

Orientation has to be determined using fixed points, such as the angle of the user to the object itself.

Including objects
Some gestural interfaces allow users to employ physical objects alongside their bodies to enhance or engage the system.

Simple systems will treat these other objects as an extension of the human body, but more sophisticated ones will recognize objects and allow users to employ them in context.
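The attributes above (presence, duration, position, motion, pressure, size, and so on) are exactly the fields a gesture recognizer reads off its sensors. As a minimal sketch, with made-up field names and threshold values (real systems calibrate these per device, and none of this comes from a particular platform API), a touch event might be classified like this:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    duration_ms: float   # Duration: how long the contact lasted
    distance_px: float   # Motion: how far the contact moved
    size_mm: float       # Size: contact diameter on the screen
    pressure: float      # Pressure: 0.0 (lightest) to 1.0 (firmest)

# Illustrative thresholds; real systems calibrate these per device.
HOLD_MS = 500        # shorter contact is a tap, longer is a hold
SWIPE_PX = 20        # more movement than this is a swipe, not a tap
STYLUS_MAX_MM = 3.0  # a stylus tip is much finer than a finger pad

def instrument(ev: TouchEvent) -> str:
    """Guess stylus vs. finger from contact size (the Size attribute)."""
    return "stylus" if ev.size_mm <= STYLUS_MAX_MM else "finger"

def classify(ev: TouchEvent) -> str:
    """Combine motion and duration to name the gesture."""
    if ev.distance_px > SWIPE_PX:
        return "swipe"
    return "hold" if ev.duration_ms >= HOLD_MS else "tap"
```

A quick, stationary finger contact classifies as a tap, the same contact held past half a second becomes a hold, and any contact that travels far enough becomes a swipe; the pressure field is carried along so that firmer presses can be treated differently where the hardware reports them.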