Work together, Win together

Our Approach

In order to answer the research questions mentioned above, basic research is required in the fields of user behavior, multi-sensory environments and data structuring. Our approach starts with an assessment of Metaplan sessions (see Chapter 5) as an example of structured team meetings in an environment comprising multiple surfaces, both horizontal and vertical, as depicted in Figure 1. From this assessment we expect answers to the question of “what to capture”, which is the prerequisite for the question of “how to capture and interpret” the user interactions. The spatially distributed data then needs to be reorganized (or ‘de-spatialized’) in order to answer the question of “how to represent, display and synchronize” the views of sighted and blind participants. To allow blind users to alter the information, it is also important to address the question of “how to browse and modify” the artifacts, both through state-of-the-art and through new ways of interaction, including in-air gesturing. The concepts will be developed in a user-centered design approach and evaluated with blind users in mixed teams. The following sections discuss this approach in more detail.

What to Capture

During a team session, sighted users employ various NVC elements. Within this project, we restrict ourselves to those NVC elements that occur in relation to an artifact or to an information cluster of the spatially distributed information. Although meaningful to sighted users, it is still unclear to what extent these NVC elements are also relevant to blind users and how frequently they should be displayed to them. These questions will be answered by assessing several meetings of groups working in an environment comprising multiple interactive surfaces, and by simulation tests with blind users. The main task is to find out which artifacts and NVC elements occur beyond regular pointing gestures, and how important they are for sighted and blind users. Moreover, control gestures performed by sighted users to focus, edit or modify information, e.g. grasping a card and placing it somewhere else, should be identified. This part of the project thus gathers both NVC elements and control gestures for modifying spatially distributed information.

How to Capture and Interpret

Depending on the gestures and NVC elements that need to be captured, the kind of sensors, their number, placement and orientation, and their spatial and temporal resolution have to be determined. Since the information is spatially distributed over different information clusters (tables, whiteboards) and users are allowed to change their position freely in a loosely moderated Metaplan session, sensor fusion will be needed to avoid misinterpretations and to increase the reliability of the gathered data. Moreover, sophisticated reasoning systems need to be researched that can interpret complex nonverbal communication and its relation to one or more artifacts, in order to increase the reliability of data capturing and of the representation in the respective views.
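To illustrate the role of sensor fusion described above, the following is a minimal sketch of how observations from several sensors might be combined; the sensor names, confidence values and the voting rule are illustrative assumptions, not part of the project's actual design.

```python
# Hypothetical sketch: fusing gesture classifications from multiple sensors.
# Sensor ids, labels and confidences are illustrative assumptions.

def fuse_readings(readings):
    """Combine per-sensor (label, confidence) votes into one decision.

    `readings` maps a sensor id to a (gesture_label, confidence) pair.
    Confidences for the same label are summed; the label with the
    highest total wins, so agreeing sensors reinforce each other and
    single-sensor misinterpretations are outvoted.
    """
    totals = {}
    for sensor, (label, confidence) in readings.items():
        totals[label] = totals.get(label, 0.0) + confidence
    return max(totals, key=totals.get)

# Two depth cameras agree on "pointing"; a wrist sensor disagrees.
decision = fuse_readings({
    "depth_cam_table": ("pointing", 0.6),
    "depth_cam_wall": ("pointing", 0.5),
    "wrist_imu": ("grasping", 0.8),
})
# "pointing" (0.6 + 0.5 = 1.1) outweighs "grasping" (0.8)
```

A real system would additionally weight sensors by their known reliability and by the user's current position relative to each surface, but the basic idea of cross-checking redundant observations is the same.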

How to Represent, Display and Synchronize Views

The previous project found that information should first be output to blind users via devices and methods they are already familiar with, and only later expanded to new interaction concepts, e.g. touch, in-air gestures, refreshable two-dimensional tactile displays, and 3D sound. The representation of the gathered spatially distributed information thus becomes a challenging task. How can a three-dimensional distribution of information be output in a sequential manner? How can the different views be kept consistent? Is new hardware required to output data to blind users? And further: how can three-dimensional gestures and NVC elements be precisely assigned to artifacts displayed on the interactive surfaces?
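One way to picture the ‘de-spatialization’ problem is as a mapping from positioned artifacts to a linear reading order for sequential output (braille or speech). The sketch below assumes a very simple data model; the field names, cluster grouping and the top-left-to-bottom-right ordering rule are illustrative assumptions only.

```python
# Hypothetical sketch of 'de-spatializing' artifacts for sequential
# output. Artifacts carry a surface, an information cluster and a
# position; a blind user's view linearizes them cluster by cluster.

from dataclasses import dataclass

@dataclass
class Artifact:
    text: str
    surface: str   # e.g. "whiteboard-1", "table"
    cluster: str   # information cluster the card belongs to
    x: float       # position on the surface (normalized)
    y: float

def linearize(artifacts):
    """Order spatially distributed cards for sequential output:
    group by cluster, then read top-to-bottom, left-to-right
    within each cluster."""
    return sorted(artifacts, key=lambda a: (a.cluster, a.y, a.x))

cards = [
    Artifact("Budget", "whiteboard-1", "risks", x=0.7, y=0.2),
    Artifact("Schedule", "whiteboard-1", "risks", x=0.1, y=0.2),
    Artifact("Kick-off", "table", "ideas", x=0.5, y=0.5),
]
reading_order = [a.text for a in linearize(cards)]
# ["Kick-off", "Schedule", "Budget"]
```

Keeping views consistent would then mean re-deriving this linear order (and notifying the blind user's device) whenever a sighted user moves, adds or removes a card on any surface.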

How to Browse and Modify

Since blind users prefer their standard display methods, they should also be able to start from their standard interaction behavior (e.g. keyboard and simple touch gestures). Beyond that, new concepts and methods for interacting with spatially distributed information should be explored. Experiments should investigate how blind users can use gestures to move the focus and to select, modify, add or delete artifacts. Gestures like pointing, pinching, grasping, throwing, moving and swiping will be explored with regard to their suitability for blind users, and concepts like on-body interaction and combined gestures will be researched. This should improve the way blind users cope with the large amount of information involved in representing spatially distributed information.
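As a minimal sketch of the kind of interaction loop these experiments would probe, the following maps a few of the gestures named above to focus and edit operations on a linearized artifact list; the specific gesture-to-action bindings are illustrative assumptions, not a finalized design.

```python
# Hypothetical sketch: dispatching a blind user's gestures to
# focus/edit operations on a linearized artifact list. The
# gesture-to-action mapping is an illustrative assumption.

class FocusController:
    def __init__(self, artifacts):
        self.artifacts = list(artifacts)
        self.index = 0  # currently focused artifact

    def handle(self, gesture):
        """Apply one gesture and return the newly focused artifact."""
        if gesture == "swipe_right":      # move focus forward
            self.index = min(self.index + 1, len(self.artifacts) - 1)
        elif gesture == "swipe_left":     # move focus back
            self.index = max(self.index - 1, 0)
        elif gesture == "throw":          # delete the focused artifact
            self.artifacts.pop(self.index)
            self.index = min(self.index, len(self.artifacts) - 1)
        return self.artifacts[self.index] if self.artifacts else None

ctrl = FocusController(["idea A", "idea B", "idea C"])
ctrl.handle("swipe_right")      # focus moves to "idea B"
focused = ctrl.handle("throw")  # "idea B" deleted; focus on "idea C"
# focused == "idea C"
```

In an actual study, each handled gesture would also trigger feedback on the user's familiar device (braille line or speech), so the effect of every gesture remains observable without vision.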

This project has received funding from the SNF, DFG and FWF Lead-Agency Procedure.