Abstract

A primary goal of virtual environments is to support natural, efficient, powerful, and flexible human-computer interaction. However, the traditional two-dimensional, keyboard- and mouse-oriented graphical user interface is not well suited to virtual environments. This chapter considers the most popular approaches for simultaneously capturing, tracking, and recognizing different modalities to create an intelligent human-computer interface for games. Given the large variability of gestures and their important role in creating intuitive interfaces, the approaches considered focus on gestures, although they may also be applied to other modalities. These approaches are user independent and do not require large training samples.

Introduction

A primary goal of virtual environments is to support natural, efficient, powerful, and flexible human-computer interaction. If the interaction technology is awkward or constraining, the user's experience with the synthetic environment is severely degraded. If the interaction itself draws attention to the technology rather than to the task at hand, it becomes an obstacle to a successful virtual environment experience.

The traditional two-dimensional, keyboard- and mouse-oriented graphical user interface (GUI) is not well suited to virtual environments. Instead, synthetic environments provide the opportunity to utilize several different sensing modalities and integrate them into the user experience. The cross product of communication modalities and sensing devices yields a wide range of unimodal and multimodal interface techniques. The potential of these techniques to support natural and powerful interfaces points to the future of game design and construction.

To support natural communication more fully, a system must not only track human movement but also interpret that movement in order to recognize semantically meaningful gestures. Tracking the user's head position or hand configuration can be quite useful for directly controlling objects or entering parameters, but people naturally express communicative acts through higher-level constructs such as gesture and speech.

In this chapter, we shall consider the most popular approaches for simultaneously capturing, tracking, and recognizing different modalities to create an intelligent human-computer interface for games. Given the large variability of gestures and their important role in creating intuitive interfaces, the approaches considered focus on gestures, although they may also be applied to other modalities. These approaches are user independent and do not require large training samples.

Section 2 of the chapter considers games based on computer vision, classified in terms of their content, and analyzes gesture modalities, distinguishing natural from artificial gestures.

Before an object (a human gesture or facial expression) can be recognized, it has to be captured in a video stream. Section 3 of the chapter covers modern capture and tracking methods.
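As a toy illustration of the capture step, frame differencing flags the pixels that changed between two frames of a video stream; this is only a crude stand-in for the capture and tracking methods the chapter discusses, and the function names and synthetic frames below are this sketch's own, not the chapter's:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=30):
    """Boolean mask of pixels that changed between two grayscale frames;
    a crude stand-in for proper background subtraction."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the moving region,
    or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic example: a bright 4x4 "hand" appears in the second frame.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[10:14, 20:24] = 255

box = bounding_box(motion_mask(prev, curr))
print(box)  # (10, 20, 13, 23)
```

In a real game interface, the bounding box from each frame would feed a tracker that follows the hand over time; libraries such as OpenCV provide far more robust background-subtraction and tracking primitives than this sketch.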

Once an object has been captured as a digital image, it can be recognized using mathematical recognition models. Section 4 of the chapter is devoted to the most effective of these models.
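A minimal sketch of one such recognition model is nearest-neighbor template matching over normalized gesture trajectories: because each trajectory is translated to its centroid and scaled to unit size, a single template per gesture class can tolerate position and size variation across users, in the spirit of the user-independent, small-sample approaches the chapter emphasizes. The normalization scheme and the template set here are hypothetical, chosen only for illustration:

```python
import math

def normalize(traj):
    """Translate a 2-D trajectory to its centroid and scale it to unit
    size, so matching tolerates position and size variation."""
    xs = [p[0] for p in traj]
    ys = [p[1] for p in traj]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in traj]

def distance(a, b):
    """Mean point-to-point distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(traj, templates):
    """Label of the nearest template gesture (1-nearest-neighbor)."""
    traj = normalize(traj)
    return min(templates,
               key=lambda lbl: distance(traj, normalize(templates[lbl])))

# One template per class suffices for this toy example.
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_down":  [(0, 0), (0, 1), (0, 2), (0, 3)],
}
print(recognize([(5, 5), (7, 5), (9, 5), (11, 5)], templates))  # swipe_right
```

Production recognizers would also resample trajectories to a fixed point count and handle rotation, but even this sketch shows why such models need no large training sample: one well-chosen template per class is enough.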

Section 5 of the chapter presents multimodal aggregation as a path to intelligent human-computer interaction.
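One common form of multimodal aggregation is weighted late fusion, in which each modality's classifier produces per-class scores and the scores are combined before a single decision is made; a gesture that is ambiguous on its own can be disambiguated by speech. The scores and weights below are made up purely for illustration and are not taken from the chapter:

```python
def fuse(modality_scores, weights):
    """Weighted late fusion: combine per-modality class scores into one
    joint score per class, then return the argmax and the fused scores."""
    classes = set().union(*(s.keys() for s in modality_scores.values()))
    fused = {c: sum(weights[m] * modality_scores[m].get(c, 0.0)
                    for m in modality_scores)
             for c in classes}
    return max(fused, key=fused.get), fused

# Gesture alone slightly favors "rotate"; speech disambiguates to "select".
scores = {
    "gesture": {"select": 0.45, "rotate": 0.55},
    "speech":  {"select": 0.90, "rotate": 0.10},
}
label, fused = fuse(scores, {"gesture": 0.5, "speech": 0.5})
print(label)  # select
```

The weights would normally reflect each modality's estimated reliability (for example, down-weighting speech in a noisy room), which is one reason multimodal interfaces can be more robust than any single modality alone.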