The first practical steps toward augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland’s work at MIT and the University of Utah, and in the work of the generation of students whom Sutherland and his colleague David Evans trained at Utah. Having launched the field of interactive computer-aided design with his dissertation project, Sketchpad, Sutherland pursued between 1965 and 1968 an ambitious project to create what he called “the ultimate display”: an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted display system. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of the body’s muscles; going further, he noted that while gestural control through the hands and arms was the obvious choice, machines to sense and interpret eye-motion data could and would be built. “An interesting experiment,” he claimed, “will be to make the display presentation depend on where we look.”

Sutherland’s work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the inventors of the dataglove and the first virtual reality and telepresence systems at NASA-Ames Research Center, as well as Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display, based on eye and gesture tracking, as a quasi “Darth Vader helmet” with an integrated virtual cockpit. Furness was trying to solve the problem of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, whose cockpits had become so complicated that the amount of information a pilot had to assimilate from instruments and command communications was overwhelming. Furness’s solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below.