The limitations of today's eyetrackers have become more obvious, and over the last couple of years I have been motivated to develop my own custom eyetracking software. In fact, my purpose in attending NYU's Interactive Telecommunications Program was to develop programming skills in an environment receptive to rapid prototyping and interface design. To handle the real-time image processing that eyetracking requires, I knew the software had to be written in efficient, low-level languages such as C, C++, and Objective-C.

Why the Mac?

With the recent release of Apple's latest operating system, developing software for the Macintosh is particularly exciting. Over the last six years Apple has been an excellent product-design role model, and many of the innovations driving desktop computing seem to originate from its products. Since transitioning from OS 9 to a BSD Unix core, OS X has emerged as one of the most modern operating systems on the market, and Apple continues to expand the desktop computing experience. Further, the tools necessary to develop applications for the Mac (i.e., Xcode and Interface Builder) are bundled with the operating system, so it has been easy to start programming applications right out of the box.
The major reason for developing eyetracking software for the Mac is to reach people, most likely in the creative and artistic industries, who may not want to use Windows or Linux-based systems. Many of the tools Apple develops target these users, and I believe that an eyetracking tool for this user base might offer new avenues for creative expression, or might stimulate new reasons for doing eye movement research.

Figure 1.1: Video of Yarbus 1.0, eyetracking software developed for Mac OS. The software performs real-time (~25-30 Hz) image processing on the eye image and also captures a reference scene image onto which fixation crosshairs are superimposed. Link to YouTube video.
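The pupil-finding step at the heart of a real-time loop like this can be illustrated with dark-pupil thresholding followed by a centroid computation. Yarbus itself was written against the Mac's native frameworks; the sketch below is an illustrative Python/NumPy version, and the function name and threshold value are my own assumptions:

```python
import numpy as np

def find_pupil_center(eye_image, pupil_threshold=40):
    """Locate the pupil in a grayscale eye image by dark-pupil thresholding.

    Pixels darker than the threshold are treated as candidate pupil pixels;
    their centroid approximates the pupil center. Returns None when no
    pixel falls below the threshold.
    """
    ys, xs = np.nonzero(eye_image < pupil_threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic test frame: bright background with a dark square standing in
# for the pupil (a real frame would come from the eye camera).
img = np.full((120, 160), 200, dtype=np.uint8)
img[40:60, 70:90] = 10
center = find_pupil_center(img)   # centroid of the dark region
```

A production tracker would also locate the corneal reflection (the bright spot from the IR illuminator) and track the pupil-to-reflection vector, which is what makes the estimate robust to small headgear slippage.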

Figure 1.2: The user can drag the mouse to select the optimal region of interest (shown in green). Sliders allow adjustment of the pupil and corneal reflection threshold levels.
The application also runs as a mini-server in which the eye's x,y coordinates are passed as a two-byte value. Other applications can make a TCP/IP connection to obtain the eye position.
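One plausible reading of "two-byte value" is that each coordinate is scaled into a single unsigned byte; the original encoding is not specified, so the scaling and function names below are assumptions. A sketch of the packing and unpacking in Python:

```python
import struct

def pack_eye_position(x, y, width=640, height=480):
    """Scale screen coordinates into 0-255 and pack them into two bytes
    (one byte per axis) -- an assumed encoding, not the documented one."""
    bx = min(255, max(0, int(x * 255 / (width - 1))))
    by = min(255, max(0, int(y * 255 / (height - 1))))
    return struct.pack("BB", bx, by)

def unpack_eye_position(payload, width=640, height=480):
    """Inverse of pack_eye_position: recover approximate screen coordinates."""
    bx, by = struct.unpack("BB", payload)
    return bx * (width - 1) / 255, by * (height - 1) / 255

msg = pack_eye_position(320, 240)   # two-byte payload a client would receive
x, y = unpack_eye_position(msg)     # approximately (320, 240)
```

A client would simply open a TCP connection to the mini-server and read two bytes per sample; note that this encoding limits resolution to 256 levels per axis.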

I am a big supporter of an open-source eyetracking system. The idea took shape while I was studying at the Visual Perception Laboratory at the Rochester Institute of Technology, where, along with Jeff Pelz, I helped develop an ISCAN-compatible eyetracking headgear from off-the-shelf components. For more details, go to the eyetracking hardware page or read the PDF.

During the summer of 2004 I joined Derrick Parkhurst and David Winfield, an undergraduate research student at ISU, for a Summer Research Fellowship at the Human Computer Vision Laboratory (HCVL) in Iowa State University's HCI department. Our research agenda was to develop an open-hardware, open-software digital eyetracking system for Linux, focused on three goals. The first was to migrate away from analog video capture devices to a fully digital eyetracking system. The second was to develop methods for building eyetracking hardware from standardized off-the-shelf components. The final goal was to develop open-source eyetracking software geared toward mobile applications and novel human-computer interfaces. These goals were met, and by the end of the summer we demonstrated a mobile, open-source eyetracking system, built with Intel's open-source computer vision library (OpenCV), that required only a laptop and two off-the-shelf FireWire webcams. The project at Iowa State proved successful and continues to move forward as part of the OpenEyes project.

Figure 2.1: Screen grab of cveyetracker, eyetracking software developed at Iowa State University for Linux using Intel's OpenCV library.

The Applied Vision Research Unit at the University of Derby recently collected eye movement data from 5,638 observers viewing paintings on exhibit at the National Gallery in London. This exhibition constituted the world's largest eyetracking experiment, and it generated so much data that researchers faced the problem of visualizing subjects' fixation data beyond conventional statistics such as fixation duration and number of fixations. Wooding (2002) presented these data as 3-D fixation maps, which represent the observers' regions of interest as a spatial map of peaks and valleys. Part of my RIT Master's thesis expanded Wooding's visualization techniques into a suite of Matlab tools for plotting 3-D fixation surfaces over the 2-D image that was viewed.

The fixation distribution across multiple observers (with blinks and saccade intervals removed) is converted into a 2-D histogram (1-pixel bin size) in which the height of the histogram represents the frequency of fixation samples at a particular spatial location. Because the number of pixels covered by the fovea varies with viewing distance, the data are smoothed with a Gaussian convolution filter whose shape and size are determined by the pixels per degree of the display at a given viewing distance.
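The histogram-and-smoothing pipeline described above can be sketched as follows. The thesis tools were written in Matlab; this NumPy version is an illustrative reimplementation, and the function names and parameter values are hypothetical:

```python
import numpy as np

def pixels_per_degree(viewing_distance_cm, screen_width_cm, screen_width_px):
    """Pixels subtended by one degree of visual angle at the given distance."""
    cm_per_degree = 2 * viewing_distance_cm * np.tan(np.radians(0.5))
    return screen_width_px / screen_width_cm * cm_per_degree

def fixation_map(fixations, shape, sigma_px):
    """Accumulate fixation samples into a 2-D histogram (1-pixel bins),
    then smooth it with a separable Gaussian of width sigma_px pixels."""
    hist = np.zeros(shape)
    for x, y in fixations:
        hist[int(y), int(x)] += 1
    # Build a normalized 1-D Gaussian kernel and apply it along each axis.
    radius = int(3 * sigma_px)
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma_px) ** 2)
    k /= k.sum()
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, hist)
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, smoothed)
    return smoothed

# Hypothetical display geometry: 40 cm wide, 1024 px, viewed from 60 cm.
ppd = pixels_per_degree(60, 40, 1024)
fmap = fixation_map([(100, 80), (102, 81)], (240, 320), sigma_px=ppd / 2)
```

Because the kernel is normalized, smoothing preserves the total fixation count; the resulting surface can then be rendered as a heat map or a 3-D peak plot as in Figure 3.1.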

Figure 3.1: The bottom two images show the relative fixation distribution across 13 observers. Red areas in the heat map indicate peak areas of attention; the same data are plotted as 3-D peaks.