Knowledge of the eye-gaze position of a subject may allow machines to interact with humans in a more intuitive and natural fashion. The goal of this thesis is to develop a minimally restrictive eye-gaze tracking system that uses a single camera, is based on a 3D model of the eye, employs multiple glint-producing light sources, and can rapidly re-acquire the eye position after head movements. The implemented system estimates a subject's gaze position on a computer screen solely by tracking features in images of the face and eye. The system estimates the point of gaze independently of head position at an update rate of 15 Hz. The processing time required for a full-sized image frame is 110 ms, which drops to 28 ms once the region of interest, used to limit the image area processed, is locked onto the eye. A delay of three frames before reprocessing the full image was added to avoid losing lock during eye blinks. After determining that the region of interest has lost the eye due to head translation, the system can re-acquire the eye within 110 ms. A novel eye-glass reflection compensation algorithm allows people who wear corrective lenses to use the system. Insensitivity to ambient lighting conditions is achieved through infrared illumination and optical filters. The system was tested with the eye located in six different positions, yielding average accuracies ranging from 0.54° to 1.1° of visual angle. It was also tested on twelve subjects of varying ethnicity, gender, and eye-glass use, with an average error of 0.87° and an average maximum error of 1.85°. The system we have developed is a fully functional eye-gaze tracker that uses a single camera, compensates for reflections off eye-glasses, and handles varying ambient lighting conditions. It may serve as a testbed for further eye-gaze tracking improvements and as a platform for developing eye-gaze-aware applications.
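The region-of-interest locking behavior summarized above (a ~110 ms full-frame search, a ~28 ms path once locked, and a three-frame grace period to ride out blinks) can be sketched as a small state machine. This is an illustrative sketch only, not the thesis implementation; the class and variable names (`RoiTracker`, `BLINK_GRACE_FRAMES`, and the boolean detection inputs) are hypothetical stand-ins for the actual detection pipeline.

```python
# Illustrative sketch (not the thesis implementation) of ROI locking:
# process a small region of interest while the eye is detected there,
# tolerate a few missed frames (eye blinks), then fall back to a
# full-frame search and re-lock when the eye is found again.

BLINK_GRACE_FRAMES = 3  # frames to wait before declaring the eye lost


class RoiTracker:
    def __init__(self):
        self.locked = False   # True while the ROI is locked onto the eye
        self.missed = 0       # consecutive frames with no eye in the ROI

    def step(self, eye_found_in_roi, eye_found_in_full_frame):
        """Advance one frame; return the processing mode used: 'roi' or 'full'."""
        if self.locked:
            if eye_found_in_roi:
                self.missed = 0
                return "roi"              # fast path (~28 ms in the thesis)
            self.missed += 1
            if self.missed <= BLINK_GRACE_FRAMES:
                return "roi"              # assume a blink; keep the lock
            self.locked = False           # lock lost, e.g. head translation
        # Full-frame search (~110 ms in the thesis); re-lock if found.
        if eye_found_in_full_frame:
            self.locked = True
            self.missed = 0
        return "full"
```

A short blink (up to three frames) keeps the ROI lock, while a longer loss of the eye triggers the full-frame re-acquisition path.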