Abstract

A great deal of scientific evidence suggests a close relationship between mood and human cognitive processes in everyday tasks. In this study, we investigated the feasibility of inferring mood from gaze, one of the cognitive processes that can be recorded during interaction with computers. To do so, we designed a feature vector composed of typical gaze patterns and piloted the approach on a dataset we gathered, consisting of 145 samples from 30 people. A supervised machine learning technique was employed to classify and recognize mood. The results of this pilot test suggest that, even at this initial stage, the approach is quite promising and opens further research paths for improvement through multi-modal recognition and information fusion. A multi-modal approach would combine the added information provided by our previously developed camera-based mood extraction approach and/or the information gained from EEG signals. Further analysis of the feature extraction process will be performed to enhance model accuracy by enriching the feature set of each modality.
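The pipeline summarized above (fixed-length gaze-feature vectors fed to a supervised classifier) can be illustrated with a minimal sketch. This is not the authors' implementation; the specific features (mean fixation duration, saccade rate), the mood labels, and the choice of a nearest-centroid classifier are all hypothetical stand-ins for any supervised learning technique:

```python
# Minimal sketch: classifying mood from gaze-feature vectors.
# Feature layout, labels, and classifier choice are hypothetical.
from math import dist

def train_centroids(samples):
    """samples: list of (feature_vector, mood_label) pairs.
    Returns a mood -> centroid mapping (nearest-centroid 'model')."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Assign the mood whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Hypothetical gaze features: [mean fixation duration (s), saccade rate (/s)]
training = [
    ([0.30, 2.1], "positive"), ([0.28, 2.3], "positive"),
    ([0.55, 1.0], "negative"), ([0.60, 0.9], "negative"),
]
model = train_centroids(training)
```

In practice one labeled feature vector would be derived per recorded interaction session, and a held-out split of the 145 samples would be used to estimate recognition accuracy.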

Acknowledgements

The first author would like to thank the members of the Advanced Robotics and Intelligent Systems lab, the Social Networks lab, and the Mobile Robot lab for their tireless help and participation in the experiment. This work was supported by NPRP grants 09-310-1-058 and 7-684-1-127 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
