Emotion expression is an important aspect of human-to-human communication. Recognizing a person's emotional state can help us understand complex rhetorical devices such as irony, gauge the gravity of a described situation, and infer information that is often not conveyed through the verbal channel. With the growing popularity of integrated human-machine interfaces, automatic emotion detection has great potential to improve the way we interact with machines. Since camera sensors are integrated into almost all devices, emotion recognition based on facial expressions is a viable method for widespread use. Several models performing emotion recognition from sequences of frontal facial images were proposed and implemented in this thesis. Because emotion is a dynamic psychological state, three different types of temporal context information were examined and compared. To ensure usability with real-time streams, a wrapper framework consuming one frame at a time is proposed. Both deep-learning-based and conventional classifiers were implemented. The best-performing model achieved an accuracy of 95.1% on the CK+ dataset.
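The frame-at-a-time wrapper described above could be sketched as follows. This is a minimal illustration, not the thesis implementation: the class name, `push` method, and window length are assumptions; the classifier is any callable that maps a frame sequence to a label.

```python
from collections import deque

class StreamingEmotionRecognizer:
    """Hypothetical wrapper that consumes one frame at a time and
    emits a prediction once enough temporal context is buffered."""

    def __init__(self, classifier, window=8):
        # Fixed-length buffer holding the most recent face frames;
        # deque(maxlen=...) discards the oldest frame automatically.
        self.buffer = deque(maxlen=window)
        self.classifier = classifier  # any sequence classifier

    def push(self, frame):
        """Consume one frame; return a prediction when the buffer is
        full, or None while temporal context is still accumulating."""
        self.buffer.append(frame)
        if len(self.buffer) == self.buffer.maxlen:
            return self.classifier(list(self.buffer))
        return None
```

With a sliding window, every new frame after warm-up yields a fresh prediction over the latest sequence, which is what makes the approach usable on a live camera stream.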

dc.language.iso: CZE
dc.publisher: České vysoké učení technické v Praze. Výpočetní a informační centrum. (cze)
dc.publisher: Czech Technical University in Prague. Computing and Information Centre. (eng)