Multimodal Perception Lab

The Multimodal Perception Lab focuses on human-centered sensing and multimodal signal processing methods to observe, measure, and model human behavior. These methods are applied to behavioral training, surveillance, and human-robot interaction (HRI). The focus is mainly on the vision and audio modalities. Probabilistic graphical models and neural networks form the backbone of the underlying formalism.
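As an illustrative sketch only (not the lab's actual pipeline), combining the vision and audio modalities often starts with feature-level fusion: per-frame descriptors from each modality are concatenated and scored by a neural network. The feature dimensions, class count, and random weights below are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features: a 40-dim audio descriptor (e.g. MFCCs)
# and a 128-dim visual embedding. Dimensions chosen for illustration.
audio_feat = rng.standard_normal(40)
visual_feat = rng.standard_normal(128)

def feature_fusion_mlp(audio, visual, hidden=64, n_classes=4, rng=rng):
    """Concatenate modality features and score behavior classes with a tiny MLP.

    Weights are random here; in practice they would be learned from data.
    """
    x = np.concatenate([audio, visual])             # feature-level (early) fusion
    W1 = rng.standard_normal((hidden, x.size)) * 0.1
    W2 = rng.standard_normal((n_classes, hidden)) * 0.1
    h = np.maximum(0.0, W1 @ x)                     # ReLU hidden layer
    logits = W2 @ h
    exp = np.exp(logits - logits.max())             # numerically stable softmax
    return exp / exp.sum()                          # probabilities over classes

probs = feature_fusion_mlp(audio_feat, visual_feat)
```

An alternative design, late (decision-level) fusion, would instead run a separate classifier per modality and merge the resulting class probabilities.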