The human voice and eyes are among the most natural sources of input signals for hands-free human-computer interaction. However, the applicability of each modality depends on user preference, context, accuracy, and individual performance. In this work, we aim to neutralize these factors and deliver a true hands-free experience by integrating the two modalities into a complete Web browsing experience. We will demonstrate the prototype at the 2018 ACM Symposium on Eye Tracking Research and Applications (ETRA) in Warsaw, June 2018. A video demonstration of the developed framework is already available. The work is an integral part of the GazeTheWeb and MAMEM projects, which support individuals suffering from loss of voluntary muscular control.