Speaking in Hands: Winner of Round 1 of the CHALEARN Kinect Gesture Challenge

We catch up with Alfonso Nieto-Castanon, the winner of Round 1 of the CHALEARN Gesture Challenge. This fascinating series of 4 competitions revolves around gesture and sign language recognition using a Microsoft Kinect camera. A must-read for anyone planning to throw their hat in the ring for CHALEARN Round 2.

What was your background prior to entering this challenge?

My background is in computational neuroscience (Ph.D. Cognitive and Neural Systems, Boston University) and engineering (B.S./M.S. Telecommunication Engineering, Universidad de Valladolid). I work freelance as a research consultant, and my latest projects range from the development of functional connectivity MRI software and analysis methods to brain-computer interfaces for speech restoration in subjects with locked-in syndrome.

What made you decide to enter?

The Chalearn dataset and goals were too interesting to pass up. I just had to give it a try.

What preprocessing and supervised learning methods did you use?

I did not implement any supervised learning strategy; instead, I used a combination of ad hoc features extracted from the depth videos (somewhat inspired by neural processing in the visual system) with a Bayesian network model for recognition.
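The interview does not detail the actual features or model, so as a purely illustrative sketch of this general recipe (hand-crafted features from depth frames feeding a probabilistic recognizer), here is a minimal example: a hypothetical coarse motion-energy feature over a grid of depth-frame differences, classified with a Gaussian naive Bayes model standing in for the Bayesian network. None of these function names or design choices come from the winning entry.

```python
# Hedged sketch, NOT the winner's method: hand-crafted depth-video features
# plus a simple probabilistic recognizer, to illustrate the general approach.
import numpy as np

def motion_energy_features(frames, grid=4):
    """Hypothetical feature: per-cell temporal depth-difference energy.

    frames: array-like of shape (T, H, W) depth frames.
    Returns a flat vector of grid*grid motion-energy values.
    """
    frames = np.asarray(frames, dtype=float)
    energy = np.abs(np.diff(frames, axis=0)).sum(axis=0)  # motion per pixel
    h, w = energy.shape
    gh, gw = h // grid, w // grid
    cells = energy[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw)
    return cells.mean(axis=(1, 3)).ravel()

class NaiveBayesRecognizer:
    """Gaussian naive Bayes over feature vectors (a stand-in for a
    full Bayesian network; it assumes feature independence per class)."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes = sorted(set(y))
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-6 for c in self.classes}
        return self

    def predict(self, x):
        x = np.asarray(x)
        def loglik(c):
            return -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                                 + (x - self.mu[c]) ** 2 / self.var[c])
        return max(self.classes, key=loglik)
```

A real system would of course need far richer features (hand tracking, temporal dynamics) and a model of gesture sequences, but the pipeline shape (features, then probabilistic recognition) is the same.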

What was your most important insight into the data?

Thinking of gestures as a form of communication, and realizing that the subjects in those videos were already doing what they thought would work best in order for us to interpret and recognize those gestures correctly. I imagined that a system that would mimic the specificities of the human visual system would be most likely to pick up those helpful cues from the video sequences correctly.

Which tools did you use?

I used Matlab (just the Matlab base set; no specific toolboxes other than the nice set of functions provided by the contest organizers to browse the data and create a sample submission).

What have you taken away from this competition?

I enjoy developing problem-specific algorithms rather than combining off-the-shelf procedures. This contest gave me the chance to do just that while working in one of those (few) areas where humans still outperform machines, and I am curious to see if we can further bridge that gap!