Reverse-Engineering Human Visual and Haptic Perceptual Algorithms

Feryal M P Behbahani

Abstract

Intelligent behaviour is fundamentally tied to the ability of the brain to make decisions in uncertain and dynamic environments. To accomplish this task successfully, the brain needs to categorise novel stimuli in real time. In neuroscience, the generative framework of Bayesian Decision Theory has emerged as a principled way to predict how the brain should act in the face of uncertainty. We sought to investigate whether the brain also uses generative Bayesian principles to implement its categorisation strategy. To this end, adopting tools from machine learning as a quantitative framework, we developed a novel experimental paradigm that allowed us to directly test this hypothesis in a variety of visual object categorisation tasks. In addition, our results carry new implications for existing models of human category learning and offer an ideal experimental paradigm for neurophysiological and functional imaging investigations of the neural mechanisms underlying object recognition. We also turn to the problem of haptic object recognition, building on the belief that its underlying algorithms should resemble those of vision. Accordingly, we present a Bayesian ideal observer model for human haptic perception and object reconstruction, which simultaneously infers the shape of the object and an estimate of the true hand pose in space from contact-point information on the surface of the hand and noisy hand proprioception. We implement this theory using a recursive Bayesian estimation algorithm, inspired by simultaneous localisation and mapping (SLAM) methods in robotics, which can operate on experimental data from human subjects as well as computer-based physical simulations. Our work enables the study of the haptic perception of complex objects and scenes in the same principled manner that transformed research in the field of vision.
Moreover, in conjunction with tactile-enabled prostheses, our model could allow for online object recognition and pose adaptation for more natural prosthetic control.
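As an illustration of the SLAM-inspired recursive estimation described above, the following is a minimal one-dimensional, linear-Gaussian sketch: a Kalman filter jointly estimates a hand position and two object surface points ("landmarks") from noisy relative contact measurements, so that contact both refines the map of the object and corrects the drifting hand-pose estimate. The dimensionality, noise levels, and variable names are illustrative assumptions, not the model developed in the thesis.

```python
import numpy as np

# Joint state s = [hand position, surface point 1, surface point 2].
# Illustrative 1-D toy, NOT the thesis model: hand pose and object shape
# are estimated together, as in SLAM.
rng = np.random.default_rng(0)

true_s = np.array([0.0, 1.0, 2.0])        # ground truth for simulation
mean = np.array([0.0, 0.0, 0.0])          # hand start known, map unknown
cov = np.diag([0.01, 10.0, 10.0])

Q = np.diag([0.02**2, 0.0, 0.0])          # process noise: only the hand moves
R = 0.05**2                               # contact-measurement noise variance

for step in range(20):
    # Predict: the hand moves by a commanded displacement u (noisy proprioception).
    u = 0.1
    true_s[0] += u + rng.normal(0.0, 0.02)
    mean[0] += u
    cov = cov + Q

    # Update: touch one surface point; measure its position relative to the hand.
    i = 1 + (step % 2)                    # alternate between the two points
    z = (true_s[i] - true_s[0]) + rng.normal(0.0, np.sqrt(R))
    H = np.zeros((1, 3))
    H[0, 0], H[0, i] = -1.0, 1.0          # z = m_i - x + noise
    S = H @ cov @ H.T + R                 # innovation variance
    K = cov @ H.T / S                     # Kalman gain
    mean = mean + (K * (z - H @ mean)).ravel()
    cov = (np.eye(3) - K @ H) @ cov

# Relative shape (spacing of the two surface points) is well constrained,
# even though absolute hand pose drifts between contacts.
print(np.round(mean, 2))
```

Because each contact measurement is relative to the hand, repeated touches constrain the object's shape tightly while the absolute hand pose accumulates drift between contacts, which is the characteristic behaviour of SLAM-style estimators.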