Probability, programs, and the mind: Building structured Bayesian models of cognition

Noah Goodman, Stanford University

Joshua Tenenbaum, MIT

Abstract

Human thought is remarkably flexible: we can think about
infinitely many different situations despite uncertainty and novelty.
Probabilistic models of cognition (Chater2006) have been successful at explaining
a wide variety of phenomena in learning and reasoning under uncertainty. They have borrowed
tools from statistics and machine learning to explain phenomena from perception
(Yuille2006) to language
(Chater2006a). Traditional symbolic models (e.g. Newell1958, Anderson1998), by
contrast, excel at explaining the productivity of thought, which follows from
compositionality of symbolic representations. Indeed, there has been a gradual
move toward more structured probabilistic models (Tenenbaum2011) that incorporate
aspects of symbolic methods into probabilistic modeling. Unfortunately, this
movement has resulted in a complex "zoo" of Bayesian models. We have recently
introduced the idea that using programs, and particularly probabilistic programs,
as the representational substrate for probabilistic modeling tames this unruly
zoo, fully unifies probabilistic with symbolic approaches, and opens new
possibilities in cognitive modeling. The goal of this tutorial is to introduce
probabilistic models of cognition from the point of view of probabilistic
programming, both as a unifying idea for cognitive modeling and as a practical
tool.
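
The core idea can be sketched concretely. In a probabilistic programming language, a cognitive model is an ordinary program built from random primitives; reasoning is posed as a query that conditions the program's execution on observed data. The following is a minimal sketch in Python, assuming only the standard library; the function names (`flip`, `rejection_query`) and the coin scenario are illustrative, not taken from any particular probabilistic programming language.

```python
import random

def flip(p=0.5):
    # Elementary random primitive: a weighted coin flip.
    return random.random() < p

def model():
    # Generative model: a coin is either fair or a trick coin biased to heads.
    is_trick = flip(0.1)
    weight = 0.9 if is_trick else 0.5
    # Simulate five flips of the coin.
    flips = [flip(weight) for _ in range(5)]
    return is_trick, flips

def rejection_query(model, condition, samples=5000):
    # Condition by rejection sampling: keep only executions
    # in which the observed data actually occurred.
    accepted = []
    while len(accepted) < samples:
        hypothesis, data = model()
        if condition(data):
            accepted.append(hypothesis)
    return accepted

# Query: given that all five flips came up heads, was the coin a trick coin?
posterior = rejection_query(model, lambda flips: all(flips))
estimate = sum(posterior) / len(posterior)
```

Because the model is just a program, richer hypotheses (grammars, plans, other programs) can be substituted for the coin without changing the inference machinery; that compositionality is what lets the approach unify symbolic structure with probabilistic inference.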