The basic goal of our research is to investigate how humans learn and
reason, and how intelligent machines might emulate them. In tasks that
arise both in childhood (e.g., perceptual learning and language acquisition)
and in adulthood (e.g., action understanding and analogical inference),
humans often succeed, seemingly paradoxically, in drawing reliable
inferences from inadequate data. The available data are often sparse (very few examples), ambiguous
(multiple possible interpretations), and noisy (low signal-to-noise ratio).
How can an intelligent system cope?

We approach this basic question as it arises in both perception and
higher cognition. Our research is highly interdisciplinary, integrating
theories and methods from psychology, statistics, computer vision, machine
learning, and computational neuroscience. The unified picture emerging
from our work is that the power of human inference depends on two basic
principles. First, people exploit generic priors: tacit general
assumptions about the way the world works that guide learning and inference
from observed data. Second, people can generate and manipulate
structured representations, that is, representations organized around distinct
roles, such as the multiple joints moving with respect to one another
in a perceived action. Our current areas of active study include action
understanding, motion perception, object recognition, causal learning,
and analogical reasoning. Below are a few examples of recent and ongoing
research projects.
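The role of generic priors in inference from sparse data can be illustrated with a toy Bayesian sketch. This is a hypothetical "number game" example borrowed from the Bayesian concept-learning literature, not one of our actual models; the two hypotheses and the size-principle likelihood (examples drawn uniformly from a concept's extension) are assumptions chosen for illustration.

```python
# Toy sketch: which concept generated the observed examples?
# A prior over hypotheses plus the size principle -- likelihood (1/|h|)^n
# for n examples drawn uniformly from hypothesis h -- lets just three
# examples strongly favor the narrower hypothesis.

hypotheses = {
    "even numbers (1-100)": [n for n in range(1, 101) if n % 2 == 0],
    "multiples of 10 (1-100)": [n for n in range(1, 101) if n % 10 == 0],
}
prior = {"even numbers (1-100)": 0.5, "multiples of 10 (1-100)": 0.5}

def posterior(examples):
    """Posterior over hypotheses given the examples, via Bayes' rule."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):            # consistency
            likelihood = (1.0 / len(extension)) ** len(examples)
        else:
            likelihood = 0.0
        scores[name] = prior[name] * likelihood
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

print(posterior([10]))          # one example: a weak preference
print(posterior([10, 30, 60]))  # three examples: a strong preference
```

With a single example the posterior only mildly favors "multiples of 10"; with three consistent examples the size principle makes that hypothesis dominate, mirroring how sparse data can still support confident inference when combined with the right priors.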

Action understanding

The ultimate goal of perception and cognition is to enable effective
interactions with the external environment. Three key questions that
connect action perception to understanding and reasoning are: (1) What
information do humans use for different action-related tasks,
such as identifying individual actors or searching a crowd for a person
who is fighting? (2) How do humans efficiently categorize different types
of actions? (3) How do humans perceive and understand interactions between people? We integrate
modern modeling approaches with behavioral experiments to investigate
how humans form categories for different types of actions, and how knowledge
of one action type can facilitate inferences about other action types
through analogical mapping. We aim to bridge perception and causal reasoning
to investigate how humans understand social activities by perceiving and reasoning
about actions involving interacting agents.

Motion perception

We study motion perception for cases ranging from simple translational
motion, to complex radial/circular motion, to the sophisticated biological
motion on which human actions are based. Integrating psychophysical
experiments with computational models helps us understand (1)
how humans represent different motion patterns as structural complexity
increases; (2) the limitations of human motion processing; (3) the strategies
the human visual system employs to complete different tasks (e.g., motion
segmentation, motion grouping, and tracking over time).

Object recognition

What are the salient and invariant features used by the human visual
system for object recognition? We address this question with synthesized
images: generative models produce images that are visually
realistic yet have well-controlled stimulus properties.
We use these stimuli to study how human recognition performance is affected
by adding or deleting feature sets in the image.

Causal learning

We formulate causal learning within a Bayesian framework to differentiate
two fundamental questions: (1) What likelihood functions do humans use in causal
inference? (2) What prior knowledge do humans assume? A successful inference
model incorporating a theory of both likelihoods and priors should provide
coherent answers to a variety of causal queries and experimental designs. Our
research aims to develop Bayesian models for a range of experiments on causal
learning, and to assess the validity of computational models by comparing their
predictions with human performance.
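The structure-comparison idea behind such models can be sketched with a toy example. This is an illustrative sketch only, with made-up contingency counts and a simple grid approximation; it is not one of our published models. It compares two causal structures, one with and one without a link from a candidate cause, under a noisy-OR likelihood and a uniform prior over causal strengths.

```python
import math

# G0: the effect depends only on a background cause (strength w0).
# G1: a candidate cause also raises the effect's probability via a
#     noisy-OR combination of strengths w0 and w1.

# Hypothetical contingency counts: key = (cause present?, effect present?)
data = {(1, 1): 8, (1, 0): 2, (0, 1): 2, (0, 0): 8}

def likelihood(p_c1, p_c0):
    """Probability of the counts given P(effect|cause) and P(effect|no cause)."""
    return (p_c1 ** data[(1, 1)] * (1 - p_c1) ** data[(1, 0)]
            * p_c0 ** data[(0, 1)] * (1 - p_c0) ** data[(0, 0)])

grid = [i / 20 for i in range(21)]  # uniform grid over strengths in [0, 1]

# Marginal likelihood of G0: average over the single parameter w0.
marg_G0 = sum(likelihood(w0, w0) for w0 in grid) / len(grid)

# Marginal likelihood of G1: average over w0 and w1, combined by noisy-OR.
marg_G1 = sum(likelihood(w0 + w1 - w0 * w1, w0)
              for w0 in grid for w1 in grid) / len(grid) ** 2

causal_support = math.log(marg_G1 / marg_G0)
print(f"causal support = {causal_support:.2f}")
```

Because the effect occurs far more often when the cause is present (8 of 10 trials) than when it is absent (2 of 10), the marginal likelihood favors G1 and the support is positive; changing the likelihood function or the prior over strengths changes such predictions, which is what lets behavioral data discriminate among models.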

Analogical reasoning

We study analogical reasoning from a computational perspective. Specific
research questions include (1) What is a normative model for analogical
reasoning, and how does it compare to actual human performance? (2) How
are relational representations constructed from non-relational inputs,
and how are they matched in analogical reasoning?
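The matching problem in the second question can be made concrete with a toy sketch. This is a hypothetical brute-force structure-mapping illustration, not our actual model: it treats analogical mapping as finding the object correspondence that places the most source relations in register with target relations, using the familiar solar-system/atom analogy.

```python
from itertools import permutations

# Relational representations as (relation, arg1, arg2) tuples.
source = {("attracts", "sun", "planet"), ("more_massive", "sun", "planet")}
target = {("attracts", "nucleus", "electron"),
          ("more_massive", "nucleus", "electron")}

src_objs = ["sun", "planet"]
tgt_objs = ["nucleus", "electron"]

def score(mapping):
    """Count source relations whose translated image appears in the target."""
    translated = {(r, mapping[a], mapping[b]) for (r, a, b) in source}
    return len(translated & target)

# Exhaustively search one-to-one object correspondences.
best = max((dict(zip(src_objs, p)) for p in permutations(tgt_objs)),
           key=score)
print(best)  # maps sun -> nucleus, planet -> electron
```

Even this toy version shows why the structure of the representations, rather than surface features of the objects, determines the preferred mapping; a normative account must additionally explain how such relational structures are constructed from non-relational inputs in the first place.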