Research Overview

I am interested in understanding the mechanisms of human learning, memory, and categorization so that we may model how people represent knowledge, with two linked goals: 1) to understand how people learn about and (mis)represent the world, so that 2) we can create machines with intelligence more like our own. Because I am drawn to cross-cutting questions that are often studied separately, my work spans the subareas of categorical perception, semantic models, associative learning, language learning, memory, and decision-making. My dissertation focused on the problem of language acquisition: how do learning and memory mechanisms allow us to bootstrap meaning from the statistics of our language environment? Recently, I have been eagerly applying cognitive psychology knowledge and models to create adaptive educational software, which I am excited to use as a research platform for studying active, self-directed learning--even as it is used to improve literacy and numeracy! Check out an introduction to egoTeach.

Neural Network Structure and Input Injection Topology

A project with Richard Veale (that we spawned in a course taught by Olaf Sporns) investigating the effects of network structure and input topology on the computational power of biologically-inspired spiking neural networks (i.e., reservoirs/liquid state machines).
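The reservoir-computing idea behind this line of work can be sketched in a few lines: a fixed random recurrent network transforms input into high-dimensional transient states, and only a linear readout is trained. Below is a toy rate-based echo state network--not a spiking model, and not the networks from the project; the network size, spectral radius, and delayed-copy benchmark are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy reservoir: fixed random recurrent weights whose
# transient dynamics encode the input; only a linear readout is trained.
N = 200                        # reservoir units (arbitrary choice)
W = rng.normal(0, 1, (N, N))   # recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1
W_in = rng.normal(0, 1, (N, 1))                # input injection weights

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return state history."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# Train a linear readout (ridge regression) to reproduce a delayed copy
# of the input -- a standard memory benchmark for reservoirs.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
delay = 3
y = np.roll(u, delay)
X_tr, y_tr = X[delay:], y[delay:]
W_out = np.linalg.solve(X_tr.T @ X_tr + 1e-6 * np.eye(N), X_tr.T @ y_tr)
pred = X_tr @ W_out
print(np.corrcoef(pred, y_tr)[0, 1])  # high correlation => good memory
```

Varying how `W` and `W_in` are wired--the network structure and input injection topology--changes how well such a readout can be trained, which is the kind of question the project asks of spiking reservoirs.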

Dynamic Games for Memory, Learning, and Categorization

A game paradigm allows us to record moment-to-moment decisions during a trial--including changes of mind. Thus, we can run traditional accuracy and response-time analyses as well as trajectory analyses that better distinguish decision-making models and multiple-process accounts. By varying the speed of the falling objects, we can also adjust the response deadline. Also, games are fun: our participants often thank us for making a fun experiment--and want to know if their score was good!

Information Aggregation: A Bayesian Cue Imputation Model

A project with Jennifer Trueblood and John Kruschke investigating how decision-makers (DMs) combine advice from expert advisers when the DMs know how much unique evidence (e.g., medical tests) each adviser has access to, but not the test results each adviser saw. If advisers agree that some outcome is likely but are drawing on different information, a rational DM will extremify, combining the independent evidence into a stronger rating. If one adviser sees a superset of the test results that another adviser sees, a rational DM should simply match the rating of the better-informed adviser. If advisers have unique knowledge and give different advice, a rational DM will compromise. We offer a Bayesian information aggregation model that produces these normative behaviors, and find that it fits well--at least for good learners in our task! A journal publication is forthcoming.
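The extremify and compromise predictions follow directly from Bayes' rule when advisers' evidence is independent: each rating's evidence beyond the prior combines additively in log-odds space. A toy sketch of that normative calculation--not the published model; the `aggregate` helper and its uniform prior are assumptions for illustration:

```python
import numpy as np

def aggregate(ratings, prior=0.5):
    """Combine probability ratings from advisers with *independent*
    evidence: sum each rating's log-odds evidence beyond the prior.
    (A toy illustration of the normative behavior, not our model.)
    """
    logit = lambda p: np.log(p / (1 - p))
    prior_lo = logit(prior)
    combined = prior_lo + sum(logit(r) - prior_lo for r in ratings)
    return 1 / (1 + np.exp(-combined))   # back to a probability

# Agreement from independent evidence -> extremify beyond either rating
print(aggregate([0.7, 0.7]))   # ~0.845, stronger than 0.7

# Disagreement from independent evidence -> compromise between ratings
print(aggregate([0.7, 0.4]))   # ~0.609, between 0.4 and 0.7

# Superset case: the better-informed adviser's evidence subsumes the
# other's, so the rational answer is just that adviser's own rating.
```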

Cross-Situational Word Learning

Expanding on the question of how word-object pair frequency affects learning, I ran several experiments in which learners completed four blocks of the same cross-situational training and testing. These learning trajectories for pairs of different frequencies span roughly 30 minutes and give models nuanced details to describe. Large individual differences--evident even after the first block of training--are a source of variance we hope to explain using the trajectory data. A publication is forthcoming.
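The cross-situational logic itself can be illustrated with a simple co-occurrence counter: no single trial identifies which word goes with which object, but across trials the correct referent is the only object that appears every time its word does. A toy sketch--the vocabulary size, trial structure, and associative rule here are hypothetical, not the experiment's design:

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical lexicon: 12 word-object pairs the learner must discover.
pairs = {f"word{i}": f"object{i}" for i in range(12)}
words = list(pairs)

# Each trial presents 4 words and their 4 referents, unordered, so the
# pairings within any one trial are ambiguous.
counts = defaultdict(lambda: defaultdict(int))
for _ in range(60):                       # 60 ambiguous training trials
    sample = random.sample(words, 4)
    objects = [pairs[w] for w in sample]
    for w in sample:                      # tally every co-occurrence
        for o in objects:
            counts[w][o] += 1

# A simple associative learner: choose each word's most frequent object.
guesses = {w: max(counts[w], key=counts[w].get) for w in words}
accuracy = sum(guesses[w] == pairs[w] for w in words) / len(words)
print(accuracy)
```

The correct object co-occurs with its word on every trial, while any foil appears only by chance, so the counts separate cleanly with enough trials; manipulating pair frequency changes how quickly that separation emerges.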

What's in a name? Pinning down verbal labels.

Work with Caitlin Fausey, Drew Hendrickson, and Rob Goldstone about learning and transfer effects of verbal labels. A model is being born!