My work focuses on sentence comprehension, from the perspectives of cognitive neuroscience and natural language processing (NLP). I have two primary threads of research, serving scientific (cognitive) and engineering (NLP) goals, respectively.

In my cognitive research, I develop computational models to simulate neural response measures such as the N400 and P600, with the aim of fleshing out and testing hypotheses about the mechanisms underlying human sentence comprehension. One theme of this work has been drawing on current models and techniques from NLP and applying them to the investigation of cognitive questions.

In my NLP research, my interest is in improving models' capacity for sentence composition. To this end, I am currently developing a system for analyzing the meaning content captured by the sentence-level vector representations ("embeddings") produced by NLP models. The method is inspired by the decoding / multi-voxel pattern analysis (MVPA) approach to analyzing information encoded in brain activations. It allows us to probe for particular types of meaning information despite the opacity of vector representations, with the goal of using this analysis to improve the quality of sentence encoders.
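To make the decoding idea concrete, here is a minimal sketch of a linear probe in the MVPA spirit, using synthetic data in place of real sentence embeddings. The embedding dimensionality, the binary semantic property, and the injected signal are all illustrative assumptions, not details of my actual system: the point is only that if a simple classifier trained on the vectors decodes a property well above chance, the vectors encode information about that property.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for sentence embeddings: 300-dim vectors.
# One dimension weakly carries a hypothetical binary semantic
# label (e.g., whether a sentence describes an event or a state).
n, dim = 1000, 300
labels = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, dim))
X[:, 0] += 2.0 * labels  # inject a decodable signal (assumption)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0
)

# Linear probe: above-chance held-out accuracy is evidence that
# the representation encodes the probed property.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"probe accuracy: {acc:.2f}")
```

In practice the synthetic matrix would be replaced by embeddings from a real encoder, and the labels by annotations of the meaning property under study; held-out accuracy relative to chance is the quantity of interest.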