Tedium-reducing editor services frequently encounter, and balk at, incomplete programs: programs with holes and type inconsistencies. This paper starts from type-theoretic first principles to develop a type system for incomplete programs. It then defines a calculus of type-aware structured edit actions, and proves, using the Agda proof assistant, several powerful theorems about this calculus.
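The key idea can be illustrated with a toy sketch (this is not Hazelnut's calculus, just a minimal analogue): an empty hole synthesizes a special "hole type" that is *consistent* with every type, so expressions containing holes still type-check.

```python
# Toy illustration of typed holes (hypothetical mini-language, not Hazelnut):
# the hole type "?" is consistent with every type.

HOLE = "?"  # the unknown ("hole") type

def consistent(t1, t2):
    """Type consistency: the hole type is consistent with anything."""
    return t1 == HOLE or t2 == HOLE or t1 == t2

def synth(expr):
    """Synthesize a type for a tiny expression language:
    number literals, boolean literals, addition, and expression holes."""
    kind = expr[0]
    if kind == "num":
        return "num"
    if kind == "bool":
        return "bool"
    if kind == "hole":   # an empty hole synthesizes the hole type
        return HOLE
    if kind == "plus":   # both operands must be consistent with num
        _, left, right = expr
        if consistent(synth(left), "num") and consistent(synth(right), "num"):
            return "num"
        raise TypeError("operand type inconsistent with num")

# (3 + ?) type-checks because the hole is consistent with num:
print(synth(("plus", ("num", 3), ("hole",))))  # num
# (3 + true) would raise: bool is inconsistent with num.
```

Consistency is deliberately not transitive (num ~ ? and ? ~ bool, but num is not consistent with bool), which is what keeps holes from erasing type errors elsewhere in the program.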

This work lays the foundation for Hazel, the typed lab notebook environment that we are building. Our vision for Hazel is described in the SNAPL 2017 paper below.

Programmable Semantic Fragments: The Design and Implementation of typy

Sometimes, reducing tedium requires defining a new semantic fragment. Consider, for example, adding functional record update to ML, or defining a typed FFI to a different language, like OpenCL.
This paper develops a simple system for modularly defining new semantic fragments like these and shows that this system can be embedded into Python as a library, typy.
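To make the record-update example concrete: functional record update builds a new record that differs from an old one in only the named fields. Plain Python can express the operation at runtime with `dataclasses.replace` (this sketch does not capture typy's actual contribution, which is making such fragments definable with static typechecking):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    label: str

p = Point(1.0, 2.0, "origin-ish")
# Functional update: p is unchanged; q copies the untouched fields.
q = replace(p, y=5.0)
print(p)  # Point(x=1.0, y=2.0, label='origin-ish')
print(q)  # Point(x=1.0, y=5.0, label='origin-ish')
```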

In this paper, we describe and empirically evaluate an editor service that allows library providers to associate code generation UIs with type definitions. Clients invoke these UIs via the type-directed code completion menu (our implementation is in Eclipse).

I used to be a computational neurobiologist. This paper reports some initially surprising correlations in data recorded from excitatory and inhibitory cells in the rodent sensory cortex, and develops an elegant non-linear model that explains these correlations.

A Feedback Information-Theoretic Approach to the Design of Brain-Computer Interfaces

As an undergraduate, I studied and built brain-computer interfaces (BCIs). This paper shows how to design an information-theoretically optimal brain-computer interface, and reports the results of some of our experiments with an EEG-based BCI.

This paper introduces regular string types, which classify strings known statically to be in a given regular language. We implemented this type system as a semantic fragment using the system described in our GPCE 2016 paper.
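A runtime analogue of the idea can be sketched in a few lines (the names here are hypothetical, and the paper's system performs this membership check statically rather than at runtime):

```python
import re

class RegexString:
    """Runtime analogue of a regular string type: accepted values are
    strings in a given regular language. (The paper's type system
    establishes membership statically; this check happens at runtime.)"""

    def __init__(self, pattern):
        self._re = re.compile(pattern)

    def __call__(self, s):
        if self._re.fullmatch(s) is None:
            raise ValueError(f"{s!r} is not in the language {self._re.pattern!r}")
        return s

# Strings known to be well-formed US zip codes:
ZipCode = RegexString(r"\d{5}(-\d{4})?")
print(ZipCode("48109"))  # accepted
# ZipCode("4810A") would raise ValueError.
```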

This paper compares scientific model validation to unit testing, and suggests that scientists might benefit from cyberinfrastructure that generates tables that summarize the extent to which known models fit known datasets. My collaborator, Rick Gerkin, continues to push this idea forward in the domain of neuroscience.
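The proposed tables can be sketched in miniature as follows; the models, datasets, and fit metric below are all hypothetical stand-ins, not anything from the paper:

```python
# Minimal sketch of the proposed cyberinfrastructure: score every model
# against every dataset and summarize the results as a table.

def fit_score(model, dataset):
    """Hypothetical goodness-of-fit: 1 / (1 + mean absolute error)."""
    errors = [abs(model(x) - y) for x, y in dataset]
    return 1.0 / (1.0 + sum(errors) / len(errors))

models = {
    "linear":    lambda x: 2 * x,
    "quadratic": lambda x: x * x,
}
datasets = {
    "doubling": [(1, 2), (2, 4), (3, 6)],
    "squares":  [(1, 1), (2, 4), (3, 9)],
}

# Print the model-vs-dataset summary table.
print("model      " + "  ".join(f"{name:>9}" for name in datasets))
for mname, m in models.items():
    row = "  ".join(f"{fit_score(m, d):9.2f}" for d in datasets.values())
    print(f"{mname:<10} {row}")
```

Each cell summarizes how well one model fits one dataset, so a scientist can see at a glance which models explain which data.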

Editor services often generate multiple possible code suggestions. We are developing a model that uses both the semantic structure of the current program and statistics gathered from a code corpus to assign a statistical likelihood to every valid suggestion.
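In the simplest possible terms (our actual model is more sophisticated, and every name and count below is a made-up placeholder): filter the candidates down to those that are semantically valid at the cursor, then assign each a likelihood proportional to its frequency in a corpus.

```python
from collections import Counter

# Hypothetical corpus statistics: how often each method appears.
corpus_counts = Counter({"length": 120, "substring": 45,
                         "toUpperCase": 30, "hashCode": 200})

# Hypothetical semantic information: each method's return type.
return_types = {"length": "int", "substring": "String",
                "toUpperCase": "String", "hashCode": "int"}

def rank_suggestions(expected_type):
    """Keep only type-valid suggestions, then rank them by
    corpus frequency, normalized to a probability distribution."""
    valid = [m for m, t in return_types.items() if t == expected_type]
    total = sum(corpus_counts[m] for m in valid)
    return sorted(((m, corpus_counts[m] / total) for m in valid),
                  key=lambda pair: -pair[1])

print(rank_suggestions("int"))
# hashCode outranks length because it is more frequent in this corpus.
```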