A key goal of our laboratory is to understand the sequence of assembly of low-level circuits of the auditory system, focusing on the brainstem and its activation by the auditory nerve. We have developed a novel whole-head brain slice preparation to pursue this goal. Using this preparation, we have shown that auditory nerve fibers can drive neurons of the medial nucleus of the trapezoid body by E17.5, ~1 day before birth. A main focus of our work is to characterize and explain neuronal and glial structural dynamics during formation of the calyx of Held, the largest nerve terminal in the mammalian brain. We also examine the detailed structure of nerve terminals using electron tomography.

We also measure changes in gene expression profiles in order to identify molecular signaling pathways that mediate changing tissue dynamics during neural circuit formation. We collaborate with Henrique von Gersdorff, Vollum Institute, on structure/function studies of neurotransmission.

The second major focus of our work is to define and understand parallel processing pathways of the auditory brainstem. We begin by reconstructing neurons and glia and their functional connections at nanoscale resolution, using serial block-face scanning electron microscopy (SBEM) to image tissue volumes. Cells are reconstructed volumetrically using manual and semi-automated techniques. Synapses are identified and exported for graph-theoretic analysis, and neurons are transformed into a file format for simulation studies using NEURON. We collaborate with Eddie Fuller and CQ Zhang, WVU, for graph theory; Gianfranco Doretto, WVU, for computer vision and cell classification; Mark Culp, WVU, for multi-view statistical analysis; Mark Ellisman, UCSD, for SBEM imaging; and Paul Manis, UNC Chapel Hill, for neuron modeling.
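The synapse-export step described above amounts to tabulating identified contacts as a weighted directed graph, the form consumed by downstream graph-theoretic analysis. A minimal sketch of that representation is shown below; all cell names, counts, and the helper function are illustrative placeholders, not the laboratory's actual pipeline or data.

```python
# Hypothetical sketch: synapses identified in an SBEM volume, stored as a
# weighted directed adjacency map (presynaptic cell -> postsynaptic cell ->
# number of synaptic contacts). Names and counts are invented for illustration.
from collections import defaultdict

# (presynaptic cell, postsynaptic cell, number of synaptic contacts)
synapses = [
    ("AN_fiber_1", "MNTB_cell_A", 3),
    ("AN_fiber_1", "MNTB_cell_B", 1),
    ("AN_fiber_2", "MNTB_cell_A", 2),
]

adjacency = defaultdict(dict)
for pre, post, count in synapses:
    adjacency[pre][post] = adjacency[pre].get(post, 0) + count

def weighted_in_degree(adj, node):
    """Total synaptic contacts converging onto `node` (a simple
    graph-theoretic measure of convergence)."""
    return sum(targets.get(node, 0) for targets in adj.values())

print(weighted_in_degree(adjacency, "MNTB_cell_A"))  # 5
```

From a table like this, standard graph measures (convergence, divergence, path structure) can be computed directly, or the adjacency map can be handed to a dedicated graph library for richer analysis.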

New Tools for Neuroscience

We are developing technologies to visualize brain structures at cellular and subcellular resolution using immersive virtual reality. Our current technology, syGlass (www.syglass.io), is a scientific data visualization and annotation system that works seamlessly with commercial VR hardware such as the HTC Vive and Oculus Rift. We recently licensed syGlass from West Virginia University and began selling copies of the software. Our goal is to offer these powerful tools at an affordable price so that every laboratory can have its own copy. With syGlass, high-resolution polygon meshes, 4D movies, and volumetric imaging data can be ingested quickly and easily, producing immersive VR renderings that provide new insight into data of all shapes and sizes. The syGlass project began as a collaboration among Dr. Spirou, Dr. Gianfranco Doretto (CSEE faculty), and computer science graduate student Michael Morehead, and led to the formation of the technology start-up company IstoVisio, Inc.