By using optogenetics to control neurons in the basal ganglia, researchers achieve effects that last longer than deep brain stimulation
Researchers working in the lab of Carnegie Mellon University neuroscientist Aryn Gittis have identified two groups of neurons that can be turned on and off to alleviate the movement-related symptoms of Parkinson's disease. The activation of these cells in the basal ganglia relieves symptoms for much longer than current therapies, such as deep brain stimulation and pharmaceuticals.
The study, completed in a mouse model of Parkinson's, used optogenetics to better understand the neural circuitry involved in Parkinson's disease, and could provide the basis for new experimental treatment protocols. The findings, published by researchers from Carnegie Mellon, the University of Pittsburgh and the joint CMU/Pitt Center for the Neural Basis of Cognition (CNBC), are available as an Advance Online Publication on Nature Neuroscience's website.
Parkinson's disease is caused when the dopamine neurons that feed into the brain's basal ganglia die and cause the basal ganglia to stop working, preventing the body from initiating voluntary movement. The basal ganglia is the main clinical target for treating Parkinson's disease, but currently used therapies do not offer long-term solutions.
"A major limitation of Parkinson's disease treatments is that they provide transient relief of symptoms. Symptoms can return rapidly if a drug dose is missed or if deep brain stimulation is discontinued," said Gittis, assistant professor of biological sciences in the Mellon College of Science and member of Carnegie Mellon's BrainHub neuroscience initiative and the CNBC. "There is no existing therapeutic strategy for long-lasting relief of movement disorders associated with Parkinson's."
To better understand how the neurons in the basal ganglia behave in Parkinson's, Gittis and colleagues looked at the inner circuitry of the basal ganglia. They chose to study one of the structures that makes up that region of the brain, a nucleus called the external globus pallidus (GPe). The GPe is known to contribute to suppressing motor pathways in the basal ganglia, but little is known about the individual types of neurons present in the GPe, their role in Parkinson's disease or their therapeutic potential.
The research group used optogenetics, a technique that turns genetically tagged cells on and off with light. They targeted two cell types in a mouse model for Parkinson's disease: PV-GPe neurons and Lhx6-GPe neurons. They found that by elevating the activity of PV-GPe neurons over the activity of the Lhx6-GPe neurons, they were able to stop aberrant neuronal behavior in the basal ganglia and restore movement in the mouse model for at least four hours, significantly longer than current treatments.
While optogenetics is used only in animal models, Gittis said she believes their findings could create a new, more effective deep brain stimulation protocol.
Co-authors of the study include: Kevin Mastro, University of Pittsburgh Center for Neuroscience; Kevin Zitelli and Amanda Willard, Carnegie Mellon Department of Biological Sciences and CNBC; and Kimberly Leblanc and Alexxai Kravitz, National Institute of Diabetes and Digestive and Kidney Diseases.
The research was funded by the National Institutes of Health (NIH) (NS090745-01, NS093944-01, NS076524), the National Science Foundation (DMS 1516288), the Brain & Behavior Research Foundation (formerly NARSAD), the Parkinson's Disease Foundation and the NIH Intramural Research Program. The authors also acknowledge the support of Carnegie Mellon's Disruptive Health Technology Institute.

A study conducted at Carnegie Mellon University investigated the brain’s neural activity during learned behavior and found that the brain makes mistakes because it applies incorrect inner beliefs, or internal models, about how the world works. The research suggests that when the brain makes a mistake, it actually thinks that it is making the correct decision—its neural signals are consistent with its inner beliefs, but not with what is happening in the real world.
“Our brains are constantly trying to predict how the world works. We do this by building internal models through experience and learning when we interact with the world,” said Steven Chase, an assistant professor in the Department of Biomedical Engineering and the Center for the Neural Basis of Cognition. “However, it has not yet been possible to track how these internal models affect instant-by-instant behavioral decisions.”
The researchers conducted an experiment using a brain-machine interface, a device that allows the brain to control a computer cursor using thought alone. By studying the brain’s activity, the researchers could see how the brain thinks an action should be performed. The researchers report that the majority of errors made were caused by a mismatch between the subjects’ internal models and reality. In addition, they found that internal models realigned to better match reality during the course of learning. “To our knowledge, this is the most detailed representation of a brain’s inner beliefs that has been identified to date,” said Byron Yu, an associate professor in the Department of Electrical and Computer Engineering and the Department of Biomedical Engineering.
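The mismatch idea can be made concrete with a toy linear decoder. This is a minimal sketch, not the study's actual decoder: the matrices `B` and `B_hat`, the closed-form readout, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear BCI: cursor velocity = B @ neural_activity.
n_neurons, n_dims = 10, 2
B = rng.standard_normal((n_dims, n_neurons))     # the device's true mapping
B_hat = B + 0.3 * rng.standard_normal(B.shape)   # subject's internal model

intended = np.array([1.0, 0.0])                  # intended cursor velocity
activity = np.linalg.pinv(B_hat) @ intended      # activity chosen under B_hat
actual = B @ activity                            # what the cursor really does
error = np.linalg.norm(actual - intended)        # mismatch-driven error
# With a perfect internal model (B_hat == B) this error vanishes.
```

The subject plans under their internal model `B_hat`, but the world applies `B`; the gap between `intended` and `actual` is exactly the kind of systematic error the study attributes to inaccurate internal models.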
The results from this study have wide-reaching applications. Notably, the results have the potential to improve the performance and reliability of current brain-machine interfaces that assist paralyzed patients and amputees. On a more fundamental level, the results can inform our understanding of how the brain learns: for example, how we acquire knowledge or develop new skills. Because the study allows for a better understanding of why the brain makes mistakes, the results also can be a powerful tool to improve how we learn to perform new tasks. “For example, a doctor may be trying to learn how to use a new robotic surgical device,” explains Matthew Golub, postdoctoral fellow in the Department of Electrical and Computer Engineering. “If you can take a snapshot of how the doctor thinks the device works, you can identify mismatch in his or her internal model and more efficiently train the doctor to use the device.”
The study, which was published in eLife, was conducted as part of Carnegie Mellon’s BrainHub, a university initiative that focuses on how the structure and activity of the brain give rise to complex behaviors. The team included Golub, Yu, and Chase. Research funding was provided by the National Institute of Child Health and Human Development, the PA Department of Health Research, and the National Science Foundation Integrative Graduate Education and Research Traineeship (IGERT) program.

Carnegie Mellon University is embarking on a five-year, $12 million research effort to reverse-engineer the brain, seeking to unlock the secrets of neural circuitry and the brain’s learning methods. Researchers will use these insights to make computers think more like humans.
The research project, led by Tai Sing Lee, professor in the Computer Science Department and the Center for the Neural Basis of Cognition (CNBC), is funded by the Intelligence Advanced Research Projects Activity (IARPA) through its Machine Intelligence from Cortical Networks (MICrONS) research program. MICrONS is advancing President Barack Obama’s BRAIN Initiative to revolutionize the understanding of the human brain.
“MICrONS is similar in design and scope to the Human Genome Project, which first sequenced and mapped all human genes,” Lee said. “Its impact will likely be long-lasting and promises to be a game changer in neuroscience and artificial intelligence.”
Lee will work with co-principal investigators Sandra Kuhlman, assistant professor of biological sciences at Carnegie Mellon and the CNBC, and Alan Yuille, the Bloomberg Distinguished Professor of Cognitive Science and Computer Science at Johns Hopkins University, to discover the principles and rules the brain’s visual system uses to process information. This deeper understanding could serve as a springboard to revolutionize machine learning algorithms and computer vision.
In particular, the researchers seek to improve the performance of neural networks — computational models for artificial intelligence inspired by the central nervous systems of animals. Interest in “neural nets,” which initially peaked in the 1990s, has recently undergone a resurgence thanks to growing computational power and datasets. Neural nets now are used in a wide variety of applications in which computers can learn to recognize faces, understand speech and handwriting, make decisions for self-driving cars, perform automated trading and detect financial fraud.
“But today’s neural nets use algorithms that were essentially developed in the early 1980s,” Lee said. “Powerful as they are, they still aren’t nearly as efficient or powerful as those used by the human brain. For instance, to learn to recognize an object, a computer might need to be shown thousands of labeled examples and taught in a supervised manner, while a person would require only a handful and might not need supervision.”
Artificial neural nets process information in one direction, from input nodes to output nodes. But the brain likely works in quite a different way. Neurons in the brain are highly interconnected, suggesting possible feedback loops at each processing step. What these connections are doing computationally is a mystery; solving that mystery could enable the design of more capable neural nets.
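The one-directional versus recurrent contrast can be sketched in a few lines of toy code. The random weights and sizes below are purely illustrative, not a model from the MICrONS project.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = rng.standard_normal(n)                 # toy input "stimulus"

# Feedforward net: information flows one way, input nodes to output nodes.
W_ff = rng.standard_normal((n, n)) / np.sqrt(n)
y_ff = np.tanh(W_ff @ x)                   # a single forward pass, done

# Adding a feedback loop: the output is fed back in at every step and the
# activity settles over time, a crude stand-in for cortical recurrence.
W_fb = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
y = np.zeros(n)
for _ in range(100):
    y = np.tanh(W_ff @ x + W_fb @ y)       # input mixed with feedback
```

In the feedforward case the answer is fixed after one pass; with feedback, each processing step can revise the previous one, which is the computational role of the loops the project aims to uncover.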
To better understand these connections, Kuhlman will use a technique called “2-photon calcium imaging microscopy” to record signaling of tens of thousands of individual neurons in mice as they process visual information, an unprecedented feat. In the past, only a single neuron, or tens of neurons, typically have been sampled in an experiment, she noted.
“By incorporating molecular sensors to monitor neural activity in combination with sophisticated optical methods, it is now possible to simultaneously track the neural dynamics of most, if not all, of the neurons within a brain region,” Kuhlman said. “As a result we will produce a massive dataset that will give us a detailed picture of how neurons in one region of the visual cortex behave.”
Meanwhile, the CMU-led team will collaborate with another MICrONS team at the Wyss Institute for Biologically Inspired Engineering, led by George Church, professor of genetics at Harvard Medical School. The Harvard-led team, working with investigators at Cold Spring Harbor Laboratory, MIT and Columbia University, is developing revolutionary techniques to reconstruct the complete circuitry of the neurons recorded at CMU. The resulting database, unprecedented in scale, will be made publicly available to research groups all over the world, along with two other databases contributed by other MICrONS teams.
In this MICrONS project, CMU researchers and their collaborators in other universities will use these massive databases to evaluate a number of computational and learning models as they improve their understanding of the brain’s computational principles and reverse-engineer the data to build better computer algorithms for learning and pattern recognition.
“The hope is that this knowledge will lead to the development of a new generation of machine learning algorithms that will allow AI machines to learn without supervision and from a few examples, which are hallmarks of human intelligence,” Lee said.
“Extracting the brain’s secret algorithms in learning and inference from this massive amount of data to advance machine learning is extremely ambitious and might be the most uncertain part of this project,” said Andrew Moore, dean of CMU’s School of Computer Science. “It’s the equivalent of a moonshot, but CMU is one of the world’s best places to do this, because we have a very strong tradition and community in artificial intelligence. We also have a strong community of theoretical and experimental neuroscientists working with the Center for the Neural Basis of Cognition and the university’s BrainHub initiative.”
The CNBC is a collaborative center between Carnegie Mellon and the University of Pittsburgh. BrainHub is a neuroscience research initiative that brings together the university’s strengths in biology, computer science, psychology, statistics and engineering to foster research on understanding how the structure and activity of the brain give rise to complex behaviors.
In addition to Kuhlman and Yuille, the MICrONS team includes Abhinav Gupta, assistant professor of robotics; Gary Miller, professor of computer science; Rob Kass, professor of statistics and machine learning and interim co-director of the CNBC; Byron Yu, associate professor of electrical and computer engineering and biomedical engineering and the CNBC; and Steve Chase, assistant professor of biomedical engineering and the CNBC. Another member is Ruslan Salakhutdinov, one of the co-creators of the deep belief network, a new model of machine learning that was inspired by recurrent connections in the brain. Salakhutdinov will join CMU as an assistant professor of machine learning in the fall.
Other members of the team include Brent Doiron, associate professor of mathematics at Pitt, and Spencer Smith, assistant professor of neuroscience and neuro-engineering at the University of North Carolina.

Networks of model neurons with balanced recurrent excitation and inhibition capture the irregular and asynchronous spiking activity reported in cortex. While mean-field theories of spatially homogeneous balanced networks are well understood, a mean-field analysis of spatially heterogeneous balanced networks has not been fully developed. We extend the analysis of balanced networks to include a connection probability that depends on the spatial separation between neurons. In the continuum limit, we derive that stable, balanced firing rate solutions require that the spatial spread of external inputs be broader than that of recurrent excitation, which in turn must be broader than or equal to that of recurrent inhibition. Notably, this implies that network models with broad recurrent inhibition are inconsistent with the balanced state. For finite size networks, we investigate the pattern-forming dynamics arising when balanced conditions are not satisfied. Our study highlights the new challenges that balanced networks pose for the spatiotemporal dynamics of complex systems.
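In the simplest, spatially homogeneous mean-field picture, balance means that strong recurrent input cancels the external drive to leading order, so the population rates solve a linear system. Here is a two-population sketch with made-up couplings; the paper's spatially extended analysis, where connection probability depends on distance, is far richer.

```python
import numpy as np

# One excitatory (E) and one inhibitory (I) population with illustrative
# couplings; W and X are made-up numbers, not values from the paper.
W = np.array([[1.0, -2.0],     # E<-E, E<-I
              [1.5, -2.5]])    # I<-E, I<-I
X = np.array([1.0, 0.8])       # external drive to E and I

# Balanced state: recurrent input cancels external drive, W @ r + X = 0.
r = np.linalg.solve(W, -X)     # firing rates in the balanced state
assert (r > 0).all()           # a valid balanced solution has positive rates
net_input = W @ r + X          # ~0: excitation and inhibition cancel
```

Whether such a positive-rate solution exists depends on the couplings, which is the same kind of consistency requirement the paper derives for the spatial spreads of external input, excitation and inhibition.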

This could be the most touchy-feely robotic limb yet. For the first time, brain stimulation has made it possible for a paralysed person to experience the sensation of touch via a bionic hand.
Robert Gaunt at the Center for the Neural Basis of Cognition in Pittsburgh, Pennsylvania, and his team achieved this by implanting electrodes in the brain of Nathan Copeland, a 28-year-old quadriplegic.
These were inserted into the region of the brain that registers touch from the hand, and linked to a robotic hand in the same room via a computer. When this robotic hand was touched, it triggered stimulation of Copeland’s brain. “He feels these sensations coming from his own paralysed hand,” says Gaunt.
When blindfolded, Copeland could correctly tell which of the robotic hand’s fingers Gaunt was touching 80 per cent of the time.
This is the first time someone has had electrodes implanted in their somatosensory cortex, the part of the brain that registers touch. Previous work has focused on the motor cortex instead, enabling paralysed people to make bionic arms move using their thoughts – for example, to drink a cup of coffee.
“The procedure evoked almost natural sensations at some very precise locations in the hand,” says Takafumi Yanagisawa, at Osaka University in Japan, who has used brain-computer interfaces to enable paralysed people to move objects using a prosthetic hand. “The sensory information should be beneficial for patients to manipulate something with the hand more precisely,” he says.
Gaunt now hopes to combine thought-controlled movement with an artificial sense of touch. The ability to feel objects is a vital component of being able to grip and manipulate them smoothly. “If you make the fingertips numb, people become very clumsy,” says Gaunt.
Copeland already has a set of electrodes in his motor cortex that enable him to move objects, so the stage is now set for Gaunt’s team to try linking this to the feeling of touch. But one obstacle is that Copeland did not sense any feeling in his thumb or fingertips – both of which are key for gripping. Gaunt says this is due to the position of the electrode arrays, which they now know were implanted in an area corresponding to the base of Copeland’s fingers, not their tips.
However Gaunt is still hopeful that this new feeling of touch will help Copeland grip objects using prosthetic hands. It’s possible that neurons in his brain that correspond to his fingertips might be able to move or extend to where the electrodes are.

Carnegie Mellon University researchers have developed a new approach to broadly survey learning-related changes in synapse properties. In a study published in the Journal of Neuroscience and featured on the journal's cover, the researchers used machine-learning algorithms to analyze thousands of images from the cerebral cortex. This allowed them to identify synapses from an entire cortical region, revealing unanticipated information about how synaptic properties change during development and learning. The study is one of the largest electron microscopy studies ever carried out, evaluating more subjects and more images than prior researchers have attempted.
As the brain learns and responds to sensory stimuli, its neurons make connections with one another. These connections, called synapses, facilitate neuronal communication, and their anatomic and electrophysiological properties contain information vital to understanding how the brain behaves in health and disease. Researchers use different techniques, including electron microscopy, to identify and analyze synapse properties. While electron microscopy can be a useful tool for reconstructing neural circuits, it is also data and labor intensive. As a result, researchers have only been able to use it to study small, targeted areas of the brain until now.
Studying a large section of the brain using traditional electron microscopy techniques would result in terabytes of unwieldy data, given that the brain has billions of neurons, each with hundreds to thousands of synaptic connections. The new technique developed at Carnegie Mellon simplifies this problem by combining a specialized staining process with machine learning.
"Instead of getting perfect information from a tiny part of the brain, we can now get lower-resolution information from a huge region of the brain," said Alison Barth, professor of biological sciences and interim director of Carnegie Mellon's BrainHub neuroscience initiative. "This could be a great tool to see how disease progresses, or how drug treatments alter or restore synaptic connections."
This research is the latest example of how researchers with Carnegie Mellon's BrainHub research initiative are combining their expertise in biology and computer science to create new tools to advance neuroscience. The technique uses a special chemical preparation that deeply stains the synapses in a sample of brain tissue. When the tissue is imaged using an electron microscope, only the synapses can be seen, creating an image that can be easily classified by a computer program. Researchers then use machine learning algorithms to identify and compare synapse properties across a column of the cerebral cortex.
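A minimal sketch of that idea on a synthetic image, with an illustrative threshold: because only the synapses are stained, a simple segmentation suffices to find them and measure per-synapse properties. The group's actual staining, imaging and machine-learning pipeline is far more sophisticated.

```python
import numpy as np

# Toy stand-in for a stained EM image: noisy background plus a few bright
# "synapse" blobs (synthetic data; the real images differ).
rng = np.random.default_rng(2)
img = rng.normal(0.1, 0.05, size=(64, 64))
for cy, cx in [(10, 12), (30, 40), (50, 20)]:
    img[cy - 2:cy + 2, cx - 2:cx + 2] += 1.0   # deposit a 4x4 bright blob

# Deep staining makes synapses stand out, so a global threshold segments them.
mask = img > 0.5

# Label connected components with a simple flood fill, then measure each
# component's size: the kind of per-synapse property compared across layers.
def connected_components(mask):
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

labels, n_synapses = connected_components(mask)
sizes = [int((labels == k).sum()) for k in range(1, n_synapses + 1)]
```

Run over thousands of such images, counts and size distributions like `sizes` are exactly the sort of summary statistics that reveal coordinated changes in synapse density and size.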
To test the effectiveness of their technique, the researchers, led by Santosh Chandrasekaran, examined how synapses across a complex circuit, composed of hundreds of interconnected neurons, would change with altered somatosensory input. In the past, Barth has used this model to study how neurons behave and synapses form in both learning and development. But traditional techniques only allowed her to look at neurons in a very small area of the neocortex.
"It was like looking for the perfect gift, but only going to one store. We might have been able to find something at that first location, but it was always possible that we might find something else - maybe even something better - at another place," said Barth, who is a member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC). "This new technique allows us to look across all six layers of the neocortex, and to see how synapses across different parts of the circuit change together."
The researchers analyzed close to 25,000 images and 40,000 synapses, far more than they had ever been able to examine using traditional methods. They found that the technique could be used to detect increases in synapse density and size during development and learning. Most notably, they found that synapse properties changed in a coordinated way across the entire region of the neocortex examined.
"Some of the cortical layers we saw were most affected have never been examined systematically before," explains Barth. "We've got a lot of great leads to follow up on."
The researchers are now beginning to use this data to develop new hypotheses about how synapses are organized in the neocortex in response to sensory input.

Ever see something that isn't really there? Could your mind be playing tricks on you? The "tricks" might be your brain reacting to feedback between neurons in different parts of the visual system, according to a study published in The Journal of Neuroscience by Carnegie Mellon University Assistant Professor of Biological Sciences Sandra J. Kuhlman and colleagues.
Understanding this feedback system could provide new insight into the visual system's neuronal circuitry and could have further implications for understanding how the brain interprets and understands sensory stimuli.
Many optical illusions make you see something that's not there. Take the Kanizsa triangle: when you place three Pac-Man-like wedges in the right spot, you see a triangle, even though the edges of the triangle aren't drawn.
"We see with both our brain and our eyes. Your brain is making inferences that allow you to see the triangle. It's connecting the dots between the corners of the wedges," said Kuhlman, who is a member of Carnegie Mellon's BrainHub neuroscience initiative and the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC). "Optical illusions illustrate some of the amazing things our visual system can do."
When we look at an object, information about what we see travels through circuits of neurons beginning in the retina, through the thalamus and into the brain's visual cortex. In the visual cortex, the information gets processed in multiple stages and is ultimately sent to the prefrontal cortex — the area of the brain that makes decisions, including how to respond to a given stimulus.
However, not all information stays on this forward moving path. At the secondary stage of processing in the visual cortex some neurons reverse course and send information back to the first stage of processing. Researchers at Carnegie Mellon wondered if this feedback could change how the neurons in the visual cortex respond to a stimulus and alter the messages being sent to the prefrontal cortex.
While there has been a good deal of research studying how information moves forward through the visual system, less has been done to study the impact of the information that moves backward. To find out if the information traveling from the secondary stage of processing back to the first stage impacted how information is encoded in the visual system, the researchers needed to quantify the magnitude of information that was being sent from the second stage back to the first stage. Using a mouse model, they recorded normal neuronal firing in the first stage of the visual cortex as the mouse looked at moving patterns that represented edges. They then silenced the neurons in the second stage using modified optogenetic technology. This halted the feedback of information from the second stage back to the first stage, and allowed the researchers to determine how much of the neuronal activity in the first stage of visual processing was the result of feedback.
Twenty percent of the neuronal activity in the visual cortex was the result of feedback, a concept Kuhlman calls reciprocal connectivity. This indicates that some of the information coming from the visual cortex is not a direct response to a visual stimulus, but a response to how the stimulus was perceived by higher cortical areas.
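The arithmetic behind such an estimate is simple. The firing rates below are made up for illustration, not the study's measurements.

```python
# Hypothetical mean firing rates (spikes/s) in the first processing stage,
# with the second stage active versus optogenetically silenced.
baseline_rate = 10.0   # feedback intact
silenced_rate = 8.0    # feedback removed

# The activity lost when feedback is silenced, as a fraction of baseline,
# estimates how much of the response is feedback-driven.
feedback_fraction = (baseline_rate - silenced_rate) / baseline_rate
print(f"{feedback_fraction:.0%} of first-stage activity from feedback")
```

With these illustrative numbers the fraction comes out to 20 percent, matching the proportion reported in the study.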
The feedback, she said, might be what causes our brain to complete the undrawn lines in the Kanizsa triangle. But more importantly, it signifies that studying neuronal feedback is important to our understanding of how the brain works to process stimuli.
"This represents a new way to study visual perception and neural computation. If we want to truly understand the visual pathway, and cortical function in general, we have to understand these reciprocal connections," Kuhlman said.