MIT News - Brain and cognitive sciences
http://news.mit.edu/topic/mitbrain-cognitive-rss.xml
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Fri, 16 Mar 2018 00:00:00 -0400
Lending mind, hand, and heart
http://news.mit.edu/2018/student-profile-isabella-pecorari-0316
Senior Isabella Pecorari is building supportive communities at MIT and beyond.
Fri, 16 Mar 2018 00:00:00 -0400
Fatima Husain | MIT News correspondent
http://news.mit.edu/2018/student-profile-isabella-pecorari-0316
<p>MIT senior Isabella Pecorari embarked on a path to medicine at a young age, beginning with a grade-school fascination with biology.</p>
<p>“I could not get enough,” Pecorari says, recalling how she used to stay after her biology class and ask questions from lists she wrote. “It was ridiculous. There was just no stopping me!”</p>
<p>Since the sixth grade, Pecorari has intended to develop her interest in biology to help others in a medical setting. When it came time to attend college, she knew MIT would be a good fit. “I knew that if I was going to be premed, I wanted to have a supportive environment,” Pecorari says. “I wanted to be in a team-building setting where I could work with other students through problem sets rather than struggling alone.”</p>
<p>Pecorari has thrived in that environment and is now on a mission to help grow the support systems for others at MIT. As the brain and cognitive sciences major pursues a career as a physician, she aims to apply what she’s learned while helping to foster mental health and wellness in her community.</p>
<p><strong>Helping others succeed</strong></p>
<p>Pecorari is passionate about mental health education and providing support to fellow students in need. “People think that MIT students have everything,” Pecorari says, “but mental health is not about what you see on the outside — it's more about what's going on inside.”</p>
<p>During her sophomore year, Pecorari joined Peer Ears, a student-run organization that fosters conversations about mental health and provides resources to students facing mental health crises.</p>
<p>“People are often afraid to come out and ask for help because there’s such a stigma around mental health issues,” she says. Now the president of Peer Ears, Pecorari hopes to destigmatize those mental health discussions in undergraduate dorms across campus. Peer Ears representatives are trained extensively by MIT mental health clinicians on how to reach out and respond to students facing mental health crises.</p>
<p>Under Pecorari’s leadership, the organization is creating a booklet for incoming freshmen that presents information about mental health issues common to college students and the resources available to them for help.</p>
<p>“It allows students who might be going through a difficult time to realize that they are not alone in the way they are feeling,” Pecorari says. She hopes the booklet will be complete in time for the incoming class of 2022.</p>
<p>Peer Ears also helps host dorm-wide study breaks where undergraduate students can stop by, talk to representatives, and study within a supportive atmosphere. The organization has also begun a care package program, funded by the MindHandHeart initiative, which provides food-filled care packages to students during spring final exam periods.</p>
<p>“We’d set up a booth in Lobby 10 where people could stop by and make a care package for themselves or a friend,” Pecorari recalls. “It was a big concern of ours that people weren’t always eating when it was a stressful time.”</p>
<p>The program kick-off was a success, and Peer Ears will hold another care package session at the end of this spring semester.</p>
<p>Pecorari also works on the executive board of MIT BrainTrust, a student-run organization in which students are paired with individuals from the greater Boston area who have survived traumatic brain injuries, for meetups throughout the year.</p>
<p>BrainTrust gives survivors “a buddy system — someone they can reach out to and talk to and rely on so they don’t feel alone,” Pecorari says. During her time at BrainTrust, Pecorari has invited physicians from the Boston area to host discussions during meetups, and she has curated panels discussing Alzheimer’s disease for attendees.</p>
<p>Pecorari is also president of the Student-Alumni Association, and she previously served as a panhellenic recruitment counselor. In addition to her classes and extracurricular activities, Pecorari has been a teaching assistant for two courses: CC.5111 (Principles of Chemical Science) and 9.00 (Introduction to Psychological Science).</p>
<p><strong>Researching tomorrow’s remedies</strong></p>
<p>Pecorari’s curiosity about biology led her to begin formal research when she was a high school student, working at the Hospital for Special Surgery in New York City.</p>
<p>“The P.I. of the lab was at first very hesitant to bring me on board because usually you need to be at least 18 years old,” she says. “But I was persistent and I really just wanted to have the opportunity to get involved in research and see what it was all about.”</p>
<p>Pecorari joined the lab at 16 and was assigned tasks such as data analysis and creating presentations. When she turned 18, she began culturing cells and learning about the medicines prescribed to patients at the hospital.</p>
<p>These experiences primed her for the research-rich environment at MIT. During her sophomore year, Pecorari joined the lab of Institute Professor Ann Graybiel to learn how to develop microelectrodes that detect dopamine levels in animal brains. Pecorari liked the application of the work: It could be used to develop treatments for and understand the mechanisms behind Parkinson’s disease.</p>
<p>Currently, Pecorari works in the lab of Poitras Chair Professor of Neuroscience Guoping Feng to develop animal models for Huntington’s disease that can be used to test possible treatments.</p>
<p>“It can often take months or even years to produce results you want in research,” Pecorari says. “I’m really appreciative of the opportunity to understand what goes on behind the scenes and to know that the work I’m doing today, no matter how small a part, can possibly help someone in the future. That’s what really drives me.”</p>
<p><strong>Determination, strength, and looking forward</strong></p>
<p>Pecorari is also an avid equestrian and has been riding since her childhood. When she was 11, she began to train a 3-year-old horse. The training took intense patience and determination; when Pecorari began, the horse wasn’t even used to wearing a bridle.</p>
<p>“It took a full two years before I could get a ride on him,” Pecorari recalls. She began to successfully compete with the horse in local and national competitions — a testament to her hard work.</p>
<p>During her sophomore year of college, Pecorari decided to train another horse.</p>
<p>“I knew that medical school was in the future and had the sense that [training a horse] takes up so much time and commitment. I probably wouldn’t have the opportunity to do this at a later date,” Pecorari says. “So I just went for it.”</p>
<p>At the beginning of the summer before her junior year, Pecorari was thrown from the horse. “He threw me from his back, trampled me, and broke my back in five places,” Pecorari says, “so that summer did not turn out as expected.”</p>
<p>During the intensive healing process, Pecorari experienced the role of a patient, which gave her a new view on medicine.</p>
<p>“I gained an understanding of what it was like to go through something that’s really scary, uncertain, and painful,” she says. “Even though this was a really traumatic experience, I tried to remain optimistic and to think of the benefits that could come out of it and how I could possibly use my experience to help people.”</p>
<p>Pecorari intends to apply that outlook to the rest of her time at MIT and to her future in medicine, supporting others and fostering community along the way.</p>
MIT senior Isabella Pecorari embarked on a path to medicine at a young age, beginning with a grade-school fascination with biology.
Image: Jake Belcher
Profile, Students, Undergraduate, School of Science, Brain and cognitive sciences, Health, Mental health, Medicine, Women, Women in STEM, Student life, Volunteering, outreach, public service, Community
Study finds early signatures of the social brain
http://news.mit.edu/2018/study-finds-early-signatures-social-brain-0312
Children as young as 3 have brain network devoted to interpreting thoughts of other people.
Mon, 12 Mar 2018 05:59:59 -0400
Anne Trafton | MIT News Office
http://news.mit.edu/2018/study-finds-early-signatures-social-brain-0312
<p>Humans use an ability known as theory of mind every time they make inferences about someone else’s mental state — what the other person believes, what they want, or why they are feeling happy, angry, or scared.</p>
<p>Behavioral studies have suggested that children begin succeeding at a key measure of this ability, known as the false-belief task, around age 4. However, a new study from MIT has found that the brain network that controls theory of mind has already formed in children as young as 3.</p>
<p>The MIT study is the first to use functional magnetic resonance imaging (fMRI) to scan the brains of children as young as age 3 as they perform a task requiring theory of mind — in this case, watching a short animated movie involving social interactions between two characters.</p>
<p>“The brain regions involved in theory-of-mind reasoning are behaving like a cohesive network, with similar responses to the movie, by age 3, which is before kids tend to pass explicit false-belief tasks,” says Hilary Richardson, an MIT graduate student and the lead author of the study.</p>
<p>Rebecca Saxe, an MIT professor of brain and cognitive sciences and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the March 12 issue of <em>Nature Communications</em>. Other authors are Indiana University graduate student Grace Lisandrelli and Wellesley College undergraduate Alexa Riobueno-Naylor.</p>
<p><strong>Thinking about others</strong></p>
<p>In 2003, Saxe first showed that theory of mind is seated in a brain region known as the right temporo-parietal junction (TPJ). The TPJ coordinates with other regions, including several parts of the prefrontal cortex, to form a network that is active when people think about the mental states of others.</p>
<p>The most commonly used test of theory of mind is the false-belief test, which probes whether the subject understands that other people may have beliefs that are not true. A classic example is the Sally-Anne test, in which a child is asked where Sally will look for a marble that she believes is in her own basket, but that Anne has moved to a different spot while Sally wasn’t looking. To pass, the subject must reply that Sally will look where she thinks the marble is (in her basket), not where it actually is.</p>
<p>Until now, neuroscientists had assumed that theory-of-mind studies involving fMRI brain scans could only be done with children at least 5 years of age, because the children need to be able to lie still in a scanner for about 20 minutes, listen to a series of stories, and answer questions about them.</p>
<p>Richardson wanted to study children younger than that, so that she could delve into what happens in the brain’s theory-of-mind network before the age of 5. To do that, she and Saxe came up with a new experimental protocol, which calls for scanning children while they watch a short movie that includes simple social interactions between two characters.</p>
<p>The animated movie they chose, called “Partly Cloudy,” has a plot that lends itself well to the experiment. It features Gus, a cloud who produces baby animals, and Peck, a stork whose job is to deliver the babies. Gus and Peck have some tense moments in their friendship because Gus produces baby alligators and porcupines, which are difficult to deliver, while other clouds create kittens and puppies. Peck is attacked by some of the fierce baby animals, and he isn’t sure if he wants to keep working for Gus.</p>
<p>“It has events that make you think about the characters’ mental states and events that make you think about their bodily states,” Richardson says.</p>
<p>The researchers spent about four years gathering data from 122 children ranging in age from 3 to 12 years. They scanned the entire brain, focusing on two distinct networks that have been well-characterized in adults: the theory-of-mind network and another network known as the pain matrix, which is active when thinking about another person’s physical state.</p>
<p>They also scanned 33 adults as they watched the movie so that they could identify scenes that provoke responses in either of those two networks. These scenes were dubbed theory-of-mind events and pain events. Scans of children revealed that even in 3-year-olds, the theory-of-mind and pain networks responded preferentially to the same events that the adult brains did.</p>
<p>“We see early signatures of this theory-of-mind network being wired up, so the theory-of-mind brain regions which we studied in adults are already really highly correlated with one another in 3-year-olds,” Richardson says.</p>
<p>The researchers also found that the responses in 3-year-olds were not as strong as in adults but gradually became stronger in the older children they scanned.</p>
<p><strong>Patterns of development</strong></p>
<p>The findings offer support for an existing hypothesis that says children develop theory of mind even before they can pass explicit false-belief tests, and that it continues to develop as they get older. Theory of mind encompasses many abilities, including more difficult skills such as understanding irony and assigning blame, which tend to develop later.</p>
<p>Another hypothesis is that children undergo a fairly sudden development of theory of mind around the age of 4 or 5, reflected by their success in the false-belief test. The MIT data, which do not show any dramatic changes in brain activity when children begin to succeed at the false-belief test, do not support that theory.</p>
<p>“Scientists have focused really intensely on the changes in children’s theory of mind that happen around age 4, when children get a better understanding of how people can have wrong or biased or misinformed beliefs,” Saxe says. “But really important changes in how we think about other minds happen long before, and long after, this famous landmark. Even 2-year-olds try to figure out why different people like different things — this might be why they get so interested in talking about everybody’s favorite colors. And even 9-year-olds are still learning about irony and negligence. Theory of mind seems to undergo a very long continuous developmental process, both in kids’ behaviors and in their brains.”</p>
<p>Now that the researchers have data on the typical trajectory of theory of mind development, they hope to scan the brains of autistic children to see whether there are any differences in how their theory-of-mind networks develop. Saxe’s lab is also studying children whose first exposure to language was delayed, to test the effects of early language on the development of theory of mind.</p>
<p>The research was funded by the National Science Foundation, the National Institutes of Health, and the David and Lucile Packard Foundation.</p>
MIT graduate student Hilary Richardson helps a child into an MRI scanner for a study of how children's brains develop the ability to think about the mental states of other people.
Image: Trillium Studios
Research, Behavior, Learning, Brain and cognitive sciences, McGovern Institute, School of Science
MIT and Harvard join forces to address early childhood literacy
http://news.mit.edu/2018/mit-and-harvard-join-forces-address-early-childhood-literacy-0306
Reach Every Reader aims to end early literacy crisis.
Tue, 06 Mar 2018 12:00:00 -0500
MIT Open Learning
http://news.mit.edu/2018/mit-and-harvard-join-forces-address-early-childhood-literacy-0306
<p>Today, MIT’s Integrated Learning Initiative (MITili), the Harvard Graduate School of Education (HGSE), and Florida State University (FSU) <a href="https://www.gse.harvard.edu/reach-every-reader">announced a collaboration</a> to ensure that every child learns to read well enough by the end of third grade to make learning more effective later in their education. The partners will pursue this goal through research on how personalized learning and intervention improve early childhood literacy.</p>
<p>Research shows that students who fail to read adequately in first grade have a 90 percent probability of reading poorly in fourth grade, and a 75 percent probability of reading poorly in high school. These findings underscore the need to level the playing field in literacy early in a child’s education.</p>
<p>This new collaboration, called Reach Every Reader, brings MITili, HGSE, and Florida State University researchers together to work on rigorous scientific approaches to personalized learning for literacy; to develop diagnostic tools and interventions that help young children at risk for literacy difficulties <em>before</em> they fail; and to build capacity among educators, caregivers, and policy makers to advance ongoing conversations and instructional strategies around personalized learning. The initiative is supported by a $30 million grant from Priscilla Chan and Mark Zuckerberg, co-founders of the Chan Zuckerberg Initiative.</p>
<p>"For a young child, struggling to read can be a crushing blow with lifelong consequences. Multiply that experience by millions of children, and it's a crisis for our society," says MIT President L. Rafael Reif. "At MIT, we approach the problem as scientists and engineers: by seeking to understand the brain science of how learning happens, and by building innovative technologies and solutions to help. We are delighted to be able to contribute in these ways to the exciting collaboration behind Reach Every Reader."</p>
<p>“We are excited to support the launch of Reach Every Reader, a unique combination of cutting-edge education and neuroscience research to better understand how we can help every kid stay on track to reading on grade level by the end of third grade. I know from my work at The Primary School how important it is to identify learning barriers students face early and provide them with the right supports to succeed,” says Chan, who is also the founder and CEO of The Primary School. "This new program represents the type of bold, innovative thinking that we believe will help build a future for everyone and enable transformative learning experiences.”</p>
<p>“This new collaboration between MITili and HGSE synergizes MIT’s strengths in science and engineering with HGSE’s expertise in the education of children. In addition, working with researchers in the Florida Center for Reading Research and College of Communication and Information at FSU will help us gain expertise in early literacy screening and assessment. We need all this knowledge to improve education, especially for children most vulnerable to falling behind,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor in brain and cognitive sciences, and director of MITili.</p>
<p>Gabrieli and his collaborators are developing a web-based tool for the early identification of reading challenges to help direct children immediately toward personalized interventions. This work builds on Gabrieli’s research on the neural and cognitive development of learning in children, and the ways neuroscience can inform and advance educational outcomes. A key component of Reach Every Reader is to examine the interventions that work for which student, building substantive research in this emerging field. The team will work with school partners to deliver these interventions to kindergarten students in summer programs and, longer term, implement these tools into the school curriculum.</p>
<p>“Nothing is more fundamental to all aspects of education and citizenship than the power to read,” Gabrieli explains. “This collaboration is inspired by the mission of trying to have every child, regardless of circumstance, learn to read well enough by third grade so that every child can read to learn throughout the schooling and workplace years.”</p>
<p>Reach Every Reader is part of MITili’s larger vision to advance multidisciplinary research on the science of learning that will inform and strengthen approaches to preK-12 education. “Science is continuously shedding more light on how we learn, and how we ought to teach,” says Sanjay Sarma, vice president for Open Learning at MIT. “MITili and HGSE are addressing early childhood literacy head-on through this collaboration.”</p>
<p>The initiative is funded by Chan and Zuckerberg, who founded the Chan Zuckerberg Initiative (CZI) together in 2015. The philanthropic organization supports a range of educational research initiatives, focusing on four key milestones: kindergarten readiness, third-grade literacy and math, high school transitions, and postsecondary success.</p>
Research shows that students who fail to read adequately in first grade have a 90 percent probability of reading poorly in fourth grade, and a 75 percent probability of reading poorly in high school.
Office of Open Learning, MITili, Brain and cognitive sciences, School of Science, K-12 education, Education, teaching, academics, Office of Digital Learning, Learning, Language
Study reveals how the brain tracks objects in motion
http://news.mit.edu/2018/study-reveals-how-brain-tracks-objects-motion-0306
Timing and speed are both important for making accurate estimates of how an object will travel.
Mon, 05 Mar 2018 23:59:59 -0500
Anne Trafton | MIT News Office
http://news.mit.edu/2018/study-reveals-how-brain-tracks-objects-motion-0306
<p>Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain’s approach is more complex.</p>
<p>The new findings suggest that in addition to tracking speed, the brain incorporates information about the rhythmic patterns of an object’s movement: for example, how long it takes a ball to complete one bounce. In their new study, the researchers found that people make much more accurate estimates when they have access to information about both the speed of a moving object and the timing of its rhythmic patterns.</p>
<p>“People get really good at this when they have both types of information available,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s like having input from multiple senses. The statistical knowledge that we have about the world we’re interacting with is richer when we use multiple senses.”</p>
<p>Jazayeri is the senior author of the study, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of March 5. The paper’s lead author is MIT graduate student Chia-Jung Chang.</p>
<p><strong>Objects in motion</strong></p>
<p>Much of the information we process about objects moving around us comes from visual tracking of the objects. Our brains can use information about an object’s speed and the distance it has to cover to calculate when it will reach a certain point. Jazayeri, who studies how the brain keeps time, was intrigued by the fact that much of the movement we see also has a rhythmic element, such as the bouncing of a ball.&nbsp;</p>
<p>“It occurred to us to ask, how can it be that the brain doesn’t use this information? It would seem very strange if all this richness of additional temporal structure is not part of the way we evaluate where things are around us and how things are going to happen,” Jazayeri says.</p>
<p>There are many other sensory processing tasks for which the brain uses multiple sources of input. For example, to interpret language, we use both the sound we hear and the movement of the speaker’s lips, if we can see them. When we touch an object, we estimate its size based on both what we see and what we feel with our fingers.</p>
<p>In the case of perceiving object motion, teasing out the role of rhythmic timing, as opposed to speed, can be difficult. “I can ask someone to do a task, but then how do I know if they’re using speed or they’re using time, if both of them are always available?” Jazayeri says.</p>
<p>To overcome that, the researchers devised a task in which they could control how much timing information was available. They measured performance in human volunteers as they performed the task.</p>
<p>During the task, the study participants watched a ball as it moved in a straight line. After traveling some distance, the ball went behind an obstacle, so the participants could no longer see it. They were asked to press a button at the time when they expected the ball to reappear.</p>
<p>Performance varied greatly depending on how much of the ball’s path was visible before it went behind the obstacle. If the participants saw the ball travel a very short distance before disappearing, they did not do well. As the distance before disappearance became longer, they were better able to calculate the ball’s speed, so their performance improved but eventually plateaued.</p>
<p>After that plateau, there was a significant jump in performance when the distance before disappearance grew until it was exactly the same as the width of the obstacle. In that case, when the path seen before disappearance was equal to the path the ball traveled behind the obstacle, the participants improved dramatically, because they knew that the time spent behind the obstacle would be the same as the time it took to reach the obstacle.</p>
<p>When the distance traveled to reach the obstacle became longer than the width of the obstacle, performance dropped again.</p>
<p>“It’s so important to have this extra information available, and when we have it, we use it,” Jazayeri says. “Temporal structure is so important that when you lose it, even at the expense of getting better visual information, people’s performance gets worse.”</p>
<p><strong>Integrating information</strong></p>
<p>The researchers also tested several computer models of how the brain performs this task, and found that the only model that could accurately replicate their experimental results was one in which the brain measures speed and timing in two different areas and then combines them.</p>
<p>Previous studies suggest that the brain performs timing estimates in premotor areas of the cortex, which play a role in planning movement; speed, which usually requires visual input, is calculated in the visual cortex. These inputs are likely combined in parts of the brain responsible for spatial attention and for tracking objects in space, functions associated with the parietal cortex, Jazayeri says.</p>
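A common computational account of this kind of multi-cue combination is Bayesian integration, in which each estimate is weighted by its reliability. The sketch below is purely illustrative — the function name, variable names, and numbers are assumptions for exposition, not code or parameters from the study.

```python
def combine_estimates(t_speed, var_speed, t_timing, var_timing):
    """Combine a speed-based and a timing-based estimate of arrival time,
    weighting each by its reliability (inverse variance).

    Hypothetical illustration of standard Bayesian cue combination;
    not the study's actual model.
    """
    w_speed = 1.0 / var_speed    # reliability of the speed-based estimate
    w_timing = 1.0 / var_timing  # reliability of the timing-based estimate
    t_combined = (w_speed * t_speed + w_timing * t_timing) / (w_speed + w_timing)
    var_combined = 1.0 / (w_speed + w_timing)  # never larger than either input variance
    return t_combined, var_combined

# Example: a noisy speed-based estimate (1.2 s) and a more reliable
# timing-based estimate (1.0 s). The combined estimate leans toward the
# reliable cue and is more precise than either cue alone.
t, var = combine_estimates(t_speed=1.2, var_speed=0.04,
                           t_timing=1.0, var_timing=0.01)
```

Under this weighting scheme, adding a second cue can only reduce the variance of the combined estimate, which is consistent with the observation that performance improves when both speed and timing information are available.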
<p>In future studies, Jazayeri hopes to measure brain activity in animals trained to perform the same task that human subjects did in this study. This could shed further light on where this processing takes place and could also reveal what happens in the brain when it makes incorrect estimates.</p>
<p>The research was funded by the McGovern Institute for Brain Research.</p>
Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain's approach is more complex.
Image: Chelsea Turner/MIT
Research, Brain and cognitive sciences, McGovern Institute, School of Science
Viral tool traces long-term neuron activity
http://news.mit.edu/2018/viral-tool-traces-long-term-neuron-activity-0305
New technique is nontoxic to cells, should allow scientists to study neuron function for months or years instead of weeks.
Mon, 05 Mar 2018 10:59:59 -0500
Anne Trafton | MIT News Office
http://news.mit.edu/2018/viral-tool-traces-long-term-neuron-activity-0305
<p>For the past decade, neuroscientists have been using a modified version of the rabies virus to label neurons and trace the connections between them. Although this technique has proven very useful, it has one major drawback: The virus is toxic to cells and can’t be used for studies longer than about two weeks.</p>
<p>Researchers at MIT and the Allen Institute for Brain Science have now developed a new version of this virus that stops replicating once it infects a cell, allowing it to deliver its genetic cargo without harming the cell. Using this technique, scientists should be able to study the infected neurons for several months, enabling longer-term studies of neuron functions and connections.</p>
<p>“With the first-generation vectors, the virus is replicating like crazy in the infected neurons, and that’s not good for them,” says Ian Wickersham, a principal research scientist at MIT’s McGovern Institute for Brain Research and the senior author of the new study. “With the second generation, infected cells look normal and act normal for at least four months — which was as long as we tracked them — and probably for the lifetime of the animal.”</p>
<p>Soumya Chatterjee of the Allen Institute is the lead author of the paper, which appears in the March 5 issue of <em>Nature Neuroscience</em>.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/rabies-virus-S1.gif" style="width: 595px; height: 298px;" /></p>
<p><span style="font-size:10px;"><em>Using two-photon microscopy, researchers can image fluorescent cells in the brains of live mice. These two images were taken of the same group of neurons in visual cortex at nine days (left) and 22 days (right) following injection of a first-generation rabies viral vector encoding a red fluorescent protein. The vast majority of infected neurons visible at the earlier time point are gone by the later imaging session.</em></span></p>
<p><span style="font-size:10px;"><em><img alt="" src="/sites/mit.edu.newsoffice/files/rabies-virus-S4.gif" style="width: 595px; height: 298px;" /></em></span></p>
<p><span style="font-size:10px;"><em>These two images show the same group of neurons in visual cortex at four weeks (left) and eight weeks (right) following injection of a second-generation rabies viral vector encoding Cre recombinase, which causes cells in these transgenic mice to express a red fluorescent protein. All neurons visible at the earlier time point are still present at the later one.</em></span></p>
<p><strong>Viral tracing</strong></p>
<p>Rabies viruses are well-suited for tracing neural connections because they have evolved to spread from neuron to neuron through junctions known as synapses. The viruses can also spread from the terminals of axons back to the cell body of the same neuron. Neuroscientists can engineer the viruses to carry genes for fluorescent proteins, which are useful for imaging, or for light-sensitive proteins that can be used to manipulate neuron activity.</p>
<p>In 2007, Wickersham demonstrated that a modified version of the rabies virus could be used to trace synapses between only directly connected neurons. Before that, researchers had been using the rabies virus for similar studies, but they were unable to keep it from spreading throughout the entire brain.</p>
<p>By deleting one of the virus’ five genes, which codes for a glycoprotein normally found on the surface of infected cells, Wickersham was able to create a version that can only spread to neurons in direct contact with the initially infected cell. This 2007 modification enabled scientists to perform “monosynaptic tracing,” a technique that allows them to identify connections between the infected neuron and any neuron that provides input to it.</p>
<p>This first generation of the modified rabies virus is also used for a related technique known as retrograde targeting, in which the virus can be injected into a cluster of axon terminals and then travel back to the cell bodies of those axons. This can help researchers discover the location of neurons that send impulses to the site of the virus injection.</p>
<p>Researchers at MIT have <a href="http://news.mit.edu/2015/neurons-assign-good-bad-emotions-0429">used retrograde targeting</a> to identify populations of neurons of the basolateral amygdala that project to either the nucleus accumbens or the central medial amygdala. In that type of study, researchers can deliver optogenetic proteins that allow them to manipulate the activity of each population of cells. By selectively stimulating or shutting off these two separate cell populations, researchers can determine their functions.</p>
<p><strong>Reduced toxicity</strong></p>
<p>To create the second-generation version of this viral tool, Wickersham and his colleagues deleted the gene for the polymerase enzyme, which is necessary for transcribing viral genes. Without this gene, the virus becomes less harmful and infected cells can survive much longer. In the new study, the researchers found that neurons were still functioning normally for up to four months after infection.</p>
<p>“The second-generation virus enters a cell with its own few copies of the polymerase protein and is able to start transcribing its genes, including the transgene that we put into it. But then because it’s not able to make more copies of the polymerase, it doesn’t have this exponential takeover of the cell, and in practice it seems to be totally nontoxic,” Wickersham says.</p>
<p>The lack of polymerase also greatly reduces the expression of whichever gene the researchers engineer into the virus, so they need to employ a little extra genetic trickery to achieve their desired outcome. Instead of having the virus deliver a gene for a fluorescent or optogenetic protein, they engineer it to deliver a gene for an enzyme called Cre recombinase, which can delete target DNA sequences in the host cell’s genome.</p>
<p>This virus can then be used to study neurons in mice whose genomes have been engineered to include a gene that is turned on when the recombinase cuts out a small segment of DNA. Only a small amount of recombinase enzyme is needed to turn on the target gene, which could code for a fluorescent protein or another type of labeling molecule. The second-generation viruses can also work in regular mice if the researchers simultaneously inject another virus carrying a recombinase-activated gene for a fluorescent protein.</p>
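<p>The recombinase-gated readout described above can be pictured with a toy string model. This is purely illustrative: "PROM", "STOP", "RFP", and the loxP spelling here are schematic labels, not real sequences, and real Cre recombination acts on DNA, not strings.</p>

```python
def cre_excise(construct, site="loxP"):
    """Simulate Cre recombination: excise everything between the first
    pair of loxP sites (one site is left behind, as in real excision)."""
    i = construct.find(site)
    j = construct.find(site, i + len(site))
    if i == -1 or j == -1:
        return construct  # no floxed cassette; nothing happens
    return construct[:i] + construct[j:]

def is_expressed(construct, promoter="PROM", gene="RFP", stop="STOP"):
    """The reporter is transcribed only if no stop cassette sits
    between the promoter and the gene."""
    p, g = construct.find(promoter), construct.find(gene)
    return p != -1 and g != -1 and stop not in construct[p:g]

# A floxed-stop reporter: silent until Cre removes the stop cassette.
reporter = "PROM-loxP-STOP-loxP-RFP"
```

<p>The point of the design is that a single excision event permanently switches the gene on, which is why even the small amount of recombinase delivered by the weakly expressing second-generation virus suffices.</p>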
<p>The new paper demonstrates that the second-generation virus works well for retrograde labeling, though not yet for tracing synapses between cells; the researchers have since begun adapting it for monosynaptic tracing as well.</p>
<p>The research was funded by the National Institute of Mental Health, the National Institute on Aging, and the National Eye Institute.</p>
Researchers at MIT and the Allen Institute for Brain Science have developed a new modified version of the rabies virus that stops replicating once it infects a cell, allowing it to deliver its genetic cargo without harming the cell. Courtesy of the researchersResearch, Brain and cognitive sciences, McGovern Institute, School of Science, National Institutes of Health (NIH), NeuroscienceEdward Boyden named inaugural Y. Eva Tan Professor in Neurotechnologyhttp://news.mit.edu/2018/edward-boyden-named-inaugural-y-eva-tan-professor-neurotechnology-0305
Neuroengineering leader appointed to new professorship at the McGovern Institute for Brain Research.
Mon, 05 Mar 2018 09:00:00 -0500Julie Pryor | McGovern Institute for Brain Researchhttp://news.mit.edu/2018/edward-boyden-named-inaugural-y-eva-tan-professor-neurotechnology-0305<p>Edward S. Boyden, a member of MIT’s McGovern Institute for Brain Research and the Media Lab, and an associate professor of brain and cognitive sciences and biological engineering at MIT, has been appointed the inaugural Y. Eva Tan Professor in Neurotechnology. The new professorship has been established at the McGovern Institute by K. Lisa Yang in honor of her daughter Y. Eva Tan.</p>
<p>“We are thrilled Lisa has made a generous investment in neurotechnology and the McGovern Institute by creating this new chair,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has already transformed neuroscience and biomedicine, and this chair will help his team to further develop revolutionary tools that will have a profound impact on research worldwide.”</p>
<p>In 2017, Yang co-founded the <a href="https://mcgovern.mit.edu/TanYangCenter" target="_blank">Hock E. Tan and K. Lisa Yang Center for Autism Research</a> at the McGovern Institute. The center catalyzes interdisciplinary and cutting-edge research into the genetic, biological, and brain bases of autism spectrum disorders. In late 2017, Yang grew the center with the establishment of the endowed J. Douglas Tan Postdoctoral Research Fund, which supports talented postdocs in the lab of Poitras Professor of Neuroscience Guoping Feng.</p>
<p>“I am excited to further expand the Hock E. Tan and K. Lisa Yang Center for Autism Research and to support Ed and his team’s critical work,” says Yang. “Novel technology is the driving force behind much-needed breakthroughs in brain research — not just for individuals with autism, but for those living with all brain disorders. My daughter Eva and I are greatly pleased to recognize Ed’s talent and to contribute toward his future successes.”</p>
<p>Yang’s daughter agrees. “I’m so pleased this professorship will have a significant and lasting impact on MIT’s pioneering work in neurotechnology,” says Tan. “My family and I have always believed that advances in technology are what make all scientific progress possible, and I’m overjoyed that we can help enable amazing discoveries in the Boyden Lab through Ed’s appointment to this chair.”</p>
<p>Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including optogenetics, expansion microscopy, and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.</p>
<p>“I deeply appreciate the honor that comes with being named the first Y. Eva Tan Professor in Neurotechnology,” says Boyden. “This is a tremendous recognition of not only my team’s work, but the groundbreaking impact of the neurotechnology field.”</p>
<p>Boyden joined MIT in 2007 as an assistant professor at the Media Lab, and later was appointed as a joint professor in the departments of Brain and Cognitive Sciences and Biological Engineering and an investigator in the McGovern Institute. In 2011, he was named the Benesse Career Development Professor, and in 2013 he was awarded the AT&amp;T Career Development Professorship. Seven years after arriving at MIT, he was awarded tenure. Boyden earned his BS and MEng from MIT in 1999 and his PhD in Neuroscience from Stanford University in 2005.</p>
Y. Eva Tan (left) and McGovern investigator Edward Boyden (right)Photo: Justin KnightBrain and cognitive sciences, Biological engineering, McGovern Institute, Media Lab, School of Science, School of Engineering, Faculty, Awards, honors and fellowships, Nanoscience and nanotechnology, School of Architecture and Planning, GivingSeeing the brain&#039;s electrical activityhttp://news.mit.edu/2018/seeing-brains-electrical-activity-0226
Fluorescent sensor allows imaging of neurons&#039; electrical communications, without electrodes.Mon, 26 Feb 2018 11:00:00 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/seeing-brains-electrical-activity-0226<p>Neurons in the brain communicate via rapid electrical impulses that allow the brain to coordinate behavior, sensation, thoughts, and emotion. Scientists who want to study this electrical activity usually measure these signals with electrodes inserted into the brain, a task that is notoriously difficult and time-consuming.</p>
<p>MIT researchers have now come up with a completely different approach to measuring electrical activity in the brain, which they believe will prove much easier and more informative. They have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.</p>
<p>“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. “Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other.”</p>
<p>Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar, is the senior author of the study, which appears in the Feb. 26 issue of <em>Nature Chemical Biology</em>. The paper’s lead authors are MIT postdocs Kiryl Piatkevich and Erica Jung.</p>
<p><strong>Imaging voltage</strong></p>
<p>For the past two decades, scientists have sought a way to monitor electrical activity in the brain through imaging instead of recording with electrodes. Finding fluorescent molecules that can be used for this kind of imaging has been difficult; not only do the proteins have to be very sensitive to changes in voltage, they must also respond quickly and be resistant to photobleaching (fading that can be caused by exposure to light).</p>
<p>Boyden and his colleagues came up with a new strategy for finding a molecule that would fulfill everything on this wish list: They built a robot that could screen millions of proteins, generated through a process called directed protein evolution, for the traits they wanted.</p>
<p>“You take a gene, then you make millions and millions of mutant genes, and finally you pick the ones that work the best,” Boyden says. “That’s the way that evolution works in nature, but now we’re doing it in the lab with robots so we can pick out the genes with the properties we want.”</p>
<p>The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2, which was previously engineered by Adam Cohen’s lab at Harvard University. (That work, in turn, was based on the molecule Arch, which <a href="http://news.mit.edu/2010/brain-control-0107">the Boyden lab reported in 2010</a>.) The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness.</p>
<p>The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to one top performer, which they called Archon1.</p>
<p><strong>Mapping the brain</strong></p>
<p>A key feature of Archon1 is that once the gene is delivered into a cell, the Archon1 protein embeds itself into the cell membrane, which is the best place to obtain an accurate measurement of a cell’s voltage.</p>
<p>Using this protein, the researchers were able to measure electrical activity in mouse brain tissue, as well as in brain cells of zebrafish larvae and the worm <em>Caenorhabditis elegans</em>. The latter two organisms are transparent, so it is easy to expose them to light and image the resulting fluorescence. When the cells are exposed to a certain wavelength of reddish-orange light, the protein sensor emits a longer wavelength of red light, and the brightness of the light corresponds to the voltage of that cell at a given moment in time.</p>
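<p>Reading voltage out of such a sensor amounts to converting fluorescence changes into millivolts. A minimal sketch, assuming a linear sensor response; the resting potential and gain here are illustrative per-cell calibration values, not Archon1's measured properties:</p>

```python
def delta_f_over_f(trace, baseline):
    """Fractional change in fluorescence relative to the resting baseline."""
    return [(f - baseline) / baseline for f in trace]

def estimate_voltage(trace, baseline, rest_mv=-70.0, mv_per_dff=1000.0):
    """Map a fluorescence trace to an estimated membrane voltage,
    assuming the sensor brightens linearly with depolarization.
    rest_mv and mv_per_dff must be calibrated for each cell."""
    return [rest_mv + dff * mv_per_dff
            for dff in delta_f_over_f(trace, baseline)]
```

<p>A 10 percent brightening over baseline would then read out as a swing of roughly 100 mV above rest, on the order of an action potential.</p>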
<p>The researchers also showed that Archon1 can be used in conjunction with light-sensitive proteins that are commonly used to silence or stimulate neuron activity — these are known as <a href="https://news.mit.edu/2014/optogenetic-toolkit-goes-multicolor-0209">optogenetic proteins</a> — as long as those proteins respond to colors other than red. In experiments with <em>C. elegans</em>, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.</p>
<p>Cohen, the Harvard professor who developed the predecessor to Archon1, says the new MIT protein brings scientists closer to the goal of imaging millisecond-timescale electrical activity in live brains.</p>
<p>“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement. The Boyden lab developed a very clever high-throughput screening approach to this problem,” says Cohen, who was not involved in this study. “Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”</p>
<p>The researchers are now working on using this technology to measure brain activity in mice as they perform various tasks, which Boyden believes should allow them to map neural circuits and discover how they produce specific behaviors.</p>
<p>“We will be able to watch a neural computation happen,” he says. “Over the next five years or so we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”</p>
<p>The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.</p>
MIT researchers have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.Courtesy of the researchersResearch, Brain and cognitive sciences, Biological engineering, Media Lab, McGovern Institute, School of Science, School of Engineering, School of Architecture and Planning, Koch Institute, National Science Foundation (NSF), National Institutes of Health (NIH)Seeking materials that match the brainhttp://news.mit.edu/2018/faculty-profile-polina-anikeeva-0219
Polina Anikeeva explores ways to make neural probes that are compatible with delicate biological tissues.Sun, 18 Feb 2018 23:59:59 -0500David L. Chandler | MIT News Officehttp://news.mit.edu/2018/faculty-profile-polina-anikeeva-0219<p>Polina Anikeeva was born in Leningrad, USSR, but grew up in St. Petersburg, Russia; the city’s name reverted to its original form after the fall of the Soviet Union. While in school there, she encountered two inspiring scientists who helped propel her toward a career at MIT, where she now develops cutting-edge materials to help researchers probe the mysteries of the brain.</p>
<p>Anikeeva’s parents are both engineers, and she became interested very early on in figuring out how to make things that hadn’t been made before. She has pursued that passion through all her work — as the Class of 1942 Associate Professor in Materials Science and Engineering and in her other activities. She has been an active climber and runner (she <a href="http://news.mit.edu/2017/sanity-of-the-long-distance-runner-polina-anikeeva-0413">ran her third Boston Marathon</a> last year), and as an avid artist she occasionally creates paintings to illustrate her scientific research or help her students visualize scientific concepts.</p>
<p>One of her earliest influences, she says, was Mikhail Georgievich Ivanov, the founder of a small math and science magnet school she attended in St. Petersburg, and her physics teacher there. “He was just a brilliant physics teacher and educated a lot of scientists who are now scattered across the world. He was a scientist himself and had worked in a research lab. But then he realized that his real passion and talent was educating kids,” she recalls.</p>
<p>She met her next significant mentor during her years as an undergraduate at the St. Petersburg State Polytechnic University, when she worked with Tatiana Birshtein, a professor of polymer physics at the Institute of Macromolecular Compounds of the Russian Academy of Sciences. “She was generous and adventurous enough to essentially get me a UROP position in her lab. And I had no idea how prominent she was, but as time went on it became clear that she’s actually one of the pioneers of polymer physics and went on to win the L’Oréal-UNESCO Award for Women in Science the same year as Millie Dresselhaus. ... So I got really lucky to work with someone of that stature very early on.”</p>
<p>From there, Anikeeva spent her senior year at the Swiss Federal Institute of Technology in Zurich and then went on to an internship at Los Alamos National Laboratory in New Mexico, where her work took a turn, introducing her to spectroscopy, nanomaterials, and quantum dot solar cells. As she was trying to decide between graduate programs in physics or chemistry, she met an intern from MIT’s Department of Materials Science and Engineering, and decided to try that as a way of combining those fields. She completed her doctorate at MIT in materials science and engineering, and last year earned tenure as an associate professor in that department.</p>
<p>“It was very clear that I should go to MIT,” she says, “not because the faculty were really smart — faculty are pretty smart everywhere. It was because the students were really inspiring. They were really committed to their research but also had a really broad understanding of what is going on around them, from a research perspective and also from how it builds into the real world. … I wanted to be surrounded by really remarkable people.”</p>
<p>One of those people she met at MIT was to become her partner: a fellow faculty member, Warren Hoburg, who was in the aeronautics and astronautics department. While their relationship was very convenient with both of them working at MIT, she says, it suddenly became more complicated last year, when he <a href="http://news.mit.edu/2017/nasa-selects-three-from-mit-for-astronaut-training-0613">was selected</a> as part of NASA’s 2017 class of new astronauts. Now, “we’ll have to commute between Houston and Boston,” she says.</p>
<p>Soon after earning her PhD working with light-emitting quantum dots, Anikeeva determined that she was not interested in research aimed at making incremental improvements. “What I really wanted to do was not just improve devices that exist. I wanted to build devices that didn’t exist,” she says. She decided that biology was an area where a materials scientist could make significant contributions in developing new devices that could have a direct benefit for humanity.</p>
<p>Her first foray into moving from physics into biology produced an immediate surprise. The first time she felt a mouse brain, she was startled by its pudding-like consistency that was so different from the stiff, brittle materials she was used to handling in her work in optoelectronics. That immediately began a quest that has been a major theme of her research ever since: developing materials that can be used as probes to deliver stimuli deep into the brain and that are flexible enough to match the movements of the surrounding brain tissue without causing damage.</p>
<p>Already, she and her students have developed multipurpose fibers that can deliver electrical, optical, and chemical signals to individual neurons in the brain, while matching the stretchiness and flexibility of brain tissue. They have also developed similar flexible fibers that can be implanted into the spinal cord. These devices can be used for basic research to analyze spinal neural pathways and responses in animals that are awake and active, whereas existing methods with stiff implants require the animals to be anesthetized and immobilized.</p>
<p>Since then, she has extended her research to include ways of stimulating localized brain areas without any invasive contact at all, using magnetic fields to activate nanoparticles injected into specific locations. The system could be used for brain research and potentially for disease treatment, Anikeeva says.</p>
<p>The work is constantly exciting, she says. “There’s really nothing like it, to see a neural interface experiment work, because it’s not like waiting for a gene to be expressed or for a tissue to change in some way. You know when neurons fire — you see it right away. And this is really very addictive. I think all my students, even though they’re all engineers working on materials or devices, they all essentially can’t wait to introduce their tools into the animal model or into the tissue model to see those neurons flash.”</p>
<p>Anikeeva sees a fertile future in this field in which she has already done groundbreaking work. “The nervous system is just really a huge [scientific] problem, and being able to develop tools to understand it and study it I think will be a task sufficient for a lifetime. That’s especially true if we start looking at not just the brain but also interactions between the brain and the peripheral nervous system, because it turns out we are wired to the max. Every single one of our organs is wired, and we have no idea of what that wiring is doing.”</p>
<p>“So I think taken all together,” she says, “there are many, many lifetimes of work in this field where a materials scientist can contribute.”</p>
Polina Anikeeva was born in Leningrad, USSR, but grew up in St. Petersburg, Russia; the city’s name reverted to its original form after the fall of the Soviet Union. While in school there, she encountered two inspiring scientists who helped propel her toward a career at MIT, where she now develops cutting-edge materials to help researchers probe the mysteries of the brain in unprecedented ways.
Image: Bryce VickmarkFaculty, Profile, Materials Science and Engineering, Nanoscience and nanotechnology, Neuroscience, DMSE, Research, Brain and cognitive sciences, School of EngineeringPutting the brain at the center of anesthesiologyhttp://news.mit.edu/2018/mit-aaas-brown-explains-how-statistics-neuroscience-improve-anesthesiology-0216
Emery N. Brown explains how statistics and neuroscience improve anesthesiology at the American Association for the Advancement of Science&#039;s annual meeting.Fri, 16 Feb 2018 13:30:00 -0500David Orenstein | Picower Institute for Learning and Memoryhttp://news.mit.edu/2018/mit-aaas-brown-explains-how-statistics-neuroscience-improve-anesthesiology-0216<p>It’s intuitive that anesthesia operates in the brain, but the standard protocol among anesthesiologists when monitoring and dosing patients during surgery is to rely on indirect signs of arousal such as movement, and changes in heart rate and blood pressure. Through research in brain science and statistical modeling, <a href="http://picower.mit.edu/emery-n-brown" target="_blank">Emery N. Brown</a>, an anesthesiologist at Massachusetts General Hospital and neuroscientist at MIT’s Picower Institute for Learning and Memory, is putting the brain at the center of the field.</p>
<p>His findings allow him to safely give less anesthesia, for example, which can have important benefits for patients.</p>
<p>The key has been to develop a theoretical (i.e. neuroscientific) and analytical (i.e. statistical) understanding of electroencephalogram (EEG) brain wave measurements of patients under general anesthesia. In a presentation at the annual meeting of the <a href="https://aaas.confex.com/aaas/2018/meetingapp.cgi/Paper/21128" target="_blank">American Association for the Advancement of Science</a> in Austin, Texas, on Feb. 16, Brown described how anesthesia’s effects in the brain produce specific patterns of brain waves and how monitoring them via EEG data can lead to better care. He spoke as part of a broader discussion on the use of data analysis in brain research.</p>
<p>“We should use neuroscience and neuroscience paradigms to try to understand what’s happening in the brain under general anesthesia,” said Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in the Institute for Medical Engineering and Science and the Department of Brain and Cognitive Sciences at MIT. “It’s a neurophysiological process that affects the brain and central nervous system, so how can it be that what’s being developed in the neuroscience field is not being brought to bear to the question of the brain under anesthesia?”</p>
<p>In numerous papers over more than a decade, Brown has helped the field understand how different anesthesia drugs, such as propofol, dexmedetomidine, and sevoflurane, interact with various neuronal receptors, affecting circuits in different regions of the brain. Those neurophysiological effects ultimately give rise to a state of unconsciousness — essentially a “reversible coma” — characterized by powerful, low-frequency brain waves that overwhelm the normal rhythms synchronizing various brain functions, including sensory perception, higher cognition, and motor control.</p>
<p>Understanding anesthesia to this degree allows for practical insights. In an October 2016 study in the <a href="http://www.pnas.org/content/113/45/12826/tab-article-info" target="_blank"><em>Proceedings of the National Academy of Sciences</em></a>, for example, Brown and colleague Ken Solt showed how stimulating dopamine-producing neurons in the ventral tegmental area of the brain could wake mice up from general anesthesia. The study suggests a way human patients could be awakened as well, which could lessen side effects, restore normal brain function more rapidly, and help patients move more quickly out of the operating room and into recovery.</p>
<p>In parallel with illuminating the neuroscience of general anesthesia, Brown has developed statistical methods to finely analyze EEG measurements to the point where anesthesiologists can apply that knowledge to patients. Brown has shown, for instance, that EEG readings of level of unconsciousness vary in characteristic ways based on the drug, its dose, and the patient’s age.</p>
<p>“The deciphering of how these drugs are acting in the brain turns out to be an important signal processing question,” Brown said. “The drugs work by producing oscillations, these oscillations are readily visible in the EEG and they change very systematically with drug dose, class, and age.”</p>
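<p>The basic signal-processing step, turning an EEG trace into a spectrogram whose dominant rhythms can be read against drug, dose, and age, can be sketched with a plain sliding-window FFT. This is a far simpler relative of the multitaper methods used in practice, shown only to make the idea concrete:</p>

```python
import numpy as np

def spectrogram(x, fs, win_s=1.0, step_s=0.5):
    """Sliding-window power spectrum: each column is the FFT power of
    one Hann-windowed segment of the signal (sampled at fs Hz)."""
    n, step = int(win_s * fs), int(step_s * fs)
    win = np.hanning(n)
    power = [np.abs(np.fft.rfft(x[i:i + n] * win)) ** 2
             for i in range(0, len(x) - n + 1, step)]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, np.array(power).T  # shape: (n_freqs, n_windows)
```

<p>On a real recording under propofol, for example, such a spectrogram would show power concentrated in slow and alpha bands rather than the broadband activity of the awake brain.</p>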
<p>During the course of every surgery, Brown said, he uses real-time EEG readings to keep a patient adequately dosed without giving too much. In a recent case involving an 81-year-old cancer patient, Brown said he was able to comfortably administer about a third of the nominally required dose. This can be especially important for older patients.</p>
<p>“We already know you don’t have to give older people as much, but it turns out it can be even less,” said Brown, who is one of the founding investigators of MIT’s <a href="http://picower.mit.edu/about/aging-brain-initiative" target="_blank">Aging Brain Initiative</a>.</p>
<p>Older patients are especially susceptible to problematic side effects when they wake up, including delirium and post-operative cognitive dysfunction. Neuroscientifically informed ways to avoid giving too much anesthesia can help prevent such problems, Brown said.</p>
<p>In his latest paper, published last month in the <a href="http://www.pnas.org/content/115/1/E5" target="_blank"><em>Proceedings of the National Academy of Sciences</em></a>, Brown’s group, led by postdoctoral fellow Seong-Eun Kim, presented a powerful new algorithm for analyzing data sets, like EEGs, whose oscillations vary over time. The state-space multitaper (SS-MT) method produces high-resolution, low-noise spectrograms from such data. In the study, he used SS-MT to discern variations in EEGs with different states of consciousness in patients who had received propofol.</p>
<p>“If we can get clearer, cleaner noise-free measures of the spectrogram then we can infer the states even better,” he said.</p>
<p>Brown said he doesn’t want his methods to stay just with him. To help put the brain at the center of practice, Brown has developed training materials and made them freely available at <a href="http://www.anesthesiaeeg.com/" target="_blank">http://www.anesthesiaeeg.com</a>. He is also working to advance these ideas within professional societies.</p>
<p>Ultimately, as more anesthesiologists acquire knowledge and EEG equipment, and equipment makers produce better displays for their use, Brown said, the field can move to a model where doctors have a direct view of the patient’s brain when monitoring and maintaining their consciousness during surgery.</p>
As both a neuroscientist and an anesthesiologist, Emery Brown wants to make the brain the center of the field.Photo: Len RubinsteinSchool of Science, Brain and cognitive sciences, Picower Institute for Learning and Memory, Institute for Medical Engineering and Science, Neuroscience, Special events and guest speakers, FacultyResearchers advance CRISPR-based tool for diagnosing diseasehttp://news.mit.edu/2018/researchers-advance-crispr-based-tool-diagnosing-disease-0215
With SHERLOCK, a strip of paper can now indicate presence of pathogens, tumor DNA, or any genetic signature of interest.Thu, 15 Feb 2018 14:00:00 -0500Broad Institutehttp://news.mit.edu/2018/researchers-advance-crispr-based-tool-diagnosing-disease-0215<p>The team that <a href="https://www.broadinstitute.org/node/22861">first unveiled</a> the rapid, inexpensive, highly sensitive CRISPR-based diagnostic tool called SHERLOCK has greatly enhanced the tool’s power, and has developed a miniature paper test that allows results to be seen with the naked eye — without the need for expensive equipment.</p>
<p>The SHERLOCK team developed a simple paper strip to display test results for a single genetic signature, borrowing from the visual cues common in pregnancy tests. After dipping the paper strip into a processed sample, a line appears, indicating whether the target molecule was detected or not.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/Sherlock_Animated.gif" /></p>
<p>This new feature helps pave the way for field use, such as during an outbreak. The team has also increased the sensitivity of SHERLOCK and added the capacity to accurately quantify the amount of target in a sample and test for multiple targets at once. Altogether, these advancements accelerate SHERLOCK’s ability to quickly and precisely detect genetic signatures — including pathogens and tumor DNA — in samples.</p>
<p>Described today in <em>Science</em>, the innovations build on the team’s earlier version of SHERLOCK (shorthand for Specific High Sensitivity Reporter unLOCKing) and add to a growing field of research that harnesses CRISPR systems for uses beyond gene editing. The work, led by researchers from the Broad Institute of MIT and Harvard and from MIT, has the potential for a transformative effect on research and global public health.</p>
<p>“SHERLOCK provides an inexpensive, easy-to-use, and sensitive diagnostic method for detecting nucleic acid material — and that can mean a virus, tumor DNA, and many other targets,” said senior author <a href="https://www.broadinstitute.org/node/7577">Feng Zhang</a>, a core institute member of the Broad Institute, an investigator at the McGovern Institute, and the James and Patricia Poitras ’63 Professor in Neuroscience and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT. “The SHERLOCK improvements now give us even more diagnostic information and put us closer to a tool that can be deployed in real-world applications.”</p>
<p>The researchers previously showcased SHERLOCK’s utility for a range of applications. In the new study, the team uses SHERLOCK to detect cell-free tumor DNA in blood samples from lung cancer patients and to detect synthetic Zika and dengue virus simultaneously, in addition to other demonstrations.</p>
<p><strong>Clear results on a paper strip</strong></p>
<p>“The new paper readout for SHERLOCK lets you see whether your target was present in the sample, without instrumentation,” said co-first author Jonathan Gootenberg, a Harvard graduate student in Zhang’s lab as well as the lab of Broad core institute member Aviv Regev. “This moves us much closer to a field-ready diagnostic.”</p>
<p>The team envisions a wide range of uses for SHERLOCK, thanks to its versatility in nucleic acid target detection. “The technology demonstrates potential for many health care applications, including diagnosing infections in patients and detecting mutations that confer drug resistance or cause cancer, but it can also be used for industrial and agricultural applications where monitoring steps along the supply chain can reduce waste and improve safety,” added Zhang.</p>
<p>At the core of SHERLOCK’s success is a CRISPR-associated protein called Cas13, which can be programmed to bind to a specific piece of RNA. Cas13’s target can be any genetic sequence, including viral genomes, genes that confer antibiotic resistance in bacteria, or mutations that cause cancer. In certain circumstances, once Cas13 locates and cuts its specified target, the enzyme goes into overdrive, indiscriminately cutting other RNA nearby. To create SHERLOCK, the team harnessed this “off-target” activity and turned it to their advantage, engineering the system to be compatible with both DNA and RNA.</p>
<p>SHERLOCK’s diagnostic potential relies on additional strands of synthetic RNA that are used to create a signal after being cleaved. Cas13 will chop up this RNA after it hits its original target, releasing the signaling molecule, which results in a readout that indicates the presence or absence of the target.</p>
<p><strong>Multiple targets and increased sensitivity</strong></p>
<p>The SHERLOCK platform can now be adapted to test for multiple targets. SHERLOCK initially could only detect one nucleic acid sequence at a time, but now one analysis can give fluorescent signals for up to four different targets at once — meaning less sample is required to run through diagnostic panels. For example, the new version of SHERLOCK can determine in a single reaction whether a sample contains Zika or dengue virus particles, which both cause similar symptoms in patients. The platform uses Cas13 and Cas12a (previously known as Cpf1) enzymes from different species of bacteria to generate the additional signals.</p>
<p>SHERLOCK’s second iteration also uses an additional CRISPR-associated enzyme to amplify its detection signal, making the tool more sensitive than its predecessor. “With the original SHERLOCK, we were detecting a single molecule in a microliter, but now we can achieve 100-fold greater sensitivity,” explained co-first author Omar Abudayyeh, an MIT graduate student in Zhang’s lab at Broad. “That’s especially important for applications like detecting cell-free tumor DNA in blood samples, where the concentration of your target might be extremely low. This next generation of features help make SHERLOCK a more precise system.”</p>
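<p>For readers keeping track of units, the quoted detection limit is straightforward to translate into molar concentration. The short sketch below is our own back-of-envelope calculation, not code or figures from the study: it converts “a single molecule in a microliter” into molarity and applies the reported 100-fold improvement.</p>

```python
# Back-of-envelope unit conversion (illustrative, not from the paper):
# what molar concentration is "a single molecule in a microliter"?
AVOGADRO = 6.022e23  # molecules per mole

def molarity(molecules: float, volume_liters: float) -> float:
    """Concentration in mol/L for a given molecule count and volume."""
    return molecules / AVOGADRO / volume_liters

original = molarity(1, 1e-6)   # ~1.7e-18 M, i.e. roughly 2 attomolar
improved = original / 100      # reported 100-fold greater sensitivity
print(f"original limit ~ {original:.1e} M, improved ~ {improved:.1e} M")
```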
<p>The authors have made their reagents available to the academic community through Addgene, and their software tools can be accessed via the Zhang lab website and GitHub.</p>
<p>This study was supported in part by the National Institutes of Health and the Defense Threat Reduction Agency.</p>
A collection of SHERLOCK paper test strips. (Left) Unused paper strips, (middle) paper tests displaying a negative SHERLOCK readout, and (right) paper tests displaying a positive SHERLOCK readout.
Images: Zhang lab/Broad Institute of MIT and HarvardCRISPR, Genome editing, DNA, RNA, Genetics, Research, Biological engineering, Broad Institute, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, Medicine, Disease, Cancer, Health sciences and technologyStudy: Fragile X syndrome neurons can be restored http://news.mit.edu/2018/mit-whitehead-fragile-x-syndrome-neurons-restored-crispr-0215
Whitehead Institute researchers are using a modified CRISPR/Cas9-guided activation strategy to investigate the most frequent cause of intellectual disability in males.Thu, 15 Feb 2018 12:00:00 -0500Nicole Giese Rura | Whitehead Institutehttp://news.mit.edu/2018/mit-whitehead-fragile-x-syndrome-neurons-restored-crispr-0215<p>Fragile X syndrome is the most frequent cause of intellectual disability in males, affecting one&nbsp;out of every 3,600 boys born. The syndrome can also cause autistic traits, such as social and communication deficits, as well as attention problems&nbsp;and hyperactivity. Currently, there is no cure for this disorder.</p>
<p>Fragile X syndrome is caused by mutations in the&nbsp;FMR1&nbsp;gene on the X chromosome, which prevent the gene’s expression. This absence of the&nbsp;FMR1-encoded protein during brain development has been shown to cause the overexcitability in neurons associated with the syndrome. Now, for the first time, researchers at Whitehead Institute have restored activity to the fragile X syndrome gene in affected neurons using a modified CRISPR/Cas9 system they developed that removes the methylation — the molecular tags that keep the mutant gene shut off — suggesting that this method may prove to be a useful paradigm for targeting diseases caused by abnormal methylation.</p>
<p>Research by the lab of Whitehead Institute for Biomedical Research Founding Member <a href="http://wi.mit.edu/people/faculty/jaenisch">Rudolf Jaenisch</a>, which is described online this week in the journal&nbsp;<em>Cell</em>, is the first direct evidence that removing the methylation from a specific segment within the&nbsp;FMR1&nbsp;locus can reactivate the gene and rescue fragile X syndrome neurons.</p>
<p>The&nbsp;FMR1&nbsp;gene sequence includes a series of three-nucleotide (CGG) repeats, and the length of these repeats determines whether or not a person will develop fragile X syndrome: A normal version of the gene contains anywhere from 5 to 55 CGG repeats; versions with 56 to 200 repeats are considered to be at a higher risk of generating some of the syndrome’s symptoms; and versions with more than 200 repeats will produce fragile X syndrome.</p>
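<p>The repeat-count ranges above amount to a simple classification rule. The sketch below restates those cutoffs as code; it is illustrative only, not a clinical tool, and the function name and category labels are ours.</p>

```python
# Illustrative classifier for the CGG repeat-length ranges described in
# the article (5-55 normal, 56-200 higher risk, >200 fragile X syndrome).
def classify_cgg_repeats(n: int) -> str:
    if n > 200:
        return "fragile X syndrome"
    if n >= 56:
        return "higher risk"
    # Counts below the normal range are treated as normal here for simplicity.
    return "normal"

for repeats in (30, 100, 500):
    print(repeats, "->", classify_cgg_repeats(repeats))
```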
<p>Until now, the mechanism linking the excessive repeats in&nbsp;FMR1&nbsp;to fragile X syndrome was not well-understood. But Shawn Liu, a postdoc&nbsp;in Jaenisch’s lab and first author of the&nbsp;<em>Cell</em>&nbsp;study, and others thought that the methylation blanketing those nucleotide repeats might play an important role in shutting down the gene’s expression.</p>
<p>In order to test this hypothesis, Liu removed the methylation tags from the&nbsp;FMR1&nbsp;repeats using a CRISPR/Cas9-based technique he recently developed with Hao Wu, a postdoc&nbsp;in the Jaenisch lab. This technique can either add or delete methylation tags from specific stretches of DNA. Removal of the tags revived the&nbsp;FMR1&nbsp;gene’s expression to the level of the normal gene.</p>
<p>“These results are quite surprising — this work produced almost a full restoration of wild type expression levels of the&nbsp;FMR1&nbsp;gene,” says Jaenisch, whose primary affiliation is with Whitehead Institute, where his laboratory is located and his research is conducted. He is also a professor of biology at MIT. “Often when scientists test therapeutic interventions, they only achieve partial restoration, so these results are substantial,” he says.</p>
<p>The reactivated&nbsp;FMR1&nbsp;gene rescues neurons derived from fragile X syndrome induced pluripotent stem (iPS) cells, reversing the abnormal electrical activity associated with the syndrome. When rescued neurons were engrafted into the brains of mice, the&nbsp;FMR1&nbsp;gene remained active in the neurons for at least three months, suggesting that the corrected methylation may be sustainable in the animal.</p>
<p>“We showed that this disorder is reversible at the neuron level,” says Liu. “When we removed methylation of CGG repeats in the neurons derived from fragile X syndrome iPS cells, we achieved full activation of&nbsp;FMR1.”</p>
<p>The CRISPR/Cas9-based technique may also prove useful for other diseases caused by abnormal methylation, including facioscapulohumeral muscular dystrophy and imprinting diseases.</p>
<p>“This work validates the approach of targeting the methylation on genes, and it will be a paradigm for scientists to follow this approach for other diseases,” says Jaenisch.</p>
<p>This work was supported by the National Institutes of Health, the Damon Runyon Cancer Foundation, the Rett Syndrome Research Trust, the Brain and Behavior Research Foundation, and the Helen Hay Whitney Foundation. Jaenisch is co-founder of Fate Therapeutics, Fulcrum Therapeutics, and Omega Therapeutics.</p>
New Whitehead Institute research may prove to be a useful paradigm for targeting diseases caused by abnormal methylation.Illustration: Steven Lee/Whitehead InstituteSchool of Science, Biology, Whitehead Institute, Brain and cognitive sciences, CRISPR, Disease, Genetics, National Institutes of Health (NIH), Research, DNABack-and-forth exchanges boost children’s brain response to languagehttp://news.mit.edu/2018/conversation-boost-childrens-brain-response-language-0214
Study finds engaging young children in conversation is more important for brain development than “dumping words” on them.Tue, 13 Feb 2018 23:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/conversation-boost-childrens-brain-response-language-0214<p>A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.</p>
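<p>Spread across the first three years of life, the size of that gap is easier to grasp on a per-day basis. The quick arithmetic below is our own back-of-envelope figure, not a number from the 1995 study.</p>

```python
# Rough arithmetic behind the "30-million-word gap": spread over the
# first three years of life, how many extra words per day is that?
gap_words = 30_000_000
days = 3 * 365
per_day = gap_words / days
print(f"~ {per_day:,.0f} more words heard per day")  # ~ 27,397
```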
<p>MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth conversation is actually more critical to language development than the word gap. In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills that they found among the children. This finding applied to children regardless of parental income or education.</p>
<p>The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.</p>
<p>“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of <em>Psychological Science</em>.</p>
<p>Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.&nbsp;</p>
<p>“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.</p>
<p><strong>Beyond the word gap</strong></p>
<p>Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.</p>
<p>As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to have their children participate in the study were told to have their children wear the recorder for two days, from the time they woke up until they went to bed.</p>
<p>The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.</p>
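<p>To make the three measurements concrete, the sketch below shows one way such counts could be computed from a time-ordered transcript. LENA’s actual algorithms are proprietary and far more sophisticated; the data format, speaker labels, and five-second pause window here are our assumptions, for illustration only.</p>

```python
# Hypothetical sketch of the three measures described above, computed from
# a time-ordered transcript. Speaker labels, the 5-second alternation
# window, and the tuple format are illustrative assumptions, not LENA's.
from typing import List, Tuple

Utterance = Tuple[float, str, int]  # (time_sec, speaker, word_count)

def summarize(transcript: List[Utterance], max_gap: float = 5.0):
    child_words = sum(w for _, s, w in transcript if s == "child")
    adult_words = sum(w for _, s, w in transcript if s == "adult")
    turns = 0
    for (t0, s0, _), (t1, s1, _) in zip(transcript, transcript[1:]):
        # Count a conversational turn when speakers alternate within the window.
        if s0 != s1 and (t1 - t0) <= max_gap:
            turns += 1
    return child_words, adult_words, turns

sample = [(0.0, "adult", 6), (2.0, "child", 3), (4.0, "adult", 5),
          (30.0, "adult", 8), (33.0, "child", 2)]
print(summarize(sample))  # (5, 19, 3)
```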
<p>The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area, when the children listened to stories while inside an fMRI scanner.</p>
<p>These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.</p>
<p>This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”</p>
<p>The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.</p>
<p>While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.</p>
<p>“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.</p>
<p><strong>Taking action</strong></p>
<p>The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children age 4 to 6, this type of turn-taking can also be done with much younger children, by making sounds back and forth or making faces, the researchers say.</p>
<p>“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says. &nbsp;</p>
<p>Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.</p>
<p>“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”</p>
<p>The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.</p>
<p>The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.</p>
MIT cognitive scientists have found that conversation between an adult and a child appears to change the child’s brain.
Research, Language, Learning, Brain and cognitive sciences, McGovern Institute, School of Science, Health sciences and technology, National Institutes of Health (NIH)MIT neuroscientists give &quot;invisible&quot; cells a new lookhttp://news.mit.edu/2018/new-grant-mit-neuroscientists-will-give-invisible-cells-new-look-0212
With a new grant, a Picower Institute team is studying the role of astrocytes, which may partner with neurons to process information in the brain.Mon, 12 Feb 2018 16:05:01 -0500David Orenstein | Picower Institute for Learning and Memoryhttp://news.mit.edu/2018/new-grant-mit-neuroscientists-will-give-invisible-cells-new-look-0212<p>Neurons are the star of the show in brain science, but MIT researchers believe they don’t work alone to process information.</p>
<p>In new research funded by a $1.9 million grant from the National Institutes of Health, a team at MIT’s Picower Institute for Learning and Memory is working&nbsp;to uncover the likely crucial role of a&nbsp;supporting cast member with a stellar-sounding name: the astrocyte. The work could ultimately provide insight into many brain disorders.</p>
<p>Astrocytes are at least as abundant in the brain as neurons, but because they don’t spike&nbsp;with electrical impulses like neurons do, they’ve essentially been “invisible” in studies of how brain circuits process information, says&nbsp;<a href="http://picower.mit.edu/mriganka-sur">Mriganka Sur</a>, the Newton Professor of Neuroscience in the Department of Brain and Cognitive Sciences and director of the Simons Center for the Social Brain at MIT. Astrocytes have instead been appreciated mostly for shuttling various molecules and ions around to keep the brain’s biochemistry balanced and functioning.</p>
<p>While they don’t spike, astrocytes do signal their activity with increases of calcium. A decade ago&nbsp;in&nbsp;<em><a href="http://science.sciencemag.org/content/320/5883/1638" rel="nofollow" target="_blank">Science</a></em>, Sur and colleagues used that insight to discover that astrocyte activity in the visual cortex, the part of the brain that processes vision, matched in lock-step with the activity of neurons in response to visual stimuli. That suggested that astrocytes make a vital contribution to vision processing. In&nbsp;<a href="https://projectreporter.nih.gov/project_info_description.cfm?aid=9457908&amp;icde=0" rel="nofollow" target="_blank">the new study</a>, Sur’s lab will investigate exactly what astrocytes are doing: for instance, how they regulate the formation of neural connections called synapses, how the calcium activity arises, and what difference that activity makes. They’ll look not only during the course of normal vision, but also during the critical period early in&nbsp;life when vision is first developing.</p>
<p>Using sophisticated and precise imaging tools, Sur’s team will monitor astrocyte and neuron activity in the visual cortex as mice see different stimuli. They’ll also use genetic and pharmaceutical tools to manipulate astrocyte activity. A key mechanism that’s likely involved, Sur says, is the way astrocytes deploy a molecule called GLT1 to regulate the level and time course of the neurotransmitter glutamate. Glutamate is vital because it mediates communication between neurons across synapses. By systematically manipulating the GLT1 activity of astrocytes in the visual cortex and measuring the effects, Sur says, the team will be able to determine how astrocytes contribute to the performance and formation of neural circuits.</p>
<p>“Just as neurons have their spiking code, we think there is an astrocyte calcium code that reflects and works in partnership with neurons,” Sur says. “That’s totally underappreciated but very important.”</p>
<p>The results will matter for more than just vision, Sur says. The visual cortex is a perfect model system in which to work, he says, but astrocytes are also believed to be important, if poorly understood, in disorders as wide-ranging as Alzheimer’s disease and developmental disorders such as schizophrenia and autism.</p>
<p>“Astrocytes are emerging as a major player because disorders of brain development have genetic origins,” Sur says. “Genes expressed in astrocytes are emerging as very important risk factors for autism and schizophrenia.”</p>
<p>The new grant from the National Eye Institute (grant number&nbsp;<a href="https://projectreporter.nih.gov/project_info_description.cfm?aid=9457908&amp;icde=0" rel="nofollow" target="_blank">R01EY028219</a>) lasts for four years.</p>
In this image of a mouse visual cortex, astrocytes (stained red) appear about as abundant as neurons (green).Image courtesy of Rodrigo Garcia/Picower InstituteSchool of Science, Brain and cognitive sciences, Picower Institute, Neuroscience, Vision, National Institutes of Health (NIH), ResearchStudy reveals molecular mechanisms of memory formationhttp://news.mit.edu/2018/study-reveals-molecular-mechanisms-memory-formation-0208
Neuroscientists discover a cellular pathway that encodes memories by strengthening specific synapses.Thu, 08 Feb 2018 11:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/study-reveals-molecular-mechanisms-memory-formation-0208<p>MIT neuroscientists have uncovered a cellular pathway that allows specific synapses to become stronger during memory formation. The findings provide the first glimpse of the molecular mechanism by which long-term memories are encoded in a region of the hippocampus called CA3.</p>
<p>The researchers found that a protein called Npas4, previously identified as a master controller of gene expression triggered by neuronal activity, controls the strength of connections between neurons in the CA3 and those in another part of the hippocampus called the dentate gyrus. Without Npas4, long-term memories cannot form.</p>
<p>“Our study identifies an experience-dependent synaptic mechanism for memory encoding in CA3, and provides the first evidence for a molecular pathway that selectively controls it,” says Yingxi Lin, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.</p>
<p>Lin is the senior author of the study, which appears in the Feb. 8 issue of <em>Neuron</em>. The paper’s lead author is McGovern Institute research scientist Feng-Ju (Eddie) Weng.</p>
<p><strong>Synaptic strength</strong></p>
<p>Neuroscientists have long known that the brain encodes memories by altering the strength of synapses, or connections between neurons. This requires interactions of many proteins found in both presynaptic neurons, which send information about an event, and postsynaptic neurons, which receive the information.</p>
<p>Neurons in the CA3 region play a critical role in the formation of contextual memories, which are memories that link an event with the location where it took place, or with other contextual information such as timing or emotions. These neurons receive synaptic inputs from three different pathways, and scientists have hypothesized that one of these inputs, from the dentate gyrus, is critical for encoding new contextual memories. However, the mechanism of how this information is encoded was not known.</p>
<p>In a study <a href="http://news.mit.edu/2011/hippocampus-memory-genes-1222">published in 2011</a>, Lin and colleagues found that Npas4, a gene that is turned on immediately following new experiences, appears to act as a master controller of the program of gene expression required for long-term memory formation. They also found that Npas4 is most active in the CA3 region of the hippocampus during learning. This activity was already known to be required for fast contextual learning, such as occurs during a type of task known as contextual fear conditioning. During the conditioning, mice receive a mild electric shock when they enter and explore a specific chamber. Within minutes, the mice learn to fear the chamber, and the next time they enter it, they freeze.</p>
<p>When the researchers knocked out the Npas4 gene, they found that mice could not remember the fearful event. They also found the same effect when they knocked out the gene just in the CA3 region of the hippocampus. Knocking it out in other parts of the hippocampus, however, had no effect on memory.</p>
<p>In the new study, the researchers explored in further detail how Npas4 exerts its effects. Lin’s lab had previously developed a method that makes it possible to fluorescently label CA3 neurons that are activated during this fear conditioning. Using the same fear conditioning process, the researchers showed that during learning, certain synaptic inputs to CA3 neurons are strengthened, but not others. Furthermore, this strengthening requires Npas4.</p>
<p>The inputs that are selectively strengthened come from another part of the hippocampus called the dentate gyrus. These signals convey information about the location where the fearful experience took place.</p>
<p>Without Npas4, synapses coming from the dentate gyrus to CA3 failed to strengthen, and the mice could not form memories of the event. Further experiments revealed that this strengthening is required specifically for memory encoding, not for retrieving memories already formed. The researchers also found that Npas4 loss did not affect synaptic inputs that CA3 neurons receive from other sources.</p>
<p>Kimberly Raab-Graham, an associate professor of physiology and pharmacology at Wake Forest University School of Medicine, says the researchers used an impressive variety of techniques to unequivocally show that contextual memory formation is tightly controlled by Npas4.</p>
<p>“The major finding of the study is that contextual memory is driven by a single circuit and comes down to a single transcription factor,” says Raab-Graham, who was not involved in the study. “When they knocked out the transcription factor, they removed contextual memory formation, and they could restore it by adding the transcription factor.”</p>
<p><strong>Synapse maintenance</strong></p>
<p>The researchers also identified one of the genes that Npas4 controls to exert this effect on synapse strength. This gene, known as plk2, is involved in shrinking postsynaptic structures. Npas4 turns on plk2, thereby reducing synapse size and strength. This suggests that Npas4 itself does not strengthen synapses, but maintains synapses in a state that allows them to be strengthened when necessary. Without Npas4, synapses become too strong and can no longer be strengthened further to encode memories.</p>
<p>“When you take out Npas4, the synaptic strength is almost saturated,” Lin says. “And then when learning takes place, although the memory-encoding cells can be fluorescently labeled, you no longer see the strengthening of those connections.”</p>
<p>In future work, Lin hopes to study how the circuit connecting the dentate gyrus to CA3 interacts with other pathways required for memory retrieval. “Somehow there’s some crosstalk between different pathways so that once the information is stored, it can be retrieved by the other inputs,” she says.</p>
<p>The research was funded by the National Institutes of Health, the James H. Ferry Fund, and a Swedish Brain Foundation Research Fellowship.</p>
This image shows neurons in the CA3 region of the hippocampus, which is important for memory encoding and retrieval. Nuclei of the CA3 neurons are labeled in green and their dendrites in red, and the smaller specks of green above represent axons projecting to the CA3 neurons from the dentate gyrus.
Image: Lin LabResearch, Brain and cognitive sciences, McGovern Institute, School of Science, National Institutes of Health (NIH), Neuroscience, MemoryDistinctive brain pattern helps habits formhttp://news.mit.edu/2018/distinctive-brain-pattern-helps-habits-form-0208
Study identifies neurons that fire at the beginning and end of a behavior as it becomes a habit.Thu, 08 Feb 2018 11:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/distinctive-brain-pattern-helps-habits-form-0208<p>Our daily lives include hundreds of routine habits. Brushing our teeth, driving to work, or putting away the dishes are just a few of the tasks that our brains have automated to the point that we hardly need to think about them.</p>
<p>Although we may think of each of these routines as a single task, they are usually made up of many smaller actions, such as picking up our toothbrush, squeezing toothpaste onto it, and then lifting the brush to our mouth. This process of grouping behaviors together into a single routine is known as “chunking,” but little is known about how the brain forms these chunks.</p>
<p>MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior. These neurons, located in a brain region highly involved in habit formation, fire at the outset of a learned routine, go quiet while it is carried out, then fire again once the routine has ended.</p>
<p>This task-bracketing appears to be important for initiating a routine and then notifying the brain once it is complete, says Ann Graybiel, an Institute Professor at MIT, a member of the McGovern Institute for Brain Research, and the senior author of the study.</p>
<p>Nuné Martiros, a recent MIT PhD recipient who is now a postdoc at Harvard University, is the lead author of the paper, which appears in the Feb. 8 issue of <em>Current Biology</em>. Alexandra Burgess, a recent MIT graduate and technical associate at the McGovern Institute, is also an author of the paper.</p>
<p><strong>Routine activation</strong></p>
<p>Graybiel has previously shown that a part of the brain called the striatum, which is found in the basal ganglia, plays a major role in habit formation. Several years ago, she and her group found that neuron firing patterns in the striatum change as animals learn a new habit, such as turning to the right or left in a maze upon hearing a certain tone.</p>
<p>When the animal is just starting to learn the maze, these neurons fire continuously throughout the task. However, as the animal becomes better at making the correct turn to receive a reward, the firing becomes clustered at the very beginning of the task and at the very end. Once these patterns form, it becomes extremely difficult to break the habit.</p>
<p>However, these previous studies did not rule out other explanations for the pattern, including the possibility that it might be related to the motor commands required for the maze-running behavior. In the new study, Martiros and Graybiel set out to determine whether this firing pattern could be conclusively linked with the chunking of habitual behavior.</p>
<p>The researchers trained rats to press two levers in a particular sequence, for example, 1-2-2 or 2-1-2. The rats had to figure out what the correct sequence was, and if they did, they received a chocolate milk reward. It took several weeks for them to learn the task, and as they became more accurate, the researchers saw the same beginning-and-end firing patterns develop in the striatum that they had seen in their previous habit studies.</p>
<p>Because each rat learned a different sequence, the researchers could rule out the possibility that the patterns correspond to the motor input required to perform a particular series of movements. This offers strong evidence that the firing pattern corresponds specifically to the initiation and termination of a learned routine, the researchers say.</p>
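<p>A toy numerical sketch can make the task-bracketing idea concrete. The code below is purely illustrative: the linear mixing rule, trial counts, and firing values are invented for the example and are not taken from the study. It shows how simulated firing could shift, over training, from being spread across the whole lever-press sequence to being concentrated on the first and last press.</p>

```python
import numpy as np

def striatal_firing(trial, n_presses=3, n_trials=200):
    """Illustrative only: relative firing per press position as a habit forms.
    Early in training, firing is uniform across the sequence; late in
    training it concentrates at the first and last press (task-bracketing).
    The linear mixing rule and all numbers are invented for this sketch."""
    learning = trial / n_trials                  # 0 (naive) -> 1 (habitual)
    uniform = np.ones(n_presses) / n_presses     # firing spread over the task
    bracket = np.zeros(n_presses)
    bracket[0] = bracket[-1] = 0.5               # beginning-and-end pattern
    return (1 - learning) * uniform + learning * bracket

early = striatal_firing(trial=10)    # roughly uniform across presses
late = striatal_firing(trial=190)    # bracketed: the ends dominate
```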
<p>“I think this more or less proves that the development of bracketing patterns serves to package up a behavior that the brain — and the animals — consider valuable and worth keeping in their repertoire. It really is a high-level signal that helps to release that habit, and we think the end signal says the routine has been done,” Graybiel says.</p>
<p><strong>Distinctive patterns</strong></p>
<p>The researchers also discovered a distinct pattern in a set of inhibitory neurons in the striatum. Activity in these neurons, known as interneurons, displayed a strong inverse relationship with the activity of the excitatory neurons that produce the bracketing pattern.</p>
<p>“The interneurons were activated during the time when the rats were in the middle of performing the learned sequence, and could possibly be preventing the principal neurons from initiating another routine until the current one was finished. The discovery of this opposite activity by the interneurons also gets us one step closer to understanding how brain circuits can actually produce this pattern of activity,” Martiros says.</p>
<p>Graybiel’s lab is now investigating further how the interaction between these two groups of neurons helps to encode habitual behavior in the striatum.</p>
<p>The research was funded by the National Institutes of Health/National Institute of Mental Health, the Office of Naval Research, and a McGovern Institute Mark Gorenberg Fellowship.</p>
Our daily lives include hundreds of routine habits, made up of many smaller actions, such as picking up our toothbrush, squeezing toothpaste onto it, and then lifting the brush to our mouth. This process of grouping behaviors together into a single routine is known as “chunking.” MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior.
Image: Chelsea Turner/MITResearch, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH), Memory, BehaviorDecoding human cognitionhttp://news.mit.edu/2018/student-profile-liang-zhou-0202
MIT senior and Marshall Scholar Liang Zhou wants to elucidate the neural basis for our thoughts and intuitions.Thu, 01 Feb 2018 23:59:59 -0500Fatima Husain | MIT News correspondenthttp://news.mit.edu/2018/student-profile-liang-zhou-0202<p>Halfway through his sophomore year, Liang Zhou made a decision that changed the course of his academic career. The electrical engineering and computer science major enrolled in three classes in brain and cognitive sciences, an area he hadn’t studied before. Though he can’t pinpoint his exact reasoning for doing so at the time — perhaps it was intuition — he has no regrets.</p>
<p>As he begins his last semester at MIT, Zhou is “really, really glad” he made that choice, and is now completing a double major in electrical engineering and computer science and brain and cognitive sciences.</p>
<p>“I’m interested in studying computer science because of its wealth of applications in other domains,” Zhou says. Take neuroscience: “It’s about the brain and it’s about how we as people fundamentally think about things, so understanding others. It’s a way to connect both the very hard science and rigor to something that’s very relevant to everybody.”</p>
<p>As he prepares to pursue a master’s degree in computational neuroscience at University College London as a Marshall Scholar, the native of Riverside, California, can clearly articulate his research motivation: “I am fascinated by how people work, why we do what we do, and why we think what we think.”</p>
<p><strong>Intuitive physics</strong></p>
<p>For Zhou, his intense introduction to neuroscience marked “a paradigm shift.” While he began to study the brain in his classes, Zhou also opted to pursue research in a field where he could apply his coding skills. He found a great fit: the Computational Cognitive Science Laboratory, led by Professor Josh Tenenbaum. The lab group uses a combination of computer modeling and behavioral experiments to understand the basis of human learning.</p>
<p>Under the mentorship of Tobias Gerstenberg, a postdoc in the Tenenbaum lab, Zhou has worked on projects that investigate how people perceive their environments.</p>
<p>Zhou began with a research project that asked subjects to assess the structural stability of a brick tower. Subjects were asked what would happen to the brick tower if certain blocks were removed, and to assess how important single bricks were for the stability of the overall structure. “We calculated responsibilities for different brick configurations and compared them to people’s assessments, which gave us a better notion for people’s judgements about stability,” Zhou says.</p>
<p>What fascinated Zhou most was the disconnect between the subjects’ judgements and the ground truth. “What people think will happen is usually not what happens in terms of those bricks, so we created a model … which was better in line with people’s predictions and not the ground truth.” That model, called the hypothetical simulation model, was detailed in a conference paper for the 39th annual meeting of the Cognitive Science Society, which was held this past July.</p>
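<p>The kind of simulation-based judgment described above can be sketched in a few lines: judge a tower by mentally simulating it many times under perceptual noise, and report the fraction of simulations in which it falls. The 1-D block representation, the stability rule, and the noise values below are hypothetical simplifications for illustration, not the model from the paper.</p>

```python
import random

def will_fall(blocks, noise=0.0, rng=None):
    """Crude stability check for a 1-D stack of unit-width blocks.
    `blocks` lists each block's x-position, bottom first. The stack tips
    if the centre of mass of everything above some block overhangs that
    block's edge. Gaussian `noise` jitters the perceived positions.
    A hypothetical simplification, not the published model."""
    rng = rng or random.Random(0)
    seen = [x + rng.gauss(0.0, noise) for x in blocks]
    for i in range(len(seen) - 1):
        above = seen[i + 1:]
        com = sum(above) / len(above)      # centre of mass above block i
        if abs(com - seen[i]) > 0.5:       # beyond block i's half-width
            return True
    return False

def p_fall(blocks, noise, n=2000):
    """Judged probability of collapse: repeated noisy mental simulation."""
    rng = random.Random(1)
    return sum(will_fall(blocks, noise, rng) for _ in range(n)) / n

stable = will_fall([0.0, 0.0, 0.0])   # aligned stack: stays up
tippy = will_fall([0.0, 0.6])         # overhanging top block: tips
```

With more perceptual noise, even a physically stable tower is judged increasingly likely to fall — the kind of divergence between judgment and ground truth the paragraph describes.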
<p>That model and the disconnect it details led Zhou to his current research interest: understanding intuitive physics.</p>
<p>“When you take an apple and you drop it, you don’t think F=<em>ma</em>, therefore it’s going to drop with one Newton and it’s going to hit the ground at this time,” Zhou says. “You think, ‘Oh, it’s an apple and it’s going to fall.’ We have an intuition for how the world works, and we have a mental notion of how one physical event can cause others to happen … but we never explicitly learn this material, and we learn it so readily and so easily.”</p>
<p>Zhou wants to investigate how that intuition arises from neurons, and that begins with the dedicated study he plans to do in graduate school: “Computational neuroscience is a lot about modeling the actual neurons themselves. … In the future, I want to do research in cognitive neuroscience. I think it’s very important to have a very solid statistical and mathematical background, and I hope that studying computational neuroscience will give me that.”&nbsp;</p>
<p><strong>Understanding impacts</strong></p>
<p>In the future, Zhou sees himself continuing his research and sharing its applications.</p>
<p>“There’s no better way to actually contribute to this field of research than to be in academia and to do research, but I also realize that research isn’t particularly useful if it’s not applied to something,” Zhou says. “With this line of research, there are so many applications in society. With a better understanding of how we think … we could have a better sense for what it means to understand somebody.”</p>
<p>Zhou also cites recent research in behavioral economics that highlights the surprising strength of irrational thinking. “We assume people are rational, and we assume people are logical, but they are really not. And [knowing] that helps you create better models of how people work.”</p>
<p>He envisions these types of models informing future education and public policy. “This research isn’t going anywhere if we don’t actually put it somewhere,” Zhou says. To share his vision, Zhou also hopes to become an active voice in science policy and public science advocacy. However, he points out that’s far in the future: “For the moment I just want to do a PhD for more in-depth and hands-on research experience, but I’d love to get involved in making this have tangible effects on the world in the future if there’s a path there for me.”</p>
<p>During his time at MIT, Zhou has also interned in software engineering at NextJump, Lucid Software, and Apple. Zhou served as conference chair for the 2017 EECScon, a U.S. undergraduate-led research conference. In addition to his research in the Tenenbaum group, he also performs computational neuroscience research at Harvard University, and has undertaken research projects in the MIT Media Lab and the Computer Science and Artificial Intelligence Laboratory through the Undergraduate Research Opportunities Program (UROP).</p>
<p>Zhou is an active member of the MIT Asian Dance Team, where he has served on the executive board for two years. He has also choreographed hip-hop pieces for MIT DanceTroupe. Additionally, Zhou served as teaching assistant for both undergraduate- and graduate-level machine learning courses for MIT students. This past year, Zhou was awarded the Hans Lukas Teuber Award for Outstanding Academics.</p>
“I am fascinated by how people work, why we do what we do, and why we think what we think,” says senior and Marshall Scholar Liang Zhou.
Image: Ian MacLellanProfile, Students, Undergraduate, Awards, honors and fellowships, School of Science, Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), Neuroscience, Behavior, Volunteering, outreach, public servicePolina Anikeeva and Feng Zhang awarded 2018 Vilcek Prize http://news.mit.edu/2018/polina-anikeeva-and-feng-zhang-awarded-2018-vilcek-prize-0201
Prize recognizes contributions to biomedical research made by immigrant scientists.
Thu, 01 Feb 2018 09:00:01 -0500Julie Pryor | McGovern Institute for Brain Researchhttp://news.mit.edu/2018/polina-anikeeva-and-feng-zhang-awarded-2018-vilcek-prize-0201<p><a href="https://dmse.mit.edu/faculty/profile/anikeeva">Polina Anikeeva</a>, the Class of 1942 Associate Professor in the Department of Materials Science and Engineering and associate director of the Research Laboratory of Electronics, and <a href="https://mcgovern.mit.edu/principal-investigators/feng-zhang">Feng Zhang</a>, the James and Patricia Poitras ’63 Professor in Neuroscience at the McGovern Institute, have each been awarded a 2018 Vilcek Prize for Creative Promise in Biomedical Science. Awarded annually by the <a href="http://www.vilcek.org/">Vilcek Foundation</a>, the $50,000 prizes recognize younger immigrants who have demonstrated exceptional promise early in their careers.</p>
<p>“The Vilcek Prizes were established in appreciation of the immigrants who chose to dedicate their vision and talent to bettering American society,” says Rick Kinsel, president of the Vilcek Foundation. “This year’s prizewinners honor and continue that legacy with works of astounding, revolutionary importance.”</p>
<p>Polina Anikeeva, who was born in the former Soviet Union, earned her PhD in materials science and engineering at MIT in 2009 and now runs her own bioelectronics lab in the same department focused on the development of materials and devices that enable recording and manipulation of signaling processes within the nervous system. The Vilcek Foundation recognizes Anikeeva for “fashioning ingenious solutions to long-standing challenges in biomedical engineering” including the design of therapeutic devices for conditions such as Parkinson’s disease and spinal cord injury.</p>
<p>Feng Zhang, who is also a core member of the Broad Institute and an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, is being recognized for his role in advancing optogenetics (a method for controlling brain activity with light) and developing molecular tools to edit the genome. Thanks to his leadership in inventing precise and efficient gene-editing technologies using CRISPR, Zhang's work has resulted in a "growing array of applications, such as uncovering the genetic underpinnings of diseases, ushering in gene therapies to cure heritable diseases, and improving agriculture."&nbsp;Zhang’s family immigrated to the United States from China when he was 11 years of age.&nbsp;</p>
<p>Anikeeva and Zhang will be among eight Vilcek prizewinners honored at an awards gala in New York City in April 2018.</p>
<p>The Vilcek Foundation was established in 2000 by Jan and Marica Vilcek, immigrants from the former Czechoslovakia. The mission of the foundation, to honor the contributions of immigrants to the United States and to foster appreciation of the arts and sciences, was inspired by the couple’s respective careers in biomedical science and art history, as well as their personal experiences and appreciation of the opportunities they received as newcomers to this country.</p>
From left to right: Polina Anikeeva, the Class of 1942 Associate Professor in the Department of Materials Science and Engineering and associate director of the Research Laboratory of Electronics, and Feng Zhang, the James and Patricia Poitras ’63 Professor in Neuroscience at the McGovern Institute, a core member of the Broad Institute, and an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering. Photos: Lillie Paquette/MIT School of EngineeringAwards, honors and fellowships, Faculty, School of Engineering, DMSE, Brain and cognitive sciences, Biological engineering, Broad Institute, Research Laboratory of Electronics, McGovern Institute, School of ScienceInstitute launches the MIT Intelligence Quest http://news.mit.edu/2018/mit-launches-intelligence-quest-0201
New Institute-wide initiative will advance human and machine intelligence research.Thu, 01 Feb 2018 01:00:00 -0500Peter Dizikes | MIT News Officehttp://news.mit.edu/2018/mit-launches-intelligence-quest-0201<p>MIT today announced the launch of the <a href="http://iq.mit.edu/">MIT Intelligence Quest</a>, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.</p>
<p>The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.</p>
<p>At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.</p>
<p>Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.</p>
<p>“Today we set out to answer two big questions,” says President Reif. “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>MIT Intelligence Quest: The Core and The Bridge</strong></p>
<p>MIT is poised to lead this work through two linked entities within MIT Intelligence Quest. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT Intelligence Quest seeks to advance our understanding of human intelligence by using insights from computer science.</p>
<p>The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.</p>
<p>The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.</p>
<p>Along with developing and advancing the technologies of intelligence, MIT Intelligence Quest researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.</p>
<p>In all its activities, MIT Intelligence Quest is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT Intelligence Quest will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.</p>
<p>“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT Intelligence Quest.</p>
<p>“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”</p>
<p><strong>Engaging energetically with partners</strong></p>
<p>In order to power MIT Intelligence Quest and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.</p>
<p>MIT Intelligence Quest will build on the model that was established with the MIT–IBM Watson AI Lab, which was <a href="http://news.mit.edu/2017/ibm-mit-joint-research-watson-artificial-intelligence-lab-0907">announced</a> in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.</p>
<p>“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT Intelligence Quest,” says President Reif.</p>
<p>John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”</p>
<p>MIT will seek to establish additional entities within MIT Intelligence Quest, in partnership with corporate and philanthropic organizations.</p>
<p><strong>Why MIT</strong></p>
<p>MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.</p>
<p>MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded center for Brains, Minds and Machines (CBMM) — the only national center of its kind.</p>
<p>Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of promoting data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.</p>
<p>Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.</p>
<p>“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”</p>
<p>At the heart of MIT Intelligence Quest will be collaboration among researchers in human and artificial intelligence.</p>
<p>“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”</p>
<p>Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”</p>
<p>David Siegel, who earned a PhD in computer science at MIT in 1991 pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT Intelligence Quest and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.</p>
<p><strong>The fruits of research </strong></p>
<p>MIT Intelligence Quest will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.</p>
<p>Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.</p>
<p>Other innovations stemming from MIT Intelligence Quest could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.&nbsp;</p>
<p>MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT Intelligence Quest will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.</p>
<p>Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT Intelligence Quest. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT Intelligence Quest can become a fount of exciting new capabilities.”</p>
<p>“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT Intelligence Quest promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”</p>
<p>“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”</p>
At a time of rapid advances in intelligence research across many disciplines, the MIT Intelligence Quest will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. Courtesy of MIT Intelligence QuestArtificial intelligence, Machine learning, Algorithms, Research, Computer science and technology, President L. Rafael Reif, Industry, Administration, School of Engineering, School of Science, School of Architecture and Planning, SHASS, Sloan School of Management, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, McGovern Institute, Picower Institute, Center for Brains Minds and Machines, Media Lab, IDSSNew study reveals how brain waves control working memoryhttp://news.mit.edu/2018/new-study-reveals-how-brain-waves-control-working-memory-0126
Brain rhythms act as a gate for information entering and leaving the mind.Fri, 26 Jan 2018 05:00:00 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/new-study-reveals-how-brain-waves-control-working-memory-0126<p>MIT neuroscientists have found evidence that the brain’s ability to control what it’s thinking about relies on low-frequency brain waves known as beta rhythms.</p>
<p>In a memory task requiring information to be held in working memory for short periods of time, the MIT team found that the brain uses beta waves to consciously switch between different pieces of information. The findings support the researchers’ hypothesis that beta rhythms act as a gate that determines when information held in working memory is either read out or cleared out so we can think about something else.</p>
<p>“The beta rhythm acts like a brake, controlling when to express information held in working memory and allow it to influence behavior,” says Mikael Lundqvist, a postdoc at MIT’s Picower Institute for Learning and Memory and the lead author of the study.</p>
<p>Earl Miller, the Picower Professor of Neuroscience at the Picower Institute and in the Department of Brain and Cognitive Sciences, is the senior author of the study, which appears in the Jan. 26 issue of <em>Nature Communications</em>.</p>
<p><strong>Working in rhythm</strong></p>
<p>There are millions of neurons in the brain, and each neuron produces its own electrical signals. These combined signals generate oscillations known as brain waves, which vary in frequency. In a <a href="http://news.mit.edu/2016/bursts-neural-activity-brain-working-memory-0317">2016 study</a>, Miller and Lundqvist found that gamma rhythms are associated with encoding and retrieving sensory information.</p>
<p>They also found that when gamma rhythms went up, beta rhythms went down, and vice versa. Previous work in their lab had shown that beta rhythms are associated with “top-down” information such as what the current goal is, how to achieve it, and what the rules of the task are.</p>
<p>All of this evidence led them to theorize that beta rhythms act as a control mechanism that determines what pieces of information are allowed to be read out from working memory — the brain function that allows control over conscious thought, Miller says.</p>
<p>“Working memory is the sketchpad of consciousness, and it is under our control. We choose what to think about,” he says. “You choose when to clear out working memory and choose when to forget about things. You can hold things in mind and wait to make a decision until you have more information.”</p>
<p>To test this hypothesis, the researchers recorded brain activity from the prefrontal cortex, which is the seat of working memory, in animals trained to perform a working memory task. The animals first saw one pair of objects, for example, A followed by B. Then they were shown a different pair and had to determine if it matched the first pair. A followed by B would be a match, but not B followed by A, or A followed by C. After this entire sequence, the animals released a bar if they determined that the two sequences matched.</p>
<p>The researchers found that brain activity varied depending on whether the two pairs matched or not. As an animal anticipated the beginning of the second sequence, it held the memory of object A, represented by gamma waves. If the next object seen was indeed A, beta waves then went up, which the researchers believe clears object A from working memory. Gamma waves then went up again, but this time the brain switched to holding information about object B, as this was now the relevant information to determine if the sequence matched.</p>
<p>However, if the first object shown was not a match for A, beta waves went way up, completely clearing out working memory, because the animal already knew that the sequence as a whole could not be a match.</p>
<p>“The interplay between beta and gamma acts exactly as you would expect a volitional control mechanism to act,” Miller says. “Beta is acting like a signal that gates access to working memory. It clears out working memory, and can act as a switch from one thought or item to another.”</p>
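<p>The gating idea in these paragraphs can be reduced to a toy state machine. The event encoding and single-item memory below are invented for this sketch, not the authors’ computational model: a gamma burst writes an item into working memory, and a beta pulse acts as the brake that clears it.</p>

```python
def gate_memory(events):
    """Toy beta/gamma gate. Each event is (kind, item): a "gamma" event
    writes `item` into working memory, a "beta" event clears whatever is
    held. Returns the memory contents after each event. Illustrative only."""
    memory = None
    trace = []
    for kind, item in events:
        if kind == "gamma":      # beta low: a sensory item is read in
            memory = item
        elif kind == "beta":     # beta up: working memory is cleared
            memory = None
        trace.append(memory)
    return trace

# The A-then-B trial: hold A, clear it once it is matched, then hold B.
trial = [("gamma", "A"), ("beta", None), ("gamma", "B"), ("beta", None)]
states = gate_memory(trial)      # ["A", None, "B", None]
```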
<p><strong>A new model</strong></p>
<p>Previous models of working memory proposed that information is held in mind by steady neuronal firing. The new study, in combination with their earlier work, supports the researchers’ new hypothesis that working memory is supported by brief episodes of spiking, which are controlled by beta rhythms.</p>
<p>“When we hold things in working memory (i.e. hold something ‘in mind’), we have the feeling that they are stable, like a light bulb that we’ve turned on to represent some thought.&nbsp;For a long time, neuroscientists have thought that this must mean that the way the brain represents these thoughts is through constant activity. This study shows that this isn’t the case — rather, our memories are blinking in and out of existence.&nbsp;Furthermore, each time a memory blinks on, it is riding on top of a wave of activity in the brain,” says Tim Buschman, an assistant professor of psychology at Princeton University who was not involved in the study.</p>
<p>Two other recent papers from Miller’s lab offer additional evidence for beta as a cognitive control mechanism.</p>
<p>In a study that recently appeared in the journal <em>Neuron</em>, they found similar patterns of interaction between beta and gamma rhythms in a different task involving assigning patterns of dots to categories. In cases where two patterns were easy to distinguish, gamma rhythms, carrying visual information, predominated during the identification. If the distinction task was more difficult, beta rhythms, carrying information about past experience with the categories, predominated.</p>
<p>In a <a href="http://news.mit.edu/2018/rhythmic-interactions-cortical-layers-control-working-memory-0115">recent paper</a> published in the <em>Proceedings of the National Academy of Sciences</em>, Miller’s lab found that beta waves are produced by deep layers of the prefrontal cortex, and gamma rhythms are produced by superficial layers, which process sensory information. They also found that the beta waves were controlling the interaction of the two types of rhythms.</p>
<p>“When you find that kind of anatomical segregation and it’s in the infrastructure where you expect it to be, that adds a lot of weight to our hypothesis,” Miller says.</p>
<p>The researchers are now studying whether these types of rhythms control other brain functions such as attention. They also hope to study whether the interaction of beta and gamma rhythms explains why it is so difficult to hold more than a few pieces of information in mind at once.</p>
<p>“Eventually we’d like to see how these rhythms explain the limited capacity of working memory, why we can only hold a few thoughts in mind simultaneously, and what happens when you exceed capacity,” Miller says. “You have to have a mechanism that compensates for the fact that you overload your working memory and make decisions on which things are more important than others.”</p>
<p>The research was funded by the National Institute of Mental Health, the Office of Naval Research, and the Picower JFDP Fellowship.</p>
MIT neuroscientists have found evidence that the brain’s ability to control what it’s thinking about relies on low-frequency brain waves known as beta rhythms.
Research, Brain and cognitive sciences, Picower Institute, School of Science, Memory, NeuroscienceStudy: Distinct brain rhythms and regions help us reason about categorieshttp://news.mit.edu/2018/distinct-brain-rhythms-regions-help-us-reason-about-categories-0125
High-frequency gamma oscillations sort similar-looking objects; lower-frequency beta oscillations kick in when connections are more abstract.Thu, 25 Jan 2018 12:00:00 -0500David Orenstein | Picower Institute for Learning and Memoryhttp://news.mit.edu/2018/distinct-brain-rhythms-regions-help-us-reason-about-categories-0125<p>We categorize pretty much everything we see, and remarkably, we often achieve that feat whether the items look patently similar — such as Fuji and McIntosh apples — or they share a more abstract similarity — such as a screwdriver and a drill. A new study at MIT’s Picower Institute for Learning and Memory explains how.</p>
<p>“Categorization is a fundamental cognitive mechanism,” says Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “It’s the way the brain learns to generalize. If your brain didn’t have this ability, you’d be overwhelmed by details of the sensory world. Every time you experienced something, if it was in different lighting or at a different angle, your brain would treat it as a brand new thing.”</p>
<p>In the new paper in <em>Neuron</em>, Miller’s lab, led by postdoc Andreas Wutz and graduate student Roman Loonis, shows that the ability to categorize based on straightforward resemblance or on abstract similarity arises from the brain’s use of distinct rhythms, at distinct times, in distinct parts of the prefrontal cortex (PFC). Specifically, when animals needed to match images that bore close resemblance, an increase in the power of high-frequency gamma rhythms in the ventral lateral PFC did the trick. When they had to match images based on a more abstract similarity, that depended on a later surge of lower-frequency beta rhythms in the dorsal lateral PFC.</p>
<p>Miller says those findings suggest a model of how the brain achieves category abstractions. It shows that meeting the challenge of abstraction is not merely a matter of thinking the same way, only harder. Instead, a different mechanism in a different part of the brain takes over when simple, sensory comparison is not enough for us to judge whether two things belong to the same category.</p>
<p>By precisely describing the frequencies, locations, and the timing of rhythms that govern categorization, the findings, if replicated in humans, could prove helpful in research to understand an aspect of some autism spectrum disorders (ASD), says Miller. In ASD, categorization can be challenging for patients, especially when objects or faces appear atypical. Potentially, clinicians could measure rhythms to determine whether patients who struggle to recognize abstract similarities are employing the mechanisms differently.</p>
<p><strong>Connecting the dots</strong></p>
<p>To conduct the study, Wutz, Loonis, Miller, and their co-authors measured brain rhythms in key areas of the PFC associated with categorization as animals played some on-screen games. In each round, animals would see a pattern of dots — a sample from one of two different categories of configurations. Then the sample would disappear and after a delay, two choices of dot designs would appear. The subject’s task was to fix its gaze on whichever one belonged to the same category as the sample. Sometimes the right answer was evident by sheer visual resemblance, but sometimes the similarity was based on a more abstract criterion the animal could infer over successive trials. The experimenters precisely quantified the degree of abstraction based on geometric calculations of the distortion of the dot pattern compared to a category archetype.</p>
<p>“This study was very well-defined,” says Wutz. “It provided a mathematically correct way to distinguish something so vague as abstraction. It’s a judgment call very often, but not with the paradigm that we used.”</p>
<p>Gamma in the ventral PFC always peaked in power when the sample appeared, as if the animals were making a “Does this sample look like category A or not?” assessment as soon as they were shown it. Beta power in the dorsal PFC peaked during the subsequent delay period when abstraction was required, as if the animals realized that there wasn’t enough visual resemblance and deeper thought would be necessary to make the upcoming choice.</p>
<p>Notably, the data was rich enough to reveal several nuances about what was going on. Category information and rhythm power were so closely associated, for example, that the researchers measured greater rhythm power in advance of correct category judgments than in advance of incorrect ones. They also found that the role of beta power was not based on the difficulty of choosing a category (i.e., how similar the choices were) but specifically on whether the correct answer had a more abstract or literal similarity to the sample.</p>
<p>By analyzing the rhythm measurements, the researchers could even determine how the animals were approaching the categorization task. They weren’t judging whether a sample belonged to one category or the other, says Wutz. Instead, they were judging whether it belonged to a preferred category or not.</p>
<p>“That preference was reflected in the brain rhythms,” says Wutz. “We saw the strongest effects for each animal’s preferred category.”</p>
<p>Tim Buschman, assistant professor in the Princeton Neuroscience Institute and Department of Psychology at Princeton University, says the study helps to explain a crucial aspect of the brain’s ability to generalize: flexibility.</p>
<p>“Once we see one dog bark, we instantly know that all dogs bark. However, there is a right amount to generalize; we don’t want to learn that all four-legged mammals bark,” says Buschman. “The current manuscript provides insight into how the brain flexibly modulates how much we should generalize — a little (all dogs bark) or a lot (all mammals have hair). The study provides new insight into how the brain flexibly switches between two different modes — there is a ‘bottom-up’ mode that is rooted in the more concrete representations of our senses, allowing for a little generalization; and a ‘top-down’ mode that uses higher-order brain regions to generalize more broadly.</p>
<p>“This study is an important first step in understanding how the brain generalizes knowledge and lays the groundwork for understanding cognitive conditions, such as autism, that impair one’s ability to generalize,” says Buschman.&nbsp;</p>
<p>The National Institute of Mental Health funded the study, which was co-authored by graduate student Jacob Donoghue and research scientist Jefferson Roy.</p>
Sometimes items can easily be grouped by their appearance, such as these apples, but sometimes the brain must use a different mode of thought to find a more abstract similarity, such as that between a drill and screwdriver.School of Science, Brain and cognitive sciences, Picower Institute, Neuroscience, Research, AutismUltrathin needle can deliver drugs directly to the brainhttp://news.mit.edu/2018/ultrathin-needle-can-deliver-drugs-directly-brain-0124
Miniaturized system could be used to treat neurological disorders that affect specific brain regions. Wed, 24 Jan 2018 13:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2018/ultrathin-needle-can-deliver-drugs-directly-brain-0124<p>MIT researchers have devised a <a href="http://www.media.mit.edu/projects/miniaturized-neural-system-for-chronic-local-intracerebral-drug-delivery/overview/">miniaturized system</a> that can deliver tiny quantities of medicine to brain regions as small as 1 cubic millimeter. This type of targeted dosing could make it possible to treat diseases that affect very specific brain circuits, without interfering with the normal function of the rest of the brain, the researchers say.</p>
<p>Using this device, which consists of several tubes contained within a needle about as thin as a human hair, the researchers can deliver one or more drugs deep within the brain, with very precise control over how much drug is given and where it goes. In a study of rats, they found that they could deliver targeted doses of a drug that affects the animals’ motor function.</p>
<p>“We can infuse very small amounts of multiple drugs compared to what we can do intravenously or orally, and also manipulate behavioral changes through drug infusion,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper, which appears in the Jan. 24 issue of <em>Science Translational Medicine</em>.</p>
<p>“We believe this tiny microfabricated device could have tremendous impact in understanding brain diseases, as well as providing new ways of delivering biopharmaceuticals and performing biosensing in the brain,” says Robert Langer, the David H. Koch Institute Professor at MIT and one of the paper’s senior authors.</p>
<p>Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, is also a senior author of the paper.</p>
<p><strong>Targeted action</strong></p>
<p>Drugs used to treat brain disorders often interact with brain chemicals called neurotransmitters or the cell receptors that interact with neurotransmitters. Examples include l-dopa, a dopamine precursor used to treat Parkinson’s disease, and Prozac, used to boost serotonin levels in patients with depression. However, these drugs can have side effects because they act throughout the brain.</p>
<p>“One of the problems with central nervous system drugs is that they’re not specific, and if you’re taking them orally they go everywhere. The only way we can limit the exposure is to just deliver to a cubic millimeter of the brain, and in order to do that, you have to have extremely small cannulas,” Cima says.</p>
<p>The MIT team set out to develop a miniaturized cannula (a thin tube used to deliver medicine) that could target very small areas. Using microfabrication techniques, the researchers constructed tubes with diameters of about 30 micrometers and lengths up to 10 centimeters. These tubes are contained within a stainless steel needle with a diameter of about 150 micrometers. “The device is very stable and robust, and you can place it anywhere that you are interested,” Dagdeviren says.</p>
<p>The researchers connected the cannulas to small pumps that can be implanted under the skin. Using these pumps, the researchers showed that they could deliver tiny doses (hundreds of nanoliters) into the brains of rats. In one experiment, they delivered a drug called muscimol to a brain region called the substantia nigra, which is located deep within the brain and helps to control movement.</p>
<p>Previous studies have shown that muscimol induces symptoms similar to those seen in Parkinson’s disease. The researchers were able to generate those effects, which include stimulating the rats to continually turn in a clockwise direction, using their miniaturized delivery needle. They also showed that they could halt the Parkinsonian behavior by delivering a dose of saline through a different channel, to wash the drug away.</p>
<p>“Since the device can be customizable, in the future we can have different channels for different chemicals, or for light, to target tumors or neurological disorders such as Parkinson’s disease or Alzheimer’s,” Dagdeviren says.</p>
<p>This device could also make it easier to deliver potential new treatments for behavioral neurological disorders such as addiction or obsessive compulsive disorder, which may be caused by specific disruptions in how different parts of the brain communicate with each other.</p>
<p>“Even if scientists and clinicians can identify a therapeutic molecule to treat neural disorders, there remains the formidable problem of how to deliver the therapy to the right cells — those most affected in the disorder. Because the brain is so structurally complex, new accurate ways to deliver drugs or related therapeutic agents locally are urgently needed,” says Ann Graybiel, an MIT Institute Professor and a member of MIT’s McGovern Institute for Brain Research, who is also an author of the paper.</p>
<p><strong>Measuring drug response</strong></p>
<p>The researchers also showed that they could incorporate an electrode into the tip of the cannula, which can be used to monitor how neurons’ electrical activity changes after drug treatment. They are now working on adapting the device so it can also be used to measure chemical or mechanical changes that occur in the brain following drug treatment.</p>
<p>The cannulas can be fabricated in nearly any length or thickness, making it possible to adapt them for use in brains of different sizes, including the human brain, the researchers say.</p>
<p>“This study provides proof-of-concept experiments, in large animal models, that a small, miniaturized device can be safely implanted in the brain and provide miniaturized control of the electrical activity and function of single neurons or small groups of neurons.&nbsp;The impact of this could be significant in focal diseases of the brain, such as Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and chairman of the Department of Neurosurgery at Brigham and Women’s Hospital, who was not involved in the research.</p>
<p>The research was funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.</p>
Dagdeviren (center) developed the device while working as a postdoc in the labs of Graybiel (left), Cima (right), and Langer (far left).
Image: M. Scott BrauerResearch, Brain and cognitive sciences, Materials Science and Engineering, Chemical engineering, McGovern Institute, Koch Institute, Media Lab, School of Science, School of Engineering, School of Architecture and Planning, National Institutes of Health (NIH)Lifting the veil on “valence,” brain study reveals roots of desire and dislikehttp://news.mit.edu/2018/lifting-veil-valence-brain-study-reveals-roots-desire-dislike-0123
Researchers map the amygdala’s distinct but diverse and dynamic neighborhoods where feelings are assigned.Tue, 23 Jan 2018 13:05:01 -0500David Orenstein | Picower Institute for Learning and Memoryhttp://news.mit.edu/2018/lifting-veil-valence-brain-study-reveals-roots-desire-dislike-0123<p>The amygdala is a tiny hub of emotions where in 2016 a team led by MIT neuroscientist Kay Tye found specific populations of neurons that assign good or bad feelings, or “valence,” to experience. Learning to associate pleasure with a tasty food, or aversion to a foul-tasting one, is a primal function and key to survival.</p>
<p>In a new study in <em>Cell Reports</em>, Tye’s team at the Picower Institute for Learning and Memory returns to the amygdala for an unprecedentedly deep dive into its inner workings. Focusing on a particular section called the basolateral amygdala, the researchers show how valence-processing circuitry is organized and how key neurons in those circuits interact with others. What they reveal is a region with distinct but diverse and dynamic neighborhoods where valence is sorted out by both connecting with other brain regions and sparking cross-talk within the basolateral amygdala itself.</p>
<p>“Perturbations of emotional valence processing are at the core of many mental health disorders,” says Tye, associate professor of neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “Anxiety and addiction, for example, may be an imbalance or a misassignment of positive or negative valence with different stimuli.”</p>
<p>Despite the importance of valence assignment in both healthy behavior and psychiatric disorders, neuroscientists don’t know how the process really works. The new study therefore sought to expose how the neurons and circuits are laid out and how they interact.</p>
<p><strong>Bitter, sweet</strong></p>
<p>To conduct the study, lead author Anna Beyeler, a former postdoc in Tye’s lab and currently a faculty member at the University of Bordeaux in France, led the group in training mice to associate appealing sucrose drops with one tone and bitter quinine drops with another. They recorded the response of different neurons in the basolateral amygdala when the tones were played to see which ones were associated with the conditioned learned valence of the different tastes. They labeled those key neurons associated with valence encoding and engineered them to become responsive to pulses of light. When the researchers then activated them, they recorded the electrical activity not only of those neurons but also of many of their neighbors to see what influence their activity had in local circuits.</p>
<p>They also found, labeled, and made similar measurements among neurons that became active on the occasion that a mouse actually licked the bitter quinine. With this additional step, they could measure not only the neural activity associated with the learned valence of the bitter taste but also that associated with the innate reaction to the actual experience.</p>
<p>Later in the lab, they used tracing technologies to highlight three different kinds of neurons more fully, visualizing them in distinct colors depending on which other region they projected their tendril-like axons to connect with. Neurons that project to a region called the nucleus accumbens are predominantly associated with positive valence, and those that connect to the central amygdala are mainly associated with negative valence. They found that neurons uniquely activated by the unconditioned experience of actually tasting the quinine tended to project to the ventral hippocampus.</p>
<p>In all, the team mapped over 1,600 neurons.</p>
<p>To observe the three-dimensional configuration of these distinct neuron populations, the researchers turned the surrounding brain tissues clear using a technique called CLARITY, invented by Kwanghun Chung, assistant professor of chemical engineering and neuroscience and a colleague in the Picower Institute.</p>
<p><strong>Neighborhoods without fences</strong></p>
<p>Beyeler, Tye, and their co-authors were able to make several novel observations about the inner workings of the basolateral amygdala’s valence circuitry.</p>
<p>One finding was that the different functional populations of neurons tended to cluster together in neighborhoods, or “hotspots.” For example, picturing the almond-shaped amygdala as standing upright on its fat bottom, the neurons projecting to the central amygdala tended to cluster toward the point at the top and then on the right toward the bottom. Meanwhile the neurons that projected to the nucleus accumbens tended to run down the middle, and the ones that projected to the hippocampus were clustered toward the bottom on the opposite side from the central amygdala projectors.</p>
<p>Despite these trends, the researchers also noted that the neighborhoods were hardly monolithic. Instead, neurons of different types frequently intermingled, creating a diversity in which the predominant neuron type was never far from at least some representatives of the other types.</p>
<p>Meanwhile, their electrical activity data revealed that the different types exerted different degrees of influence over their neighbors. For example, neurons projecting to the central amygdala, in keeping with their association with negative valence, had a very strong inhibitory effect on neighbors, while nucleus accumbens projectors had a smaller influence that was more balanced between excitation and inhibition.</p>
<p>Tye speculates that the intermingling of neurons of different types, including their propensity to influence each other with their activity, may provide a way for competing circuits to engage in cross-talk.</p>
<p>“Perhaps the intermingling that there is might facilitate the ability of these neurons to influence each other,” says Tye.</p>
<p>Notably, Tye’s research has indicated that the projections the different cell types make appear immutable, but the influence those cells have over each other is flexible. The basolateral amygdala may therefore be arranged to both assign valence and negotiate it, for instance in those situations when a mouse spies some desirable cheese, but that mean cat is also nearby.</p>
<p>“This helps us understand how form might give rise to function,” says Tye.</p>
<p>In addition to Beyeler and Tye, the paper’s other authors are Chia-Jung Chang, Margaux Silvestre, Clementine Leveque, Praneeth Namburi, and Craig Wildes.</p>
<p>Several funding sources, including the JPB Foundation, Whitehall Foundation, Klingenstein Foundation, Alfred P. Sloan Foundation, New York Stem Cell Foundation, and the National Institutes of Health provided support for the study.</p>
Various 3-D views of the basolateral amygdala show the arrangement of neurons that project to the nucleus accumbens (green), the hippocampus (blue), or the central amygdala (red).School of Science, Brain and cognitive sciences, Picower Institute, Research, Neuroscience, Mental healthMaking preventive medicine more accessiblehttp://news.mit.edu/2018/student-profile-anjali-misra-0118
MIT senior Anjali Misra is drawn to health care problems that don’t have easy answers.Wed, 17 Jan 2018 23:59:59 -0500Fatima Husain | MIT News Officehttp://news.mit.edu/2018/student-profile-anjali-misra-0118<p>Anjali Misra spends a lot of time attending to those who need help the most. On a typical day, the MIT senior and certified emergency medical technician can be found behind the wheel of the MIT ambulance or leading training sessions in CPR and first aid across campus. So far, Misra has helped facilitate the training of more than 1,000 MIT community members.</p>
<p>“I like the idea of being the kind of person who can help in a crisis,” she says. “I think that getting to do something similar as a career would be ideal.”</p>
<p>Misra, who is majoring in brain and cognitive sciences and minoring in music, has her eyes set on medicine: “I knew for my entire life that I wanted to be a doctor. I was one of those strange kids who from 2 years old had a vision.”</p>
<p>That vision brought Misra from Cedar Rapids, Iowa, to Cambridge, Massachusetts, though she admits that “the idea of trying to pursue medicine at an institute of technology seemed a bit incongruous” at first. She turns to her career role model, physician and author Atul Gawande, to explain how her MIT education has been crucial to her approach to medicine.</p>
<p>“He said that science is a commitment to a systematic way of thinking,” she says. By approaching medicine systematically, Misra hopes to “solve problems that don’t really seem to have answers on the surface.”</p>
<p>She’s referring in large part to problems that span both public health and medicine: preventable diseases, which are exacerbated by disparities in access to health care. During her high school years in Cedar Rapids, Misra witnessed firsthand the inaccessibility of health care in rural communities.</p>
<p>When she volunteered at her local hospital, she began to notice that some of the patients visiting the emergency room suffered from chronic, unchecked medical conditions — and that the visits could have been avoided if the patients had access to regular care. “To me, that is an area for great improvement in health care,” Misra says.</p>
<p>To best equip herself with the tools to tackle preventable disease, Misra, <a href="http://news.mit.edu/2017/anjali-misra-awarded-mitchell-scholarship-1127">a 2018 Mitchell Scholar</a>, will pursue a master’s degree in public health at University College Cork in Ireland before returning to the U.S. to pursue medicine.</p>
<p>“Having a chance to see how a different country addresses similar [public health] problems,” she says, “hopefully sets me up to keep an open mind as I try to pursue all of these things in my own career.”</p>
<p><strong>Serving others</strong></p>
<p>Before she began her freshman year, Misra took part in the Freshman Urban Program (FUP), a weeklong pre-orientation program that focuses on social justice and volunteering. Program topics have spanned hunger, education, race and gender, and sexual identity. “The counselors take the students to community organizations in Boston and Cambridge to do a morning of service, and the idea is that these organizations do work related to the topic of the day,” she says. “Then, in the afternoon, everybody returns to campus and the counselors facilitate a group activity that stimulates conversation on these topics.”</p>
<p>“For me, that was a perfect way to start my experience at MIT,” Misra says. “I think it really set me on a path to considering service as essential and not as something that I was going to incorporate into my life when I had a free week.”</p>
<p>Misra liked the experience so much that she worked as a counselor for the program in each subsequent school year. Her senior year, she helped organize the program as a co-coordinator. “It was always something that I was going to start with and something I intend to end with. And I always wanted to be there in the middle, too,” she says.</p>
<p>With a similar level of commitment, since her freshman year Misra has also been actively involved in SHINE for Girls, a weekly after-school program that combines math tutoring and dance. “The premise of this program is to combine something that a lot of girls enjoy, dance, with something that is challenging in order to increase their confidence and reframe the topic,” she says. Now, Misra is co-president of the mentorship program, and she hopes to help the current mentees find role models who are women in STEM.</p>
<p><strong>Mobile medicine</strong></p>
<p>The summer before her senior year, Misra was named an MIT Priscilla King Gray Public Service Center Fellow. During her fellowship, she worked with the Family Van, a Harvard Medical School program that brings mobile health care to underserved communities. Misra was stationed in East Boston to screen members of the community for preventable diseases.</p>
<p>“What seems extremely strange to some people is the idea of providing health care out of a repurposed bus or a van or a train,” Misra says. “But it is actually doing a lot of good in the community in a way that traditional clinics have been unable to achieve.”</p>
<p>That summer, Misra also worked on research for the Mobile Health Map, a collaborative network that aggregates data on successes and challenges across many different mobile health clinics. Together, these experiences opened her eyes to the public health challenges in mobile health care.</p>
<p>“One thing I struggle with a lot as a scientist and as an idealist is knowing that having the solution is not all it takes to solve the problem,” Misra says. “It wasn’t enough to just drive out into the community and be extremely accessible to people. … There was a second tier of challenges that appeared.” Those challenges included getting community members to come inside the van and take part in the free screenings offered.</p>
<p>By combining an education in public health with one in medicine, Misra hopes to tackle some of those challenges, to maximize the effectiveness of preventative care. Despite some of the difficulties, Misra still sees great potential for mobile medicine.</p>
<p>“Many lives are going to be saved or should be saved in the future by taking advantage of this knowledge that we already have, and just by making it accessible and actually useful to the people who we are hoping to impact,” Misra says.</p>
<p>She hopes to use her undergraduate and graduate education to become an effective physician who helps improve her patients’ lives. “As soon as I become comfortable [as a practicing physician], then I can start to incorporate advocacy from a really early part of my career,” she says. “I don’t want to wait until the very end to do that.”</p>
<p>Misra has served as co-president of the <a href="http://web.mit.edu/saas/www/home.html">MIT South Asian Association of Students</a> after joining as a freshman representative. She also takes part in medical research elucidating the function and role of primary cilia under the supervision of Peter Czarnecki, in the Shah Lab of the Harvard Medical School. Misra was also a presenter for the MIT Women’s Initiative and has presented about careers in STEM to approximately 1,500 female middle school students in Union County, North Carolina, and she was a recipient of the 2017 McKinsey Undergraduate Women’s Impact Award.</p>
Senior Anjali Misra, who is majoring in brain and cognitive sciences and minoring in music, is originally from Cedar Rapids, Iowa.
Image: Jake BelcherProfile, Students, Undergraduate, Awards, honors and fellowships, School of Science, Brain and cognitive sciences, Health care, Medicine, Music, Neuroscience, Public health, Volunteering, outreach, public service, Women, Women in STEMJosh McDermott receives NAS Troland Research Awardhttp://news.mit.edu/2018/mcdermott-receives-nas-research-award-0117
Cognitive scientist is recognized for groundbreaking research into how humans hear and interpret sound.Wed, 17 Jan 2018 14:15:01 -0500Sara Cody | Department of Brain and Cognitive Scienceshttp://news.mit.edu/2018/mcdermott-receives-nas-research-award-0117<p>Josh McDermott, assistant professor in the Department of Brain and Cognitive Sciences, is a recipient of the Troland Research Award from the National Academy of Sciences (NAS). McDermott, a cognitive scientist, is recognized for his unique engineering approach to studying audition; the award cites his “groundbreaking discoveries about how people hear and interpret information from sound in order to make sense of the world around them.”</p>
<p>McDermott’s research goal is to better understand how the brain derives information from sound, by studying how it interprets signals from acoustic sensor arrays (in the ears) to make inferences about the environment. McDermott has pioneered new approaches to studying audition by applying a combination of cognitive principles (learning, memory, attention, etc.), neuronal and behavioral experimental data, and computational models.</p>
<p>His experimental approach to studying audition has yielded important insights into the effects of reverberation on hearing, how humans extract pitch and perceive music across cultures, and the structure and function of the human auditory cortex. McDermott’s work has the potential to improve our ability to treat hearing impairment and to build machines that better interpret sound by mirroring human capabilities.</p>
<p>The Troland Research Award is given annually to two young investigators who further empirical research within the broad spectrum of experimental psychology. <a href="http://www.nasonline.org/news-and-multimedia/news/2018-nas-awards-recipients.html">McDermott and 18 other investigators</a> being honored by the NAS in various capacities will receive their awards in a ceremony on April 29 during the academy's 155th annual meeting.&nbsp;</p>
Josh McDermottPhoto courtesy of the Department of Brain and Cognitive SciencesBrain and cognitive sciences, School of Science, Neuroscience, Awards, honors and fellowships, FacultyStudy: Rhythmic interactions between cortical layers underlie working memoryhttp://news.mit.edu/2018/rhythmic-interactions-cortical-layers-control-working-memory-0115
MIT neuroscientists suggest a model for how we gain volitional control of what we hold in our minds.Mon, 15 Jan 2018 15:00:07 -0500David Orenstein | Picower Institute for Learning and Memoryhttp://news.mit.edu/2018/rhythmic-interactions-cortical-layers-control-working-memory-0115<p>Working memory is a sort of “mental sketchpad” that allows you to accomplish everyday tasks such as calling in your hungry family’s takeout order and finding the bathroom you were just told “will be the third door on the right after you walk straight down that hallway and make your first left.” It also allows your mind to go from merely responding to your environment to consciously asserting your agenda.</p>
<p>“Working memory allows you to choose what to pay attention to, choose what you hold in mind, and choose when to make decisions and take action,” says Earl K. Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “It’s all about wresting control from the environment to your own self. Once you have something like working memory, you go from being a simple creature that’s buffeted by the environment to a creature that can control the environment.”</p>
<p>For years Miller has been curious about how working memory — particularly the volitional control of it — actually works. In a new study in the <em>Proceedings of the National Academy of Sciences</em> led by Picower Institute postdoc Andre Bastos, Miller's lab shows that the underlying mechanism depends on different frequencies of brain rhythms synchronizing neurons in distinct layers of the prefrontal cortex (PFC), the area of the brain associated with higher cognitive function. As animals performed a variety of working memory tasks, higher-frequency gamma rhythms in superficial layers of the PFC were regulated by lower-frequency alpha/beta frequency rhythms in deeper cortical layers.</p>
<p>The findings suggest not only a general model of working memory, and the volition that makes it special, but also new ways that clinicians might investigate conditions such as schizophrenia where working memory function appears compromised.</p>
<p><strong>Layers of waves</strong></p>
<p>To conduct the study, Bastos worked from several lines of evidence and with some relatively new technology. Last year, for example, co-author and Picower Institute postdoc Mikael Lundqvist led a study showing that gamma waves perked up in power when sensory (neuroscientists call it “bottom-up”) information was loaded into and read out from working memory. In previous work, Miller, Bastos, and their colleagues had found that alpha/beta rhythms appeared to carry “top-down” information about goals and plans within the cortex. Top-down information is what we use to make volitional decisions about what to think about or how to act, Miller says.</p>
<p>The current study benefitted from newly improved multilayer electrode brain sensors that few groups have applied in cognitive, rather than sensory, areas of the cortex. Bastos realized that if he made those measurements, he and Miller could determine whether deep alpha/beta and superficial gamma might interact for volitional control of working memory.</p>
<p>In the lab Bastos and his co-authors, including graduate students Roman Loonis and Simon Kornblith, made multilayer measurements in six areas of the PFC as animals performed three different working memory tasks.</p>
<p>In one type of task, animals had to hold a picture in working memory and subsequently choose the picture that matched it. In another, they had to remember the screen location of a briefly flashed dot. Overall, the tasks asked the subjects to store, process, and then discard from working memory either the appearance or the position of visual stimuli.</p>
<p>“Combining data across the tasks and the areas does lead to additional weight for the evidence,” Bastos says.</p>
<p><strong>A mechanism for working memory</strong></p>
<p>Across all the PFC areas and all tasks, the data showed the same thing: When sensory information was loaded into working memory, the gamma rhythms in superficial layers increased and the alpha/beta rhythms in deep layers that carried the top-down information decreased. Conversely, when deep-layer alpha/beta rhythms increased, superficial layer gamma waned. Subsequent statistical analysis suggested that gamma was being controlled by alpha and beta rhythms, rather than the other way around.</p>
<p>“This suggests mechanisms by which the top-down information needed for volitional control, carried by alpha/beta rhythms, can turn on and off the faucet of bottom-up sensory information, carried by gamma, that reaches working memory and is held in mind,” Miller says.</p>
<p>With these insights, the team has since worked to directly test this multilayer, multifrequency model of working memory dynamics more explicitly, with results in press but not yet published.</p>
<p>Charles Schroeder, research scientist and section head in the Center for Biomedical Imaging and Neuromodulation at the Nathan S. Kline Institute for Psychiatric Research, describes two contributions of the study as empirically important.</p>
<p>“First, the paper clearly shows that critical cognitive operations (in this case working memory) are underlain by periodic (oscillatory) network activity patterns in the brain, and that these must be addressed by single trial analysis,” Schroeder says. “This provides an important conceptual alternative to the idea that working memory must involve continuous neural activation. Secondly, the findings strongly reinforce the notion that dynamic coupling across high- and low-frequency ranges performs a clear mechanistic function: Lower frequency activity dominant in the lower layers of the prefrontal area network controls the temporal patterning of higher frequency information representation in the superficial layers of the same network of areas. The important conceptual innovation in this case lies in allowing lower frequency control operations to act directly on higher frequency information representation within each cortical area.”</p>
<p>Bastos says the model could be useful for generating hypotheses about clinical working memory deficits. Aberrations of deep-layer beta rhythms, for example, could lead to a lessened ability to control working memory for goal-directed action. “In a schizophrenia model or schizophrenia patients, is the interplay between beta and gamma lost?” he asks.</p>
<p>The National Institute of Mental Health and the Office of Naval Research provided funding for the study.</p>
Research, Brain and cognitive sciences, Picower Institute, School of Science, Memory, NeuroscienceHow the brain selectively remembers new placeshttp://news.mit.edu/2017/how-brain-selectively-remembers-new-places-1225
Neuroscientists identify a circuit that helps the brain record memories of new locations.Mon, 25 Dec 2017 14:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2017/how-brain-selectively-remembers-new-places-1225<p>When you enter a room, your brain is bombarded with sensory information. If the room is a place you know well, most of this information is already stored in long-term memory. However, if the room is unfamiliar to you, your brain creates a new memory of it almost immediately.</p>
<p>MIT neuroscientists have now discovered how this occurs. A small region of the brainstem, known as the locus coeruleus, is activated in response to novel sensory stimuli, and this activity triggers the release of a flood of dopamine into a certain region of the hippocampus to store a memory of the new location.</p>
<p>“We have the remarkable ability to memorize some specific features of an experience in an entirely new environment, and such ability is crucial for our adaptation to the constantly changing world,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory.</p>
<p>“This study opens an exciting avenue of research into the circuit mechanism by which behaviorally relevant stimuli are specifically encoded into long-term memory, ensuring that important stimuli are stored preferentially over incidental ones,” adds Tonegawa, the senior author of the study.</p>
<p>Akiko Wagatsuma, a former MIT research scientist, is the lead author of the study, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of Dec. 25.</p>
<p><strong>New places</strong></p>
<p>In a study published about 15 years ago, Tonegawa’s lab found that a part of the hippocampus called the CA3 is responsible for forming memories of novel environments. They hypothesized that the CA3 receives a signal from another part of the brain when a novel place is encountered, stimulating memory formation.</p>
<p>They believed this signal to be carried by chemicals known as neuromodulators, which influence neuronal activity. The CA3 receives neuromodulators from both the locus coeruleus (LC) and a region called the ventral tegmental area (VTA), which is a key part of the brain’s reward circuitry. The researchers decided to focus on the LC because it has been shown to project to the CA3 extensively and to respond to novelty, among many other functions.</p>
<p>The LC responds to an array of sensory input, including visual information as well as sound and odor, then sends information on to other brain areas, including the CA3. To uncover the role of LC-CA3 communication, the researchers genetically engineered mice so that they could block the neuronal activity between those regions by shining light on neurons that form the connection.</p>
<p>To test the mice’s ability to form new memories, the researchers placed the mice in a large open space that they had never seen before. The next day, they placed them in the same space again. Mice whose LC-CA3 connections were not disrupted spent much less time exploring the space on the second day, because the environment was already familiar to them. However, when the researchers interfered with the LC-CA3 connection during the first exposure to the space, the mice explored the area on the second day just as much as they had on the first. This suggests that they were unable to form a memory of the new environment.</p>
<p>The LC appears to exert this effect by releasing the neuromodulator dopamine into the CA3 region, which was surprising because the LC is known to be a major source of norepinephrine to the hippocampus. The researchers believe that this influx of dopamine helps to boost CA3’s ability to strengthen synapses and form a memory of the new location.</p>
<p>They found that this mechanism was not required for other types of memory, such as memories of fearful events, but appears to be specific to memory of new environments. The connections between the LC and CA3 are necessary for long-term spatial memories to form in CA3.</p>
<p>“The selectivity of successful memory formation has long been a puzzle,” says Richard Morris, a professor of neuroscience at the University of Edinburgh, who was not involved in the research. “This study goes a long way toward identifying the brain mechanisms of this process. Activity in the pathway between the locus coeruleus and CA3 occurs most strongly during novelty, and it seems that activity fixes the representations of everyday experience, helping to register and retain what’s been happening and where we’ve been.”</p>
<p><strong>Choosing to remember</strong></p>
<p>This mechanism likely evolved as a way to help animals survive, allowing them to remember new environments without wasting brainpower on recording places that are already familiar, the researchers say.</p>
<p>“When we are exposed to sensory information, we unconsciously choose what to memorize. For an animal’s survival, certain things are necessary to be remembered, and other things, familiar things, probably can be forgotten,” Wagatsuma says.</p>
<p>Still unknown is how the LC recognizes that an environment is new. The researchers hypothesize that some part of the brain is able to compare new environments with stored memories or with expectations of the environment, but more studies are needed to explore how this might happen.</p>
<p>“That’s the next big question,” Tonegawa says. “Hopefully new technology will help to resolve that.”</p>
<p>The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.</p>
The image shows the locus coeruleus, which drives neuronal circuits of the hippocampus and enables novel contextual memory. The red staining shows norepinephrine transporter (NET)-positive cells, indicating the locus coeruleus. The green staining shows adeno-associated virus (AAV)-mediated expressions of light-sensitive inhibitory opsin, archaerhodopsin (Arch). The blue staining shows all cells in the brain stem.
Image: Akiko Wagatsuma, Tonegawa LabResearch, Brain and cognitive sciences, Picower Institute, School of Science, MemoryRecalculating timehttp://news.mit.edu/2017/mit-researchers-algorithm-enables-statistical-analysis-time-series-data-1221
A novel algorithm enables statistical analysis of time series data.Thu, 21 Dec 2017 17:30:00 -0500Sara Cody | Brain and Cognitive Scienceshttp://news.mit.edu/2017/mit-researchers-algorithm-enables-statistical-analysis-time-series-data-1221<p>Whether it’s tracking brain activity in the operating room, seismic vibrations during an earthquake, or biodiversity in a single ecosystem over a million years, measuring the frequency of an occurrence over a period of time is a fundamental data analysis task that yields critical insight in many scientific fields. But when it comes to analyzing these time series data, researchers are limited to looking at pieces of the data at a time to assemble the big picture, instead of being able to look at the big picture all at once.</p>
<p>In a new study, MIT researchers have developed a novel approach to analyzing time series data sets using a new algorithm, termed state-space multitaper time-frequency analysis (SS-MT). SS-MT provides a framework to analyze time series data in real time, enabling researchers to work in a more informed way with large data sets that are nonstationary, i.e., whose characteristics evolve over time. It allows researchers not only to quantify the shifting properties of data but also to make formal statistical comparisons between arbitrary segments of the data.</p>
<p>“The algorithm functions similarly to the way a GPS calculates your route when driving. If you stray away from your predicted route, the GPS triggers the recalculation to incorporate the new information,” says Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, a member of the Picower Institute for Learning and Memory, associate director of the Institute for Medical Engineering and Science, and senior author on the study.</p>
<p>“This allows you to use what you have already computed to get a more accurate estimate of what you’re about to compute in the next time period,” Brown says. “Current approaches to analyses of long, nonstationary time series ignore what you have already calculated in the previous interval leading to an enormous information loss.”</p>
<p>In the study, Brown and his colleagues combined the strengths of two existing statistical analysis paradigms: state-space modeling and multitaper methods. State-space modeling is a flexible paradigm that has been broadly applied to analyze data whose characteristics evolve over time; examples include GPS positioning, the tracking of learning, and speech recognition. Multitaper methods are optimal for computing spectra on a finite interval. Combined, the two methods bring together the local optimality properties of the multitaper approach with the state-space framework’s ability to share information across intervals, producing an analysis paradigm that provides increased frequency resolution, increased noise reduction, and formal statistical inference.</p>
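The recursive idea behind combining intervals can be sketched as follows. This is a deliberately simplified toy, not the published SS-MT algorithm: each frequency bin's power is treated as a slowly drifting latent state, updated with each interval's new multitaper estimate by a scalar Kalman filter.

```python
# Toy sketch (hypothetical simplification, not the authors' SS-MT code):
# blend the previous interval's spectral estimate with the new observation,
# so information already computed is carried forward instead of discarded.

def kalman_update(prior_mean, prior_var, observation, obs_var):
    """One scalar Kalman update: blend the prior estimate with a new observation."""
    gain = prior_var / (prior_var + obs_var)      # how much to trust the new data
    mean = prior_mean + gain * (observation - prior_mean)
    var = (1.0 - gain) * prior_var
    return mean, var

def smooth_spectrogram(mt_estimates, process_var=0.5, obs_var=2.0):
    """Recursively smooth per-interval multitaper power estimates.

    mt_estimates: list of intervals, each a list of per-frequency power values.
    Returns a smoothed spectrogram of the same shape.
    """
    n_freqs = len(mt_estimates[0])
    means = list(mt_estimates[0])                 # initialize with first interval
    variances = [1.0] * n_freqs
    smoothed = [list(means)]
    for obs in mt_estimates[1:]:
        for f in range(n_freqs):
            prior_var = variances[f] + process_var   # predict: the state may drift
            means[f], variances[f] = kalman_update(means[f], prior_var, obs[f], obs_var)
        smoothed.append(list(means))
    return smoothed
```

Because each interval's estimate borrows strength from the intervals before it, the smoothed spectrogram is less noisy than one built from independent intervals, which is the intuition behind the GPS recalculation analogy.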
<p>To test the SS-MT algorithm, Brown and colleagues first analyzed electroencephalogram (EEG) recordings measuring brain activity from patients receiving general anesthesia for surgery. The SS-MT algorithm provided a highly denoised spectrogram characterizing the changes in power across frequencies over time. In a second example, they used the SS-MT’s inference paradigm to compare different levels of unconsciousness in terms of the differences in the spectral properties of these behavioral states.</p>
<p>“The SS-MT analysis produces cleaner, sharper spectrograms,” says Brown. “The more background noise we can remove from a spectrogram, the easier it is to carry out formal statistical analyses.”</p>
<p>Going forward, Brown and his team will use this method to investigate in detail the nature of the brain’s dynamics under general anesthesia. He further notes that the algorithm could find broad use in other applications of time-series analyses.</p>
<p>“Spectrogram estimation is a standard analytic technique applied commonly in a number of problems such as analyzing solar variations, seismic activity, stock market activity, neuronal dynamics and many other types of time series,” says Brown. “As use of sensor and recording technologies becomes more prevalent, we will need better, more efficient ways to process data in real time. Therefore, we anticipate that the SS-MT algorithm could find many new areas of application.”</p>
<p>Seong-Eun Kim, Michael K. Behr, and Demba E. Ba are lead authors of the paper, which was published online the week of Dec. 18 in <em>Proceedings of the National Academy of Sciences PLUS</em>. This work was partially supported by a National&nbsp;Research Foundation of Korea Grant, Guggenheim Fellowships in Applied Mathematics, the National Institutes of Health including NIH Transformative Research Awards, funds from Massachusetts General Hospital, and funds from the Picower Institute for Learning and Memory.</p>
Using a novel analytical method they have developed, MIT researchers analyzed raw brain activity data (B). The spectrogram shows decreased noise and increased frequency resolution, or contrast (E and F) compared to standard spectral analysis methods (C and D). Image courtesy of Seong-Eun Kim et al.School of Science, Research, Algorithms, Brain and cognitive sciences, Picower Institute, Institute for Medical Engineering and Science (IMES)Computer systems predict objects’ responses to physical forceshttp://news.mit.edu/2017/computer-systems-predict-objects-responses-physical-forces-1214
Results may help explain how humans do the same thing.Wed, 13 Dec 2017 23:59:59 -0500Larry Hardesty | MIT News Officehttp://news.mit.edu/2017/computer-systems-predict-objects-responses-physical-forces-1214<p>Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.</p>
<p>Presenting their work at this year’s Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities that an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.</p>
<p>By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.</p>
<p>“The common theme here is really learning to perceive physics,” Tenenbaum says. “That starts with seeing the full 3-D shapes of objects, and multiple objects in a scene, along with their physical properties, like mass and friction, then reasoning about how these objects will move over time. Jiajun’s four papers address this whole space. Taken together, we’re starting to be able to build machines that capture more and more of people’s basic understanding of the physical world.”</p>
<p>Three of the papers deal with inferring information about the physical structure of objects, from both visual and aural data. The fourth deals with predicting how objects will behave on the basis of that data.</p>
<p><strong>Two-way street</strong></p>
<p>Something else that unites all four papers is their unusual approach to machine learning, a technique in which computers learn to perform computational tasks by analyzing huge sets of training data. In a typical machine-learning system, the training data are labeled: Human analysts will have, say, identified the objects in a visual scene or transcribed the words of a spoken sentence. The system attempts to learn what features of the data correlate with what labels, and it’s judged on how well it labels previously unseen data.</p>
<p>In Wu and Tenenbaum’s new papers, the system is trained to infer a physical model of the world — the 3-D shapes of objects that are mostly hidden from view, for instance. But then it works backward, using the model to resynthesize the input data, and its performance is judged on how well the reconstructed data matches the original data.</p>
<p>For instance, using visual images to build a 3-D model of an object in a scene requires stripping away any occluding objects; filtering out confounding visual textures, reflections, and shadows; and inferring the shape of unseen surfaces. Once Wu and Tenenbaum’s system has built such a model, however, it rotates it in space and adds visual textures back in until it can approximate the input data.</p>
<p>Indeed, two of the researchers’ four papers address the complex problem of inferring 3-D models from visual data. On those papers, they’re joined by four other MIT researchers, including William Freeman, the Perkins Professor of Electrical Engineering and Computer Science, and by colleagues at DeepMind, ShanghaiTech University, and Shanghai Jiao Tong University.</p>
<p><strong>Divide and conquer</strong></p>
<p>The researchers’ system is based on the influential theories of the MIT neuroscientist <a href="https://mitpress.mit.edu/books/vision-0">David Marr</a>, who died in 1980 at the tragically young age of 35. Marr hypothesized that in interpreting a visual scene, the brain first creates what he called a 2.5-D sketch of the objects it contained — a representation of just those surfaces of the objects facing the viewer. Then, on the basis of the 2.5-D sketch — not the raw visual information about the scene — the brain infers the full, three-dimensional shapes of the objects.</p>
<p>“Both problems are very hard, but there’s a nice way to disentangle them,” Wu says. “You can do them one at a time, so you don’t have to deal with both of them at the same time, which is even harder.”</p>
<p>Wu and his colleagues’ system needs to be trained on data that include both visual images and 3-D models of the objects the images depict. Constructing accurate 3-D models of the objects depicted in real photographs would be prohibitively time consuming, so initially, the researchers train their system using synthetic data, in which the visual image is generated from the 3-D model, rather than vice versa. The process of creating the data is like that of creating a computer-animated film.</p>
<p>Once the system has been trained on synthetic data, however, it can be fine-tuned using real data. That’s because its ultimate performance criterion is the accuracy with which it reconstructs the input data. It’s still building 3-D models, but they don’t need to be compared to human-constructed models for performance assessment.</p>
<p>In evaluating their system, the researchers used a measure called intersection over union, which is common in the field. On that measure, their system outperforms its predecessors. But a given intersection-over-union score leaves a lot of room for local variation in the smoothness and shape of a 3-D model. So Wu and his colleagues also conducted a qualitative study of the models’ fidelity to the source images. Of the study’s participants, 74 percent preferred the new system’s reconstructions to those of its predecessors.</p>
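Intersection over union itself is a simple ratio. For voxelized 3-D models it can be computed as below (a generic illustration of the metric, not the paper's evaluation code):

```python
def intersection_over_union(voxels_a, voxels_b):
    """IoU of two shapes given as collections of occupied voxel coordinates.

    Returns |A intersect B| / |A union B|: 1.0 for identical shapes,
    0.0 for completely disjoint ones.
    """
    a, b = set(voxels_a), set(voxels_b)
    union = a | b
    if not union:                      # both empty: define IoU as a perfect match
        return 1.0
    return len(a & b) / len(union)

# A predicted model sharing 2 of 3 ground-truth voxels, plus one spurious voxel:
truth = [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
pred  = [(0, 0, 0), (0, 0, 1), (1, 1, 1)]
score = intersection_over_union(truth, pred)   # 2 shared / 4 total = 0.5
```

As the paragraph above notes, two reconstructions can share an IoU score yet differ in local smoothness and shape, which is why the authors added a qualitative human study.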
<p><strong>All that fall</strong></p>
<p>In another of Wu and Tenenbaum’s papers, on which they’re joined again by Freeman and by researchers at MIT, Cambridge University, and ShanghaiTech University, they train a system to analyze audio recordings of an object being dropped, to infer properties such as the object’s shape, its composition, and the height from which it fell. Again, the system is trained to produce an abstract representation of the object, which, in turn, it uses to synthesize the sound the object would make when dropped from a particular height. The system’s performance is judged on the similarity between the synthesized sound and the source sound.</p>
<p>Finally, in their fourth paper, Wu, Tenenbaum, Freeman, and colleagues at DeepMind and Oxford University describe a system that begins to model humans’ intuitive understanding of the physical forces acting on objects in the world. This paper picks up where the previous papers leave off: It assumes that the system has already deduced objects’ 3-D shapes.</p>
<p>Those shapes are simple: balls and cubes. The researchers trained their system to perform two tasks. The first is to estimate the velocities of balls traveling on a billiard table and, on that basis, to predict how they will behave after a collision. The second is to analyze a static image of stacked cubes and determine whether they will fall and, if so, where the cubes will land.</p>
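The billiard-ball prediction task ultimately rests on Newtonian collision dynamics. For two balls colliding head-on, the post-collision velocities follow from conservation of momentum and kinetic energy (a textbook formula, not the paper's physics engine):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a head-on elastic collision.

    Derived from conservation of momentum (m1*v1 + m2*v2 is unchanged) and
    of kinetic energy; for equal masses the balls simply exchange velocities.
    """
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal-mass billiard balls exchange velocities: a moving ball striking a
# stationary one stops, and the struck ball moves off at the incoming speed.
u1, u2 = elastic_collision_1d(1.0, 2.0, 1.0, 0.0)   # -> (0.0, 2.0)
```

A physics engine applies steps like this, plus friction and geometry, at every simulation tick; the learned system's job is to supply the engine with accurate estimates of the balls' current velocities.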
<p>Wu developed a representational language he calls scene XML that can quantitatively characterize the relative positions of objects in a visual scene. The system first learns to describe input data in that language. It then feeds that description to something called a physics engine, which models the physical forces acting on the represented objects. Physics engines are a staple of both computer animation, where they generate the movement of clothing, falling objects, and the like, and of scientific computing, where they’re used for large-scale physical simulations.</p>
<p>After the physics engine has predicted the motions of the balls and boxes, that information is fed to a graphics engine, whose output is, again, compared with the source images. As with the work on visual discrimination, the researchers train their system on synthetic data before refining it with real data.</p>
<p>In tests, the researchers’ system again outperformed its predecessors. In fact, in some of the tests involving billiard balls, it frequently outperformed human observers as well.</p>
<p>"The key insight behind their work is utilizing forward physical tools — a renderer, a simulation engine, trained models, sometimes — to train generative models," says Joseph Lim, an assistant professor of computer science at the University of Southern California. "This simple yet elegant idea combined with recent state-of-the-art deep-learning techniques showed great results on multiple tasks related to interpreting the physical world."</p>
As part of an investigation into the nature of humans' physical intuitions, MIT researchers trained a neural network to predict how unstably stacked blocks would respond to the force of gravity.
Image: Christine Daniloff/MITResearch, Center for Brains Minds and Machines, School of Engineering, School of Science, Artificial intelligence, Brain and cognitive sciences, Computer modeling, Computer Science and Artificial Intelligence Laboratory (CSAIL), Computer science and technology, Electrical Engineering & Computer Science (eecs), Machine learningHow the brain keeps timehttp://news.mit.edu/2017/networks-neurons-stretch-compress-control-timing-1204
Neuroscientists discover networks of neurons that stretch or compress their activity to control timing. Mon, 04 Dec 2017 10:59:59 -0500Anne Trafton | MIT News Officehttp://news.mit.edu/2017/networks-neurons-stretch-compress-control-timing-1204<p>Timing is critical for playing a musical instrument, swinging a baseball bat, and many other activities. Neuroscientists have come up with several models of how the brain achieves its exquisite control over timing, the most prominent being that there is a centralized clock, or pacemaker, somewhere in the brain that keeps time for the entire brain.</p>
<p>However, a new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.</p>
<p>“What we found is that it’s a very active process. The brain is not passively waiting for a clock to reach a particular point,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.</p>
<p>MIT postdoc Jing Wang and former postdoc Devika Narain are the lead authors of the paper, which appears in the Dec. 4 issue of <em>Nature Neuroscience</em>. Graduate student Eghbal Hosseini is also an author of the paper.</p>
<p><strong>Flexible control</strong></p>
<p>One of the earliest models of timing control, known as the clock accumulator model, suggested that the brain has an internal clock or pacemaker that keeps time for the rest of the brain. A later variation of this model suggested that instead of using a central pacemaker, the brain measures time by tracking the synchronization between different brain wave frequencies.</p>
<p>Although these clock models are intuitively appealing, Jazayeri says, “they don’t match well with what the brain does.”</p>
<p>No one has found evidence for a centralized clock, and Jazayeri and others wondered if parts of the brain that control behaviors that require precise timing might perform the timing function themselves. “People now question why would the brain want to spend the time and energy to generate a clock when it’s not always needed. For certain behaviors you need to do timing, so perhaps the parts of the brain that subserve these functions can also do timing,” he says.</p>
<p>To explore this possibility, the researchers recorded neuron activity from three brain regions in animals as they performed a task at two different time intervals — 850 milliseconds or 1,500 milliseconds.</p>
<p>The researchers found a complicated pattern of neural activity during these intervals. Some neurons fired faster, some fired slower, and some that had been oscillating began to oscillate faster or slower. However, the researchers’ key discovery was that no matter the neurons’ response, the rate at which they adjusted their activity depended on the time interval required.</p>
<p>At any point in time, a collection of neurons is in a particular “neural state,” which changes over time as each individual neuron alters its activity in a different way. To execute a particular behavior, the entire system must reach a defined end state. The researchers found that the neurons always traveled the same trajectory from their initial state to this end state, no matter the interval. The only thing that changed was the rate at which the neurons traveled this trajectory.</p>
<p>When the interval required was longer, this trajectory was “stretched,” meaning the neurons took more time to evolve to the final state. When the interval was shorter, the trajectory was compressed.</p>
<p>“What we found is that the brain doesn’t change the trajectory when the interval changes, it just changes the speed with which it goes from the initial internal state to the final state,” Jazayeri says.</p>
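The stretch-and-compress picture above can be sketched in a few lines of Python. The trajectory function and the numbers below are purely illustrative, not the study's recorded data; the point is that both intervals traverse the identical path, and only the speed differs:

```python
import numpy as np

def neural_state(phase):
    # Stereotyped trajectory through a toy 3-neuron state space,
    # parameterized by phase in [0, 1] (initial state -> end state).
    return np.array([np.sin(np.pi * phase),
                     phase ** 2,
                     1.0 - np.cos(np.pi * phase)])

def trajectory(interval_ms, dt_ms=50):
    # Traverse the SAME path; only the phase increment per time step
    # changes with the required interval.
    steps = int(interval_ms / dt_ms)
    phases = np.linspace(0.0, 1.0, steps + 1)
    return np.array([neural_state(p) for p in phases])

short = trajectory(850)    # fewer time steps: compressed traversal
long_ = trajectory(1500)   # more time steps: stretched traversal

# Both runs start and end in identical neural states.
assert np.allclose(short[0], long_[0])
assert np.allclose(short[-1], long_[-1])
```

Plotting `short` and `long_` against their own time axes would show the same curve through state space, traced out quickly for the 850-millisecond interval and slowly for the 1,500-millisecond one.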
<p>Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles, says that the study “provides beautiful evidence that timing is a distributed process in the brain — that is, there is no single master clock.”</p>
<p>“This work also supports the notion that the brain does not tell time using a clock-like mechanism, but rather relies on the dynamics inherent to neural circuits, and that as these dynamics increase and decrease in speed, animals move more quickly or slowly,” adds Buonomano, who was not involved in the research.</p>
<p><strong>Neural networks</strong></p>
<p>The researchers focused their study on a brain loop that connects three regions: the dorsomedial frontal cortex, the caudate, and the thalamus. They found this distinctive neural pattern in the dorsomedial frontal cortex, which is involved in many cognitive processes, and the caudate, which is involved in motor control, inhibition, and some types of learning. However, in the thalamus, which relays motor and sensory signals, they found a different pattern: Instead of altering the speed of their trajectory, many of the neurons simply increased or decreased their firing rate, depending on the interval required.</p>
<p>Jazayeri says this finding is consistent with the possibility that the thalamus is instructing the cortex on how to adjust its activity to generate a certain interval.</p>
<p>The researchers also created a computer model to help them further understand this phenomenon. They began with a model of hundreds of neurons connected together in random ways, and then trained it to perform the same interval-producing task they had used to train animals, offering no guidance on how the model should perform the task.</p>
<p>They found that these neural networks ended up using the same strategy that they observed in the animal brain data. A key discovery was that this strategy works only if some of the neurons have nonlinear activity — that is, their output does not grow in proportion to their input. Instead, as they receive more input, their output increases at a slower rate.</p>
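A saturating nonlinearity of this kind can be sketched with a `tanh` unit. This is one common choice in network models; the article does not specify the exact activation function used in the study:

```python
import numpy as np

def unit_output(x):
    # Saturating (nonlinear) unit: output approaches 1 as input grows.
    return np.tanh(x)

inputs = np.array([0.5, 1.0, 2.0, 4.0])
outputs = unit_output(inputs)

# Marginal output gained per unit of additional input.
gains = np.diff(outputs) / np.diff(inputs)

# The gains shrink monotonically: more input, ever slower growth.
assert all(gains[i] > gains[i + 1] for i in range(len(gains) - 1))
```

A linear unit would have constant gains; it is this flattening of the response that the trained networks needed in order to reproduce the stretch-and-compress strategy.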
<p>Jazayeri now hopes to explore further how the brain generates the neural patterns seen during varying time intervals, and also how our expectations influence our ability to produce different intervals.</p>
<p>The research was funded by the Rubicon Grant from the Netherlands Scientific Organization, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.</p>
A new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.
Image: Christine Daniloff/MIT
Research, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience
Three MIT seniors awarded 2018 Marshall Scholarships
http://news.mit.edu/2017/three-mit-seniors-awarded-marshall-scholarships-1204
Nick Schwartz, Olivia Zhao, and Liang Zhou will pursue graduate studies in the United Kingdom.
Mon, 04 Dec 2017 00:00:00 -0500
Julia Mongo | MIT News Office
http://news.mit.edu/2017/three-mit-seniors-awarded-marshall-scholarships-1204
<p>Three MIT students — Nick Schwartz, Olivia Zhao, and Liang Zhou — have been named winners of the prestigious Marshall Scholarship. Funded by the British government, the Marshall Scholarship program supports one or two years of graduate study in any field at a U.K. institution. This year, Marshall Scholarships were awarded to 43 top American students.</p>
<p>This year’s awards continue MIT’s strong showing; last year, MIT had four Marshall Scholar winners, the largest number of any university. Scholars are selected on the basis of academic merit, leadership, and ambassadorial potential to strengthen U.S.-U.K. understanding. Established in 1953, the Marshall Scholarship is named in honor of General George C. Marshall and celebrates the legacy of the Marshall Plan.</p>
<p>The student scholars were supported by MIT’s Office of Distinguished Fellowships and the Presidential Committee on Distinguished Fellowships. “Nick, Olivia, and Liang have put in heroic efforts this semester and throughout their time at MIT,” says Professor Rebecca Saxe, who co-chairs the committee along with Professor Will Broadhead. “We are all so proud of their accomplishments. It’s one of the best parts of our jobs, getting to play a small part in helping these students realize their visions.”&nbsp;</p>
<p><strong>Nick Schwartz</strong></p>
<p>Nick Schwartz, from Germantown, Tennessee, is an MIT senior majoring in mechanical engineering. Schwartz aims to combine his passion for complex engineering and humanitarian goals to become a leader in nuclear fusion technology, which he considers a transformative and cleaner renewable method for addressing the world’s energy crisis. As a Marshall Scholar, Schwartz plans to pursue a master’s degree in physics with extended research at Imperial College London.</p>
<p>At MIT, Schwartz has sought to engage in work that can improve people’s lives through engineering. At MIT’s D-Lab, he collaborated on the design of a pediatric transtibial prosthetic liner for use in developing countries, and traveled to Kenya and Ethiopia to field-test the product. He has also worked with the Practical Education Network in Ghana, enhancing STEM curricula with video tutorials.&nbsp;Schwartz has interned at Intuitive Surgical, where he worked on improving a surgical robot vision system, and at the CEiiA Center for Innovation and Creative Engineering, where he helped enhance drone technology. This past summer, he was a mechanical design intern at Tri Alpha Energy, a company that seeks to build a nuclear fusion reactor. Schwartz helped manufacture the pod levitation system for the MIT Hyperloop team, which finished among the final three teams in the 2017 international competition sponsored by SpaceX.</p>
<p>Schwartz is also dedicated to child advocacy. For the past two years, he has been an active executive member of Camp Kesem MIT and as coordinator of development has raised over $320,000 to fund the camp experience for children whose parents have been affected by cancer. Schwartz has also served as a program mentor for the MIT-student-led Leadership Training Institute, where he mentored local underserved high school students and advised them on service projects during a 12-week spring semester program. He is currently a mentor at MakerWorks, a student-run machine shop, where he guides other students in designing and building their projects.&nbsp;</p>
<p><strong>Olivia Zhao</strong></p>
<p>Olivia Zhao, from Overland Park, Kansas, is an MIT senior majoring in economics with minors in public policy and mathematics. As a Marshall Scholar, Zhao will pursue a Master of Philosophy in economics at Oxford before returning to the U.S. to earn her doctorate.</p>
<p>As a freshman, Zhao conducted independent research on the influence of public opinion, partisan control, and election results on state- and municipal-level policy with Associate Professor Christopher Warshaw in the Department of Political Science. She went on to do research with Jonathan Gruber, the Ford Professor of Economics, on a variety of topics, including health care in India, tobacco use in New York state, workers’ compensation laws, and long-term care for the elderly. Zhao was named a Burchard Scholar for excellence in her social science studies, and she serves as an officer of the Undergraduate Economics Association, where she promotes learning about economics and the social sciences.</p>
<p>Zhao has analyzed polling data as a summer intern with the Washington political consulting firm Greenberg Quinlan Rosner Research. A lifelong Kansas City Royals fan, Zhao had the opportunity to combine her love for baseball and economics this past summer at the Federal Reserve Bank of Chicago, where, as an economic research intern, she conducted modeling on the financial incentives for minor league baseball players to use performance-enhancing drugs.</p>
<p>Outside of her economics work, Zhao has been an active volunteer for numerous causes. She currently serves as the president for the MIT Academic Teaching Initiative, a program that provides affordable SAT tutoring for low-income Boston area high school students. Through the Priscilla King Gray (PKG) Public Service Center at MIT, she volunteered as a teaching assistant for Teach for America in Milwaukee. Selected by the PKG Center as a team leader for the MIT Alternative Spring Break program, Zhao developed training sessions for volunteers at Cambridge’s Margaret Fuller Neighborhood House to discuss issues related to privilege, food insecurity, and racial inequality. She continues to collaborate with the PKG Center on developing other campus-community partnerships.</p>
<p><strong>Liang Zhou</strong></p>
<p>Liang Zhou, from Riverside, California, is an MIT senior pursuing a double major in brain and cognitive sciences and electrical engineering and computer science. Zhou hopes to contribute to a better understanding of human thought and decision-making by applying computational techniques to cognitive neuroscience research. Seeking to gain deeper insight into the social and philosophical contexts of neural thought, he will pursue a master’s in computational neuroscience at University College London. He hopes that his research will eventually help drive public policy grounded on more complete and accurate models of human behavior.</p>
<p>Zhou arrived at MIT with a multifaceted science background, having presented at the Intel International Science and Engineering Fair and placing in the top 20 nationwide in the USA Physics Olympiad. At MIT, he is an active member of Professor Josh Tenenbaum’s Computational Cognitive Science lab, conducting research on the mechanisms by which people make intuitive counterfactual judgments of structural stability and physical dynamics. He has also worked with Professor Haim Sompolinsky at Harvard University on a biologically faithful model of Hebbian neural networks. In the past, Zhou interned at Apple on the Siri Speech Engineering team, and his project Harmony (a collaborative music score editor for composers) won the 6.148 MIT Web Programming Competition his freshman year.</p>
<p>As a junior, Zhou served as a chair for EECScon, a conference which provides a venue for advanced undergraduates at MIT to present their novel research. He is also a teaching assistant for MIT’s graduate and undergraduate machine learning courses, which are among the largest classes on campus. Outside the classroom, he promotes health and wellness in his community as a MedLink liaison and was a board member and counselor for dynaMIT, a summer program which introduces STEM fields to low-income middle school students in the Boston area. In his free time, Zhou enjoys running and dancing.</p>
Marshall Scholars from left to right: Olivia Zhao, Nick Schwartz, Liang Zhou
Images: Ian MacLellan
Students, Undergraduate, Awards, honors and fellowships, Education, teaching, academics, Student life, Volunteering, outreach, public service, Mechanical engineering, Economics, Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering, SHASS, School of Science
How badly do you want something? Babies can tell
http://news.mit.edu/2017/babies-can-tell-value-goal-1123
Ten-month-old infants determine the value of a goal from how hard someone works to achieve it.
Thu, 23 Nov 2017 14:00:10 -0500
Anne Trafton | MIT News Office
http://news.mit.edu/2017/babies-can-tell-value-goal-1123
<p>Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.</p>
<p>This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.</p>
<p>“Infants are far from experiencing the world as a ‘blooming, buzzing confusion,’” says lead author Shari Liu, referring to a description by philosopher and psychologist William James about a baby’s first experience of the world. “They interpret people's actions in terms of hidden variables, including the effort [people] expend in producing those actions, and also the value of the goals those actions achieve."</p>
<p>“This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a core member of the joint MIT-Harvard Center for Brains, Minds and Machines (CBMM), and one of the paper’s authors.</p>
<p>Tenenbaum helped to direct the research team along with Elizabeth Spelke, a professor of psychology at Harvard University and CBMM core member, in whose lab the research was conducted. Liu, the paper’s lead author, is a graduate student at Harvard. CBMM postdoc Tomer Ullman is also an author of the paper, which appears in the Nov. 23 online edition of <em>Science</em>.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/MIT-Babies-intuition.gif" style="width: 595px; height: 335px;" /></p>
<p><span style="font-size:10px;"><em>To evaluate infants’ intuition regarding what other people value, researchers showed them videos in which an agent (red bouncing ball) decides whether it’s worth the effort to leap over an obstacle to reach a goal (blue cartoon character). (Courtesy of the researchers)</em></span></p>
<p><strong>Calculating value</strong></p>
<p>Previous research has shown that adults and older children can infer someone’s motivations by observing how much effort that person exerts toward obtaining a goal.</p>
<p>The Harvard/MIT team wanted to learn more about how and when this ability develops. Babies expect people to be consistent in their preferences and to be efficient in how they achieve their goals, previous studies have found. The question posed in this study was whether babies can combine what they know about a person’s goal and the effort required to obtain it, to calculate the value of that goal.</p>
<p>To answer that question, the researchers showed 10-month-old infants animated videos in which an “agent,” a cartoon character shaped like a bouncing ball, tries to reach a certain goal (another cartoon character). In one of the videos, the agent has to leap over walls of varying height to reach the goal. First, the babies saw the agent jump over a low wall and then refuse to jump over a medium-height wall. Next, the agent jumped over the medium-height wall to reach a different goal, but refused to jump over a high wall to reach that goal.</p>
<p>The babies were then shown a scene in which the agent could choose between the two goals, with no obstacles in the way. An adult or older child would assume the agent would choose the second goal, because the agent had worked harder to reach that goal in the video seen earlier. The researchers found that 10-month-olds also reached this conclusion: When the agent was shown choosing the first goal, infants looked at the scene longer, indicating that they were surprised by that outcome. (Length of looking time is commonly used to measure surprise in studies of infants.)</p>
<p>The researchers found the same results when babies watched the agents perform the same set of actions with two different types of effort: climbing ramps of varying incline and jumping across gaps of varying width.</p>
<p>“Across our experiments, we found that babies looked longer when the agent chose the thing it had exerted less effort for, showing that they infer the amount of value that agents place on goals from the amount of effort that they take toward these goals,” Liu says.</p>
<p>The findings suggest that infants are able to calculate how much another person values something based on how much effort they put into getting it.</p>
<p>“This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,” says Tenenbaum, who is also affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory.</p>
<p>Studies of infants can reveal deep commonalities in the ways that we think throughout our lives, suggests Spelke. “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people's actions,” she says.&nbsp;</p>
<p>The study shows, for the first time, that “preverbal infants can look at the world like economists,” says Gergely Csibra, a professor of cognitive science at Central European University in Hungary. “They do not simply calculate the costs and benefits of others’ actions (this had been demonstrated before), but relate these terms onto each other. In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”</p>
<p><strong>Modeling intelligence</strong></p>
<p>Over the past 10 years, scientists have developed computer models that come close to replicating how adults and older children incorporate different types of input to infer other people’s goals, intentions, and beliefs. For this study, the researchers built on that work, especially work by Julian Jara-Ettinger PhD ’16, who studied similar questions in preschool-age children. The researchers developed a computer model that can predict what 10-month-old babies would infer about an agent’s goals after observing the agent’s actions. This new model also posits an ability to calculate “work” (or total force applied over a distance) as a measure of the cost of actions, which the researchers believe babies are able to do on some intuitive level.</p>
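A minimal sketch of that cost computation, with made-up forces and wall heights (none of the numbers or labels below come from the study's actual stimuli or model):

```python
# Toy version of the cost calculation attributed to infants:
# cost of an action ~ work = force x distance, and an agent is
# assumed to value a goal at least as much as the cost it pays.

def work(force, distance):
    # Physical work as a proxy for effort cost.
    return force * distance

# Observed: the agent cleared a medium wall for goal B but refused
# anything above a low wall for goal A (hypothetical magnitudes).
cost_paid = {"A": work(force=5.0, distance=0.3),   # low wall
             "B": work(force=5.0, distance=0.6)}   # medium wall

# Inferred preference: the goal the agent worked harder to reach.
preferred = max(cost_paid, key=cost_paid.get)
assert preferred == "B"
```

An observer applying this rule would expect the agent, given a free choice with no obstacles, to approach goal B, which is exactly the prediction the looking-time experiments tested.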
<p>“Babies of this age seem to understand basic ideas of Newtonian mechanics, before they can talk and before they can count,” Tenenbaum says. “They’re putting together an understanding of forces, including things like gravity, and they also have some understanding of the usefulness of a goal to another person.”</p>
<p>Building this type of model is an important step toward developing artificial intelligence that replicates human behavior more accurately, the researchers say.</p>
<p>“We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.”</p>
<p>Still unanswered are the questions of exactly how and when these intuitive abilities arise in babies.</p>
<p>“Do infants start with a completely blank slate, and somehow they’re able to build up this sophisticated machinery? Or do they start with some rudimentary understanding of goals and beliefs, and then build up the sophisticated machinery? Or is it all just built in?” Ullman says.</p>
<p>The researchers hope that studies of even younger babies, perhaps as young as 3 months old, and computational models of learning intuitive theories that the team is also developing, may help to shed light on these questions.</p>
<p>This project was funded by the National Science Foundation through the Center for Brains, Minds, and Machines, which is based at MIT’s McGovern Institute for Brain Research and led by MIT and Harvard.</p>
To evaluate infants’ intuition regarding what other people value, researchers showed them videos in which an agent (red bouncing ball) decides whether it’s worth the effort to leap over an obstacle to reach a goal (blue cartoon character).
Courtesy of the researchers
Research, Brain and cognitive sciences, Learning, Center for Brains Minds and Machines, McGovern Institute, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Engineering, National Science Foundation (NSF)
Stress can lead to risky decisions
http://news.mit.edu/2017/stress-can-lead-risky-decisions-1116
Neuroscientists find chronic stress skews decisions toward higher-risk options.
Thu, 16 Nov 2017 11:59:59 -0500
Anne Trafton | MIT News Office
http://news.mit.edu/2017/stress-can-lead-risky-decisions-1116
<p>Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.</p>
<p>MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.</p>
<p>The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision-making.</p>
<p>“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.</p>
<p>Graybiel is the senior author of the paper, which appears in <em>Cell</em> on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.</p>
<p><strong>Hard decisions</strong></p>
<p>In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.</p>
<p>In that study, the researchers trained rodents to run a maze in which they had to choose between one option that included highly concentrated chocolate milk, which they like, along with bright light, which they don’t like, and an option with dimmer light but weaker chocolate milk. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.</p>
<p>In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.</p>
<p>Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.</p>
<p>However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.</p>
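One way to caricature this behavioral shift is a net-utility choice rule in which chronic stress down-weights costs. This is an illustration of the observed pattern only, not the paper's circuit model, and all rewards, costs, and weights are invented:

```python
# Hypothetical choice rule: utility = reward - cost_weight * cost.
# Chronic stress is modeled here as a reduced cost_weight, so the
# high-cost/high-reward option starts to win.

def choose(options, cost_weight):
    # options maps a maze arm to (reward, cost); pick max net utility.
    return max(options, key=lambda k: options[k][0] - cost_weight * options[k][1])

maze = {"bright/strong": (10.0, 6.0),  # big reward, aversive light
        "dim/weak":      (7.0, 1.0)}   # smaller reward, low cost

assert choose(maze, cost_weight=1.0) == "dim/weak"       # unstressed
assert choose(maze, cost_weight=0.1) == "bright/strong"  # stressed
```

Under this caricature, raising the reward on the dim side eventually flips an unstressed chooser but takes far longer to flip a stressed one, mirroring the rodents' persistence on the bright side.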
<p>“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.</p>
<p>The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.</p>
<p>“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the <em>Cell</em> paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.</p>
<p><strong>Circuit dynamics</strong></p>
<p>The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.</p>
<p>When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.</p>
<p>“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”</p>
<p>Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.</p>
<p>“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.</p>
<p>The research was funded by the National Institutes of Health/National Institute for Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.</p>
MIT neuroscientists have discovered that making decisions, especially when choosing between two options that have both positive and negative elements, can be dramatically affected by chronic stress.
Illustration: Christine Daniloff/MIT
Research, Behavior, Mental health, Optogenetics, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH), Defense Advanced Research Projects Agency (DARPA)
Next-generation optogenetic molecules control single neurons
http://news.mit.edu/2017/next-generation-optogenetic-molecules-control-single-neurons-1113
Focused laser beam could help scientists map connections among neurons that underlie behavior.
Mon, 13 Nov 2017 15:30:00 -0500
Anne Trafton | MIT News Office
http://news.mit.edu/2017/next-generation-optogenetic-molecules-control-single-neurons-1113
<p>Researchers at MIT and Paris Descartes University have developed a new optogenetic technique that sculpts light to target individual cells bearing engineered light-sensitive molecules, so that individual neurons can be precisely stimulated.</p>
<p>Until now, it has been challenging to use optogenetics to target single cells with such precise control over both the timing and location of the activation. This new advance paves the way for studies of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.</p>
<p>“Ideally what you would like to do is play the brain like a piano. You would want to control neurons independently, rather than having them all march in lockstep the way traditional optogenetics works, but which normally the brain doesn’t do,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.</p>
<p>The new technique relies on a new type of light-sensitive protein that can be embedded in neuron cell bodies, combined with holographic light-shaping that can focus light on a single cell.</p>
<p>Boyden and Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, are the senior authors of the study, which appears in the Nov. 13 issue of <em>Nature Neuroscience</em>. The lead authors are MIT postdoc Or Shemesh and CNRS postdocs Dimitrii Tanese and Valeria Zampini.</p>
<p><strong>Precise control</strong></p>
<p>More than 10 years ago, Boyden and his collaborators first pioneered the use of light-sensitive proteins known as microbial opsins to manipulate neuron electrical activity. These opsins can be embedded into the membranes of neurons, and when they are exposed to certain wavelengths of light, they silence or stimulate the cells.</p>
<p>Over the past decade, scientists have used this technique to study how populations of neurons behave during brain tasks such as memory recall or habit formation. Traditionally, many cells are targeted simultaneously because the light shining into the brain strikes a relatively large area. However, as Boyden points out, neurons may have different functions even when they are near each other.</p>
<p>“Two adjacent cells can have completely different neural codes. They can do completely different things, respond to different stimuli, and play different activity patterns during different tasks,” he says.</p>
<p>To achieve independent control of single cells, the researchers combined two new advances: a localized, more powerful opsin and an optimized holographic light-shaping microscope.</p>
<p>For the opsin, the researchers used a protein called CoChR, which the Boyden lab discovered in 2014. They chose this molecule because it generates a very strong electric current in response to light (about 10 times stronger than that produced by channelrhodopsin-2, the first protein used for optogenetics).</p>
<p>They fused CoChR to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body. This helps to prevent crosstalk between neurons, since light that activates one neuron can also strike axons and dendrites of other neurons that intertwine with the target neuron.</p>
<p>Boyden then worked with Emiliani to combine this approach with a light-stimulation technique that she had previously developed, known as two-photon computer-generated holography (CGH). This can be used to create three-dimensional sculptures of light that envelop a target cell.</p>
<p>Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need for any original object. Years ago, Emiliani’s research group demonstrated that, combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.</p>
<p>In the new study, by combining this approach with new opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.</p>
<p>“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” Boyden says.</p>
<p><strong>Mapping connections</strong></p>
<p>Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This paves the way for possible diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.</p>
<p>One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.</p>
<p>“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”</p>
<p>As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.</p>
<p>Kirill Volynski, a professor at the Institute of Neurology at University College London, who was not involved in the research, plans to use the new technology in his studies of diseases caused by mutations of proteins involved in synaptic communication between neurons.</p>
<p>“This gives us a very nice tool to study those mutations and those disorders,” Volynski says. “We expect this to enable a major improvement in the specificity of stimulating neurons that have mutated synaptic proteins.”</p>
<p>The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.</p>
MIT researchers have devised a way to control single neurons using optogenetics. To help achieve this, they developed an opsin, or light-sensitive protein, that can be targeted to neuron cell bodies (bottom row). Neurons in the top row have traditional opsins that are distributed throughout their axons.
Courtesy of the researchersResearch, Biological engineering, Brain and cognitive sciences, Media Lab, McGovern Institute, Optogenetics, School of Engineering, School of Science, School of Architecture and Planning, National Institutes of Health (NIH), Defense Advanced Research Projects Agency (DARPA)Promise seen in possible treatment for autism spectrum disorderhttp://news.mit.edu/2017/promise-seen-in-possible-treatment-for-autism-spectrum-disorder-asd-1031
Studies in mice show improved social interaction and cognition from a potential therapeutic for a syndrome that often results in autism. Tue, 31 Oct 2017 16:55:00 -0400Picower Institute for Learning and Memoryhttp://news.mit.edu/2017/promise-seen-in-possible-treatment-for-autism-spectrum-disorder-asd-1031<p>Human chromosome 16p11.2 deletion syndrome is caused by the absence of about 27 genes on chromosome&nbsp;16. This deletion is characterized by intellectual disability; impaired language, communication, and socialization skills;&nbsp;and autism spectrum disorder or ASD.</p>
<p>Research from the laboratories of <a href="http://picower.mit.edu/mark-bear" target="_blank">Mark Bear</a> at MIT and Jacqueline Crawley at the University of California at Davis has identified a potential therapeutic for ASD. The researchers found that R-baclofen reverses cognitive deficits and improves social interactions in two lines of 16p11.2 deletion mice.</p>
<p>The findings, <a href="http://www.nature.com/npp/journal/vaop/naam/abs/npp2017236a.html" target="_blank">published</a> in the journal <em>Neuropsychopharmacology</em>, point to a potential treatment for humans with 16p11.2 deletion syndrome and ASD.</p>
<p>“Our collaborative teams found that treatment with the drug R-baclofen improved scores on several learning and memory tasks, and on a standard assay of social behavior, in 16p11.2 mutant mice,” says&nbsp;Crawley, co-senior author of the paper along with Bear.</p>
<p>“This unique corroboration of findings by two independent labs, using two distinct lines of mice with the same mutation, increases confidence that R-baclofen may be an effective pharmacological treatment for some of the symptoms of human 16p11.2 deletion syndrome, including intellectual impairment and autism,” she says.</p>
<p>“These findings are particularly exciting on two fronts,” says Bear, who is the Picower Professor of Neuroscience at MIT. “First, the results show that diverse genetic causes of intellectual disability and autism may converge on a limited number of pathophysiological processes that can be ameliorated pharmacologically. Thus, a treatment for one genetically defined disorder may be beneficial for another with phenotypic overlap. Second, R-baclofen has a well-understood safety profile and is well-tolerated in children and adults, making clinical studies feasible in the near future.”</p>
<p>Growing knowledge about genetic mutations in people with autism is enabling researchers to evaluate hypothesis-driven pharmacological interventions in terms of&nbsp;their ability to reverse the biological and behavioral consequence of specific mutations that cause autism. One of the genes in the 16p11.2 deletion region regulates the inhibitory neurotransmitter GABA. Researchers tested the hypothesis that increasing GABA neurotransmission using R-baclofen, which binds to GABA-B receptors, could reverse analogous behavioral symptoms in a mouse model of 16p11.2 deletion syndrome.</p>
<p>In the current paper, researchers report the results of animal model studies using two independently derived lines of mutant mice, each missing a chromosomal region analogous to human 16p11.2. Normal and mutant mice at both labs received R-baclofen in their drinking water and were then tested on three tasks: novel object recognition, object location memory, and contextual recognition learning and memory. R-baclofen-treated mutant mice scored better on each cognitive task than untreated mutant mice. R-baclofen also increased scores on a standard assay of mouse social behaviors — male-female reciprocal social interactions — in the 16p11.2 mutant mice.</p>
<p>This study suggests that R-baclofen should be explored for the treatment of cognitive phenotypes in affected humans.</p>
In searching for a potential therapeutic for autism spectrum disorder, researchers have found that R-baclofen reverses cognitive deficits and improves social interactions in two lines of 16p11.2 deletion mice.Image courtesy of the Picower Institute for Learning and Memory.School of Science, Autism, Brain and cognitive sciences, Genetics, Health sciences and technology, Picower Institute, ResearchMIT research laid groundwork for promising Alzheimer’s-fighting drinkhttp://news.mit.edu/2017/mit-research-laid-groundwork-promising-alzheimers-fighting-drink-1030
Studies by Richard Wurtman have led to development of nutrient mix shown to slow cognitive impairment in early stages of the disease.Mon, 30 Oct 2017 19:30:00 -0400Rob Matheson | MIT News Officehttp://news.mit.edu/2017/mit-research-laid-groundwork-promising-alzheimers-fighting-drink-1030<p>Much of Professor Emeritus Richard Wurtman’s career in MIT’s Department of Brain and Cognitive Sciences revolved around developing new treatments for diseases and conditions by modifying chemicals produced in the brain.</p>
<p>Since coming to MIT in 1970, Wurtman,&nbsp;the Cecil H. Green Distinguished Professor Emeritus, and his research group have generated more than 1,000 research articles and 200 patents, laying the groundwork for numerous successful medical products.</p>
<p>For example, the 3 million people in the United States who take melatonin as a sleeping aid are using a product that derives from research in Wurtman’s lab. “I’m very interested in using basic knowledge to ameliorate the human condition, to make living better,” says Wurtman, who is also a medical doctor.</p>
<p>Now a nutrient mix based on essential research contributions by Wurtman has shown promise in treating the early stages of Alzheimer’s disease, according to a new clinical trial funded by the European Union.</p>
<p>In the mid-2000s, Wurtman developed a nutrient cocktail aimed at treating what he considers “the root cause” of Alzheimer’s: loss of brain synapses. The mixture increases production of new synapses and restores connectivity between brain regions, improving memory and other cognitive functions. A French company then combined this research with a multinutrient it was developing along with the <a href="http://www.lipididiet.eu/index.php?id=6640">LipiDiDiet consortium</a> —&nbsp;a European collaboration of 16 universities and research centers — to create a drink, called Souvenaid, for Alzheimer’s patients.</p>
<p>Over the years, Souvenaid has been the focus of <a href="http://news.mit.edu/2008/alzheimers-treatment-shows-promise-clinical-trials">several</a> <a href="http://news.mit.edu/2010/fighting-alzheimers">clinical</a> <a href="http://news.mit.edu/2012/alzheimers-nutrient-mixture-0709">trials</a> to validate its efficacy. The mixture is not yet available in the United States, but it is being sold as a “medical food” — a category of regulated and safe foods that are designed for dietary management of diseases — in a number of countries across the globe.</p>
<p>In the new clinical trial, published in today’s issue of <em>Lancet Neurology</em>, patients with prodromal Alzheimer’s — the predementia stage of Alzheimer’s with mild symptoms — were given either Souvenaid or a placebo. Compared to people who drank the placebo, patients who drank Souvenaid throughout the trial showed less worsening in everyday cognitive and functional performance and significantly less atrophy of the hippocampus, a region that loses tissue early in Alzheimer’s.</p>
<p>“It feels like science-fiction, where you can take a drink of Souvenaid and you get more synapses … for improved cognitive function,” Wurtman says. “But it works.”</p>
<p>The co-authors of the study are from the University of Eastern Finland, Kuopio University Hospital, Karolinska Institutet and Karolinska University Hospital, the University of Maastricht, the VU University Medical Centre, Pentara Corporation, the University of Gothenburg, Sahlgrenska University Hospital, Saarland University, and the LipiDiDiet study group.</p>
<p>Other results of the study were mixed. The researchers say larger studies, involving more patients over a longer period of time, are still needed to determine if Souvenaid can actually slow progression of Alzheimer’s.</p>
<p><strong>Making Souvenaid</strong></p>
<p>Souvenaid’s popularity may be on the rise today, but the product would not be possible without years of MIT research, Wurtman says.</p>
<p>In the mid-2000s, Wurtman’s research led him to seek the mechanisms behind the body’s production of phosphatides, a class of lipids that, along with proteins, form biological membranes. Production of these phosphatides, Wurtman discovered, depends on a set of nutrient precursors.</p>
<p>Specifically, Wurtman homed in on three naturally occurring dietary compounds: choline, uridine, and the omega-3 fatty acid DHA. Choline is found in meats, nuts, and eggs. Fish, flaxseeds, and certain meats contain omega-3 fatty acids. Uridine is mostly produced in the liver.</p>
<p>All those compounds taken simultaneously boost production of phosphatides, encouraging membrane development, which is critical in creating new synapses. Knowing that Alzheimer’s-affected brains continuously lose synapses, Wurtman patented the work through MIT’s Technology Licensing Office in hopes of using some version of the cocktail to treat Alzheimer’s and any disease that leads to loss of synapses.</p>
<p>Then, in 2003, Wurtman presented the work at a meeting in Europe. Attending the event was a representative from Nutricia — a unit of Danone, a French company known as Dannon in the United States — which was experienced in making medical foods. Wurtman was invited to the company’s headquarters, where a deal was hashed out to combine Wurtman’s findings with a multinutrient the company was working on to create a new treatment for Alzheimer’s.</p>
<p>By 2008, Danone had licensed the patent and Souvenaid was already a product. But Wurtman and several graduate students continued basic research behind Souvenaid, which gave the product a boost. “We were much more able to do the basic research at MIT,” Wurtman says. “As soon as we found something in the research, we’d patent it. We never had the lag time. If you work in entrepreneurship and innovation that lag time could be the downfall of a prospective product.”</p>
<p>Among the group’s key discoveries was the finding that Souvenaid boosted the number of structures called dendritic spines, found in brain cells. When spines from one neuron contact another, a synapse is formed.</p>
<p>A 2010 <a href="http://news.mit.edu/2010/fighting-alzheimers">study</a> detailing those findings in<em> Alzheimer’s and Dementia</em> indicated that Souvenaid improved verbal memory in patients with mild Alzheimer’s. A 2012 <a href="http://news.mit.edu/2012/alzheimers-nutrient-mixture-0709">study</a> published in the <em>Journal of Alzheimer’s Disease</em> confirmed and expanded these findings. Over six months, patients with mild Alzheimer’s were given Souvenaid or a placebo. Patients taking the placebo deteriorated in their verbal-memory performance in the final three months of the study, while the Souvenaid patients continued to improve. Both trials were conducted by Philip Scheltens of the Alzheimer Center of the VU University Medical Centre in Amsterdam.</p>
<p><strong>Future of Souvenaid</strong></p>
<p>In the new clinical trial by the LipiDiDiet consortium, researchers conducted a 24-month trial, where more than 300 patients with prodromal Alzheimer’s were randomly assigned Souvenaid or a placebo. The patients taking Souvenaid showed about 45 percent less cognitive decline than people taking the placebo, according to a measure known as the clinical dementia rating sum of boxes.</p>
<p>But the surprising finding, Wurtman says, is that the patients taking Souvenaid showed a substantial reduction in the loss of hippocampal volume. In early stages of Alzheimer’s, the hippocampus — which plays an important role in memory — shrinks as tissue is destroyed. But rates of deterioration for those taking Souvenaid were about 26 percent lower than in the control group.</p>
<p>“That’s remarkable,” Wurtman says. “I never would have guessed that something like that could happen. But if you suppress the loss of the hippocampus, it makes sense that you’d have better retention of cognitive function.”</p>
<p>The results indicate that Souvenaid may be able to slow or stop full progression of very early Alzheimer’s into full-blown Alzheimer’s, Wurtman says.</p>
<p>With this new study, Wurtman has high hopes for Souvenaid. First, he says the findings could encourage more researchers to view synapse restoration as a treatment for Alzheimer’s, which isn’t a popular area of study. Most research today, he says, focuses on reducing the accumulation of amyloid plaques or minimizing damage caused by toxic metabolites in Alzheimer’s-affected brains.</p>
<p>“Everyone who writes about Alzheimer’s knows there’s a synapse deficiency, and this impairs connections between brain regions,” he says. “Even if the amyloid or another problem gets solved, one way or another, you’ll have to replace these synapses.”</p>
<p>Wurtman also hopes the study will “catalyze the rapid appearance of Souvenaid in the American market” and that the drink will become a very early treatment for suspected Alzheimer’s patients. Several potential biomarkers are being studied as indicators of early Alzheimer’s. But detecting these biomarkers is of little use if nothing can be done about the disease at that point, Wurtman says. With Souvenaid, he says, that can change.</p>
<p>“Most people don’t do a biomarker test, because … there’s been nowhere to go from there. Now, it will be possible, I believe, for a doctor to tell a patient that, even though they have early Alzheimer’s, they can take Souvenaid chronically to suppress the development of the disease.”</p>
A nutrient mix based in part on research from the lab of MIT Professor Emeritus Richard Wurtman has shown promise in treating the early stages of Alzheimer’s disease.
Image: Donna CoveneyResearch, Innovation and Entrepreneurship (I&E), Brain and cognitive sciences, School of Science, Biology, Food, Health, Medicine, Disease, Alzheimer's, Memory, Drug development, Drug deliveryResearchers engineer CRISPR to edit single RNA letters in human cellshttp://news.mit.edu/2017/researchers-engineer-crispr-edit-single-rna-letters-human-cells-1015
“REPAIR” system edits RNA, rather than DNA; has potential to treat diseases without permanently affecting the genome. Wed, 25 Oct 2017 14:00:00 -0400Broad Institutehttp://news.mit.edu/2017/researchers-engineer-crispr-edit-single-rna-letters-human-cells-1015<p>The Broad Institute and MIT scientists who first harnessed CRISPR for mammalian genome editing have engineered a new molecular system for efficiently editing RNA in human cells. RNA editing, which can alter gene products without making changes to the genome, has profound potential as a tool for both research and disease treatment.</p>
<p>In a paper published today in <em>Science</em>, senior author Feng Zhang and his team describe the new CRISPR-based system, called RNA Editing for Programmable A to I Replacement, or “REPAIR.” The system can change single RNA nucleotides in mammalian cells in a programmable and precise fashion. REPAIR has the ability to reverse disease-causing mutations at the RNA level, as well as other potential therapeutic and basic science applications.</p>
<p>“The ability to correct disease-causing mutations is one of the primary goals of genome editing,” says Zhang, a core institute member of the Broad Institute, an investigator at the McGovern Institute, and the James and Patricia Poitras ’63 Professor in Neuroscience and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT. “So far, we’ve gotten very good at inactivating genes, but actually recovering lost protein function is much more challenging. This new ability to edit RNA opens up more potential opportunities to recover that function and treat many diseases, in almost any kind of cell.”</p>
<p>REPAIR has the ability to target individual RNA letters, or nucleosides, switching adenosines to inosines (read as guanosines by the cell). These letters are involved in single-base changes known to regularly cause disease in humans. In human disease, a mutation from G to A is extremely common; these alterations have been implicated in, for example, cases of focal epilepsy, Duchenne muscular dystrophy, and Parkinson’s disease. REPAIR has the ability to reverse the impact of any pathogenic G-to-A mutation regardless of its surrounding nucleotide sequence, with the potential to operate in any cell type.</p>
<p>Unlike the permanent changes to the genome required for DNA editing, RNA editing offers a safer, more flexible way to make corrections in the cell. “REPAIR can fix mutations without tampering with the genome, and because RNA naturally degrades, it’s a potentially reversible fix,” explains co-first author David Cox, a graduate student in Zhang’s lab.</p>
<p>To create REPAIR, the researchers systematically profiled the CRISPR-Cas13 enzyme family for potential “editor” candidates (unlike Cas9, the Cas13 proteins target and cut RNA). They selected an enzyme from <em>Prevotella</em> bacteria, called PspCas13b, which was the most effective at inactivating RNA. The team engineered a deactivated variant of PspCas13b that still binds to specific stretches of RNA but lacks its “scissor-like” activity, and fused it to a protein called ADAR2, which changes the letters A to I in RNA transcripts.</p>
<p>In REPAIR, the deactivated Cas13b enzyme seeks out a target sequence of RNA, and the ADAR2 element performs the base conversion without cutting the transcript or relying on any of the cell’s native machinery.</p>
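<p>The net effect of REPAIR’s edit can be pictured as a single-letter substitution: the targeted adenosine (A) becomes inosine (I), which the cell’s machinery decodes as guanosine (G). The sketch below is a hypothetical illustration of that outcome on a short transcript, not the REPAIR system or any Broad Institute software; the function name <code>repair_edit</code> is invented for this example.</p>

```python
# Hypothetical sketch: the net outcome of an A-to-I RNA edit.
# Inosine (I) is read as guanosine (G) by the cell, so the edited
# transcript is shown here with G at the targeted position.

def repair_edit(rna, position):
    """Return the transcript after an A-to-I edit at `position` (0-based),
    written as the ribosome would decode it (I read as G)."""
    if rna[position] != "A":
        raise ValueError("REPAIR targets adenosines only")
    return rna[:position] + "G" + rna[position + 1:]

# A pathogenic G-to-A change at position 3, reverted at the RNA level:
mutant = "AUGAAGCUU"                 # codon 2 reads AAG (lysine)
corrected = repair_edit(mutant, 3)   # "AUGGAGCUU": codon 2 reads GAG (glutamate)
```

<p>Because the substitution happens on the transcript rather than the gene, the DNA sequence is untouched, which is why the correction is potentially reversible as the edited RNA degrades.</p>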
<p>The team further modified the editing system to improve its specificity, reducing detectable off-target edits from 18,385 to only 20 in the whole transcriptome. The upgraded incarnation, REPAIRv2, consistently achieved the desired edit in 20 to 40 percent — and up to 51 percent — of a targeted RNA without signs of significant off-target activity. “The success we had engineering this system is encouraging, and there are clear signs REPAIRv2 can be evolved even further for more robust activity while still maintaining specificity,” says Omar Abudayyeh, co-first author and a graduate student in Zhang’s lab. Cox and Abudayyeh are both students in the Harvard-MIT Program in Health Sciences and Technology.</p>
<p>To demonstrate REPAIR’s therapeutic potential, the team synthesized the pathogenic mutations that cause Fanconi anemia and X-linked nephrogenic diabetes insipidus, introduced them into human cells, and successfully corrected these mutations at the RNA level. To push the therapeutic prospects further, the team plans to improve REPAIRv2’s efficiency and to package it into a delivery system appropriate for introducing REPAIRv2 into specific tissues in animal models.</p>
<p>The researchers are also working on additional tools for other types of nucleotide conversions. “There’s immense natural diversity in these enzymes,” says co-first author Jonathan Gootenberg, a graduate student in both Zhang’s lab and the lab of Broad core institute member Aviv Regev. “We’re always looking to harness the power of nature to carry out these changes.”</p>
<p>Zhang, along with the Broad Institute and MIT, plans to share the REPAIR system widely. As with earlier CRISPR tools, the groups will make this technology freely available for academic research via the <a href="https://www.addgene.org/Feng_Zhang/">Zhang lab’s page</a> on the plasmid-sharing website Addgene, through which the Zhang lab has already shared reagents more than 42,000 times with researchers at more than 2,200 labs in 61 countries, accelerating research around the world.</p>
<p>This research was funded, in part, by the National Institutes of Health and the Poitras Center for Affective Disorders Research.</p>
A new “REPAIR” system edits RNA, rather than DNA.
Image: Broad Communications, Susanna M. HamiltonCRISPR, Genome editing, DNA, RNA, Genetics, Research, Biological engineering, Broad Institute, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, Harvard-MIT Health Sciences and TechnologyDepartment of Biology hosts 2017 Massachusetts Junior Academy of Science Symposiumhttp://news.mit.edu/2017/mit-biology-hosts-massachusetts-junior-academy-science-symposium-1024
High school students present research projects to build communication skills while earning membership to the American Junior Academy of Science.Tue, 24 Oct 2017 14:50:00 -0400Raleigh McElvery | Department of Biologyhttp://news.mit.edu/2017/mit-biology-hosts-massachusetts-junior-academy-science-symposium-1024<p>On Oct. 14, 22 high school students from across the state presented their research projects at the annual <a href="https://sites.google.com/site/massjraos/home" target="_blank">Massachusetts Junior Academy of Science</a>&nbsp;(MassJAS) Symposium.</p>
<p>The talks were split into two concurrent sessions based on subject: biological and environmental sciences;&nbsp;and engineering, chemistry, mathematics,&nbsp;and physics.&nbsp;Participants were selected based on merit and ranking in this year’s Massachusetts State Science and&nbsp;Engineering Fair.</p>
<p>Judges nominated three students from the biology session and four from physics and engineering as&nbsp;<a href="https://www.academiesofscience.org/" target="_blank">American Junior Academy of Science</a> (AJAS) delegates. Delegates are invited to attend the AJAS Convention, which will be held in Austin, Texas, this coming February. The AJAS is a national honor society that meets annually in conjunction with the American Association for the Advancement of Science — the world’s largest science organization and the publisher of&nbsp;<em>Science</em>. All participants were inducted as AJAS fellows.</p>
<p>The&nbsp;sessions took place in adjacent lecture rooms in Building 68. The event was organized by Mandana Sassanfar, director of diversity and science outreach&nbsp;for MIT’s Department of&nbsp;Biology&nbsp;and Department of Brain and Cognitive Sciences, as well as the director of MassJAS. At the AJAS Convention, delegates tour local research institutions, share their projects with others in the field, and attend conference sessions.</p>
<p>At this year’s MassJAS symposium, the jury for the biological and environmental science session was composed of three graduate students and a postdoc&nbsp;from the MIT Department of Biology.</p>
<p>“I really enjoyed hearing how these projects came to be, and what inspired students to ask their respective research questions,” said Sora Kim, a third-year graduate student in Tania Baker’s lab and a returning judge. “Some students did these projects at home, while others had collaborations with researchers at local universities. In many cases, these were their first science projects, so being able to understand their own projects and also convey their ideas to a more general audience is really important.”</p>
<p>First-time judge Summer Morrill, a third-year graduate student in Angelika Amon’s lab, agreed that learning to present ideas clearly in ways that inspire others is key to the scientific process. “I was excited to hear what people at the high school level think is important in science, because they’re the next generation of scientists,” she said.</p>
<p>Each participant had ten minutes to present, followed by an audience question-and-answer session. The biology-related talks ranged from antimicrobial resistance to gene-editing techniques to the effects of Wi-Fi&nbsp;router radiation.</p>
<p>Joshua Powers and Natalia Huynh, both juniors at the Everett High School STEM Academy, presented first, describing the results of their summer research project at MIT as part of the <a href="http://www.prweb.com/releases/2017/09/prweb14663854.htm" target="_blank">LEAH Knox Scholars</a> pilot program. Powers and Huynh pooled their findings, isolating and characterizing bacterial specimens from the Charles River.</p>
<p>“We’re friends and we both go to the same high school, so it was easy to collaborate with both our ideas and our data,” said Powers. “The LEAH Knox Scholars program was intense in that we had the chance to perform more advanced procedures with equipment we’ve never used before in school.”</p>
<p>Huynh also enjoyed tackling larger research questions with more refined tools, adding, “We practiced explaining our results this summer, so today’s presentation was similar to what we’d already done — but a little more intense because it was a competition.”</p>
<p>Nancy Cianchetta, who teaches biotechnology at Everett High School and serves as the coordinator for the STEM Academy, said Powers and Huynh will be part of the very first class to graduate from the Academy. She and many of her students have participated in <a href="https://biology.mit.edu/about/diversity" target="_blank">MIT biology outreach programs</a> over the years.</p>
<p>“I’ve taken my classes here for field trips and career exploration days, and many of my students come for the spring lecture series at the <a href="http://wi.mit.edu/programs/highschool" target="_blank">Whitehead Institute</a>,” she said. “The kids get so excited to come to MIT.”</p>
<p>While some participants shared data they’d only just begun to analyze, others had been tackling the same research question for over a year.</p>
<p>Evan Mizerak, a returning MassJAS Symposium winner and senior at Wachusett Regional High School, has spent the past two-and-a-half years collaborating with researchers at the University of Massachusetts Medical School on his project related to heritable infertility in fruit flies.</p>
<p>Mizerak attended last year’s AJAS Convention&nbsp;in Boston, as well as the MIT-sponsored <a href="http://news.mit.edu/2017/top-high-school-researchers-across-nation-meet-at-mit-0308" target="_self">Breakfast with Scientists</a>. This year,&nbsp;delegates met with esteemed faculty, including Institute Professor Phillip Sharp, the winner of the 1993 Nobel Prize in physiology or medicine and a member of the Department of Biology and the Koch Institute for Integrative Cancer Research.</p>
<p>“The AJAS Convention was incredible last year, because we had the chance to meet researchers from around the country — not just in and around Massachusetts,” he said. “At the Breakfast with Scientists, we met with Nobel Prize winners. Being introduced to people I consider celebrities was just amazing.”</p>
<p>“You wouldn’t expect anyone that famous to be interested in our work,” added Emma Kelly, a junior from Newton Country Day School and also a returning presenter. “But these professionals were genuinely curious, and often gave us ideas for new projects and things like that. It was such an incredible opportunity.”</p>
The 2017 MassJAS biological and environmental sciences presenters practiced communication skills and vied for the opportunity to become delegates at the AJAS annual meeting.
Photo: Raleigh McElvery
School of Science, Biology, Special events and guest speakers, Brain and cognitive sciences, Whitehead Institute, Koch Institute
MIT neuroscientists build case for new theory of memory formation
http://news.mit.edu/2017/neuroscientists-build-case-new-theory-memory-formation-1023
Existence of “silent engrams” suggests that existing models of memory formation should be revised.
Mon, 23 Oct 2017 14:59:59 -0400
Anne Trafton | MIT News Office
http://news.mit.edu/2017/neuroscientists-build-case-new-theory-memory-formation-1023
<p>Learning and memory are generally thought to be composed of three major steps: encoding events into the brain network, storing the encoded information, and later retrieving it for recall.</p>
<p>Two years ago, MIT neuroscientists discovered that in certain types of retrograde amnesia, memories of a particular event could be stored in the brain even though they could not be retrieved through natural recall cues. This phenomenon suggests that existing models of memory formation need to be revised, as the researchers propose in a new paper in which they further detail how these “silent engrams” are formed and re-activated.</p>
<p>The researchers believe their findings offer evidence that memory storage does not rely on the strengthening of connections, or “synapses,” between memory cells, as has long been thought. Instead, a pattern of connections that forms between these cells during the first few minutes after an event occurs is sufficient to store a memory.</p>
<p>“One of our main conclusions in this study is that a specific memory is stored in a specific pattern of connectivity between engram cell ensembles that lie along an anatomical pathway. This conclusion is provocative because the dogma has been that a memory is instead stored by synaptic strength,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, the director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory, and the study’s senior author.</p>
<p>The researchers also showed that even though memories held by silent engrams cannot be naturally recalled, the memories persist for at least a week and can be “awakened” days later by treating cells with a protein that stimulates synapse formation.</p>
<p>Dheeraj Roy, a recent MIT PhD recipient, is the lead author of the paper, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of Oct. 23. Other authors are MIT postdoc Shruti Muralidhar and technical associate Lillian Smith.</p>
<p><strong>Silent memories</strong></p>
<p>Neuroscientists have long believed that memories of events are stored when synaptic connections, which allow neurons to communicate with each other, are strengthened. Previous studies have found that if synthesis of certain cellular proteins is blocked in mice immediately after an event occurs, the mice will have no long-term memory of the event.</p>
<p>However, in a 2015 paper, <a href="http://news.mit.edu/2015/optogenetics-find-lost-memories-0528">Tonegawa and his colleagues showed</a> for the first time that memories could be stored even when synthesis of the cellular proteins is blocked. They found that while the mice could not recall those memories in response to natural cues, such as being placed in the cage where a fearful event took place, the memories were still there and could be artificially retrieved using a technique known as optogenetics.</p>
<p>The researchers have dubbed these memory cells “silent engrams,” and they have since found that these engrams can also be formed in other situations. In a study of mice with symptoms that mimic early Alzheimer’s disease, the <a href="http://news.mit.edu/2016/retrieve-missing-memories-early-alzheimers-symptoms-0316">researchers found</a> that while the mice had trouble recalling memories, those memories still existed and could be optogenetically retrieved.</p>
<p>In a more recent study of a process called systems consolidation of memory, the <a href="http://news.mit.edu/2017/neuroscientists-identify-brain-circuit-necessary-memory-formation-0406">researchers found engrams</a> in the hippocampus and the prefrontal cortex that encoded the same memory. However, the prefrontal cortex engrams were silent for about two weeks after the memory was initially encoded, while the hippocampal engrams were active right away. Over time, the memory in the prefrontal cortex became active, while the hippocampal engram slowly became silent.</p>
<p>In their new <em>PNAS</em> study, the researchers investigated further how these silent engrams are formed, how long they last, and how they can be re-activated.</p>
<p>As in their original 2015 study, the researchers trained mice to fear being placed in a certain cage by delivering a mild foot shock. After this training, the mice freeze when placed back in that cage. As the mice were trained, their memory cells were labeled with a light-sensitive protein that allows the cells to be re-activated with light. The researchers also inhibited the synthesis of cellular proteins immediately after the training occurred.</p>
<p>They found that after the training, the mice did not react when placed back in the cage where the training took place. However, the mice did freeze when the memory cells were activated with laser light while the animals were in a cage that should not have had any fearful associations. These silent memories could be activated by laser light for up to eight days after the original training.</p>
<p><strong>Making connections</strong></p>
<p>The findings offer support for Tonegawa’s new hypothesis that the strengthening of synaptic connections, while necessary for a memory to be initially encoded, is not necessary for its subsequent long-term storage. Instead, he proposes that memories are stored in the specific pattern of connections formed between engram cell ensembles. These connections, which form very rapidly during encoding, are distinct from the synaptic strengthening that occurs later (within a few hours of the event) with the help of protein synthesis.</p>
<p>“What we are saying is that even without new cellular protein synthesis, once a new connection is made, or a pre-existing connection is strengthened during encoding, that new pattern of connections is maintained,” Tonegawa says. “Even if you cannot induce natural memory recall, the memory information is still there.”</p>
<p>This raised a question about the purpose of the post-encoding protein synthesis. Considering that silent engrams are not retrieved by natural cues, the researchers believe the primary purpose of the protein synthesis is to enable natural recall cues to do their job efficiently.</p>
<p>The researchers also tried to reactivate the silent engrams by treating the mice with a protein called PAK1, which promotes the formation of synapses. They found that this treatment, given two days after the original event took place, was enough to grow new synapses between engram cells. A few days after the treatment, mice whose ability to recall the memory had been blocked initially would freeze after being placed in the cage where the training took place. Furthermore, their reaction was just as strong as that of mice whose memories had been formed with no interference.</p>
<p>Sheena Josselyn, an associate professor of psychology and physiology at the University of Toronto, says the findings run counter to the longstanding idea that memory formation involves strengthening of synapses between neurons and that this process requires protein synthesis.</p>
<p>“They showed that a memory formed during protein-synthesis inhibition may be artificially (but not naturally) recalled. That is, the memory is still retained in the brain without protein synthesis, but this memory cannot be accessed under normal conditions, suggesting that spines may not be the key keepers of information,” says Josselyn, who was not involved in the research. “The findings are controversial, but many paradigm-shifting papers are.”</p>
<p>Along with the researchers’ previous findings on silent engrams in early Alzheimer’s disease, this study suggests that re-activating certain synapses could help restore some memory recall function in patients with early stage Alzheimer’s disease, Roy says.</p>
<p>The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.</p>
The green staining shows hippocampal CA1 engram cells, which store a long-term fear memory and have the light-sensitive optogenetic protein channelrhodopsin-2. The blue staining shows all cells in the dorsal hippocampus brain region, including non-engram cells (blue color staining only).
Image: Dheeraj Roy, Tonegawa Lab/MIT
Research, Memory, Brain and cognitive sciences, Picower Institute, School of Science, Neuroscience, Optogenetics, Alzheimer's
How we determine who’s to blame
http://news.mit.edu/2017/how-we-determine-blame-1017
Before assigning responsibility, our minds simulate alternative outcomes, study shows.
Tue, 17 Oct 2017 11:59:59 -0400
Anne Trafton | MIT News Office
http://news.mit.edu/2017/how-we-determine-blame-1017
<p>How do people assign a cause to events they witness? Some philosophers have suggested that people determine responsibility for a particular outcome by imagining what would have happened if a suspected cause had not intervened.</p>
<p>This kind of reasoning, known as counterfactual simulation, is believed to occur in many situations. For example, soccer referees deciding whether a player should be credited with an “own goal” — a goal accidentally scored for the opposing team — must try to determine what would have happened had the player not touched the ball.</p>
<p>This process can be conscious, as in the soccer example, or unconscious, so that we are not even aware we are doing it. Using technology that tracks eye movements, cognitive scientists at MIT have now obtained the first direct evidence that people unconsciously use counterfactual simulation to imagine how a situation could have played out differently.</p>
<p>“This is the first time that we or anybody have been able to see those simulations happening online, to count how many a person is making, and show the correlation between those simulations and their judgments,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a member of MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the new study.</p>
<p>Tobias Gerstenberg, a postdoc at MIT who will be joining Stanford’s Psychology Department as an assistant professor next year, is the lead author of the paper, which appears in the Oct. 17 issue of <em>Psychological Science</em>. Other authors of the paper are MIT postdoc Matthew Peterson, Stanford University Associate Professor Noah Goodman, and University College London Professor David Lagnado.</p>
<p><strong>Follow the ball</strong></p>
<p>Until now, studies of counterfactual simulation could only use reports from people describing how they made judgments about responsibility, which offered only indirect evidence of how their minds were working.</p>
<p>Gerstenberg, Tenenbaum, and their colleagues set out to find more direct evidence by tracking people’s eye movements as they watched two billiard balls collide. The researchers created 18 videos showing different possible outcomes of the collisions. In some cases, the collision knocked one of the balls through a gate; in others, it prevented the ball from doing so.</p>
<p>Before watching the videos, some participants were told that they would be asked to rate how strongly they agreed with statements related to ball A’s effect on ball B, such as, “Ball A caused ball B to go through the gate.” Other participants were asked simply what the outcome of the collision was.&nbsp;</p>
<p>As the subjects watched the videos, the researchers were able to track their eye movements using an infrared light that reflects off the pupil and reveals where the eye is looking. This allowed the researchers, for the first time, to gain a window into how the mind imagines possible outcomes that did not occur.</p>
<p>“What’s really cool about eye tracking is it lets you see things that you’re not consciously aware of,” Tenenbaum says. “When psychologists and philosophers have proposed the idea of counterfactual simulation, they haven’t necessarily meant that you do this consciously. It’s something going on behind the surface, and eye tracking is able to reveal that.”</p>
<p>The researchers found that when participants were asked questions about ball A’s effect on the path of ball B, their eyes followed the course that ball B would have taken had ball A not interfered. Furthermore, the more uncertainty there was as to whether ball A had an effect on the outcome, the more often participants looked toward ball B’s imaginary trajectory.</p>
<p>“It’s in the close cases where you see the most counterfactual looks. They’re using those looks to resolve the uncertainty,” Tenenbaum says.</p>
<p>Participants who were asked only what the actual outcome had been did not perform the same eye movements along ball B’s alternative pathway.</p>
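The counterfactual comparison at the heart of this account can be sketched in a few lines of code. Everything below — the ball dynamics, the collision impulse, the gate geometry — is an invented toy, not the study's actual stimuli; it only illustrates the logic of judging causation by re-running the scene without the candidate cause:

```python
def roll_ball_b(a_present: bool, steps: int = 100):
    """Toy kinematics: ball B drifts toward a gate on the far wall.
    If ball A is present, it deflects B with a sideways impulse mid-flight.
    All numbers here are illustrative, not the experiment's parameters."""
    x, y = 0.0, 0.0
    vx, vy = 0.1, 0.0
    for t in range(steps):
        if a_present and t == 50:
            vy += 0.08  # made-up sideways kick from the collision with A
        x += vx
        y += vy
    return x, y

GATE_HALF_WIDTH = 1.0  # gate spans y in [-1, 1] (invented geometry)

_, actual_y = roll_ball_b(a_present=True)           # what really happened
_, counterfactual_y = roll_ball_b(a_present=False)  # the world without A

actual_through = abs(actual_y) <= GATE_HALF_WIDTH
counterfactual_through = abs(counterfactual_y) <= GATE_HALF_WIDTH

# A is judged causal when the outcome flips between the two rollouts
a_caused_outcome = actual_through != counterfactual_through
```

In this toy scene, ball B misses the gate when A deflects it but would have gone through otherwise, so the counterfactual comparison assigns the outcome to A — the same contrast the participants' eyes appear to trace along B's imaginary trajectory.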
<p>“The idea that causality is based on counterfactual thinking is an idea that has been around for a long time, but direct evidence is largely lacking,” says Phillip Wolff, an associate professor of psychology at Emory University, who was not involved in the research. “This study offers more direct evidence for that view.”</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/eye_tracking_causality2_0.gif" style="width: 595px; height: 217px;" /></p>
<p><em><span style="font-size:10px;">In this video, two participants' eye-movements are tracked while they watch a video clip. The blue dot indicates where each participant is looking on the screen. The participant on the left was asked to judge whether they thought that ball B went through the middle of the gate. Participants asked this question mostly looked at the balls and tried to predict where ball B would go. The participant on the right was asked to judge whether ball A caused ball B to go through the gate. Participants asked this question tried to simulate where ball B would have gone if ball A hadn't been present in the scene. (Image: Tobias Gerstenberg)</span></em></p>
<p><strong>How people think</strong></p>
<p>The researchers are now using this approach to study more complex situations in which people use counterfactual simulation to make judgments of causality.</p>
<p>“We think this process of counterfactual simulation is really pervasive,” Gerstenberg says. “In many cases it may not be supported by eye movements, because there are many kinds of abstract counterfactual thinking that we just do in our mind. But the billiard-ball collisions lead to a particular kind of counterfactual simulation where we can see it.”</p>
<p>One example the researchers are studying is the following: Imagine ball C is headed for the gate, while balls A and B each head toward C. Either one could knock C off course, but A gets there first. Is B off the hook, or should it still bear some responsibility for the outcome?</p>
<p>“Part of what we are trying to do with this work is get a little bit more clarity on how people deal with these complex cases. In an ideal world, the work we’re doing can inform the notions of causality that are used in the law,” Gerstenberg says. “There is quite a bit of interaction between computer science, psychology, and legal science. We’re all in the same game of trying to understand how people think about causation.”</p>
<p>The research was funded by the National Science Foundation through MIT’s Center for Brains, Minds and Machines, and by the Office of Naval Research.</p>
Research, Brain and cognitive sciences, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering, School of Science, Center for Brains Minds and Machines
Cellular reprogramming implicated in model of Alzheimer's disease
http://news.mit.edu/2017/cellular-reprograming-implicated-in-model-of-alzheimers-disease-1012
Neuroscientists identify genetic changes in microglia in a mouse model of neurodegeneration and Alzheimer's disease.
Thu, 12 Oct 2017 17:15:01 -0400
Picower Institute for Learning and Memory
http://news.mit.edu/2017/cellular-reprograming-implicated-in-model-of-alzheimers-disease-1012
<p>Microglia, immune cells that act as the central nervous system’s damage sensors, have recently been implicated in Alzheimer’s disease.</p>
<p>The cells, a type of macrophage that clear away dead cells from the brain and help to maintain healthy neuronal wiring, were found to be entangled with toxic amyloid beta plaques in tissue taken from those suffering from the disease.</p>
<p>Researchers had previously believed that the cells help to protect the brain from neurodegeneration by digesting the amyloid plaques, but it now appears the immune system may play a role in the progression of Alzheimer’s disease.</p>
<p>However, exactly what this role is, and how microglia are transformed from their protective state in healthy brains into a harmful state as the disease progresses, remains unclear.</p>
<p>To better understand microglia and how they respond in this way, a team led by MIT Professor <a href="http://picower.mit.edu/faculty/li-huei-tsai" target="_blank">Li-Huei Tsai</a>, director of MIT’s <a href="http://picower.mit.edu/" target="_blank">Picower Institute for Learning and Memory</a>, used single-cell RNA sequencing to study individual microglia cells. The research, published in the journal <a href="http://www.cell.com/cell-reports/fulltext/S2211-1247(17)31314-1" target="_blank"><em>Cell Reports</em></a>, represents the first time individual microglia have been studied in this way, according to Tsai.</p>
<p>“Right now, microglia are really in the spotlight for a number of neuro-system diseases, including Alzheimer’s, and also schizophrenia,” Tsai says. “However, there are still a lot of very basic things that we don’t know about microglia, such as whether cells in the healthy and diseased brain are all the same, or whether there are different groups, and how they become more inflammatory in the diseased state.”</p>
<p>The researchers used single-cell RNA sequencing to measure active gene expression in individual microglia cells in a mouse model of Alzheimer’s disease previously developed by Tsai’s lab. The mice were engineered so that the gene for a protein called p25 can be overstimulated in the brain, prompting the mice to develop symptoms very similar to those found in Alzheimer’s disease in humans.</p>
<p>The researchers used the technique to study what happens to microglia cells at various points in the progression of neurodegeneration. They measured the cells just before the induction of p25, and then one week, two weeks, and six weeks after p25 induction, according to first author Hansruedi Mathys, a postdoc at the Picower Institute.</p>
<p>“This allowed us to follow how microglia respond to the progression of the disease, and the worsening conditions in the mouse brain,” Mathys says.</p>
<p>Surprisingly, they found that just one week after p25 induction, the microglia had already begun to respond to the threat by proliferating more than cells in the control mice.</p>
<p>“This means that the microglia must be able to sense some sort of perturbation in the mouse brain at a very early time point,” Mathys says.</p>
<p>They then looked at the response from the microglia at two and six weeks after p25 induction, and found that the cells had stopped proliferating, and had instead begun to mount a pronounced immune response. Indeed, they found that hundreds of genes relating to an immune response were activated at these later stages of disease progression.</p>
<p>“The microglia initially transition from a resting state into a proliferation state, after which they transition again into a mainly inflammatory state, with very high expression of genes with a very specific immune system function,” Tsai says.</p>
<p>They also discovered distinct groups of microglia, which were found only in the later stages of neurodegeneration. One type of microglia was found to only express interferon response genes, for example, while another only expressed major histocompatibility complex (MHC) class II genes.</p>
<p>To find out if these distinct types of microglia overlap geographically, the researchers then performed a technique known as immunostaining to investigate where the populations of cells were distributed in the mouse brain. They stained different sections of the mouse hippocampus with antibodies that recognize particular gene products.</p>
<p>They found that the different types of microglia have very different distribution patterns.</p>
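In spirit, the subtype assignment amounts to grouping cells by their dominant expression program. The sketch below uses invented counts for two of the gene modules the paper mentions (interferon response and MHC class II); the real analysis works on genome-wide single-cell RNA-seq profiles, not a toy dictionary:

```python
from collections import defaultdict

# Made-up expression values per cell for two gene modules; the actual study
# measured thousands of genes per cell with single-cell RNA sequencing.
cells = {
    "cell_a": {"interferon_response": 9.2, "mhc_class_ii": 0.4},
    "cell_b": {"interferon_response": 0.3, "mhc_class_ii": 7.8},
    "cell_c": {"interferon_response": 8.1, "mhc_class_ii": 0.9},
    "cell_d": {"interferon_response": 0.5, "mhc_class_ii": 6.5},
}

def dominant_module(profile: dict) -> str:
    """Label a cell by whichever gene module it expresses most strongly."""
    return max(profile, key=profile.get)

# Group cells into candidate subtypes by their dominant program
subtypes = defaultdict(list)
for name, profile in cells.items():
    subtypes[dominant_module(profile)].append(name)
```

Here the four invented cells split cleanly into an interferon-dominant group and an MHC-class-II-dominant group, mirroring (in cartoon form) the distinct late-stage populations the study reports.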
<p>The study is the first to combine single-cell RNA sequencing in microglia with an analysis of gene expression in an inducible model of neurodegeneration, says Tony Wyss-Coray, a professor of neurology and neurological sciences at Stanford University, who was not involved in the research.</p>
<p>“I find it particularly exciting that cells transition through different phases, seemingly following different gene expression programs,” Wyss-Coray says. “This confirms the suspicion that there is a lot of cellular heterogeneity in microglial response to damage, but this paper actually shows this for the first time in a temporal fashion.”</p>
<p>It is also intriguing that microglia show such a strong interferon and MHC class II response, as if reacting to viral infection, he says. “If this response turns out to be key to subsequent neuronal damage, it could provide a novel target for intervention.”</p>
<p>The researchers now hope to investigate whether the genes that are expressed by the microglia, including interferon response genes, might offer a potential new target for drug discovery.</p>
<p>“We’re planning to interfere with type I interferon signaling in this mouse model, to see if it has any beneficial effect on cognition or disease pathology,” says Mathys.</p>
Microglia, the brain's immune cells, scavenge the brain to make repairs and remove potential infectious agents.
Image: Picower Institute for Learning and Memory
Research, Alzheimer's, Brain and cognitive sciences, Picower Institute, Memory, Immunology, School of Science
Brain waves reflect different types of learning
http://news.mit.edu/2017/brain-waves-reflect-different-types-learning-1011
For the first time, researchers have identified neural signatures of explicit and implicit learning.
Wed, 11 Oct 2017 12:00:00 -0400
Becky Ham | MIT News correspondent
http://news.mit.edu/2017/brain-waves-reflect-different-types-learning-1011
<p>Figuring out how to pedal a bike and memorizing the rules of chess require two different types of learning, and now for the first time, researchers have been able to distinguish each type of learning by the brain-wave patterns it produces.</p>
<p>These distinct neural signatures could guide scientists as they study the underlying neurobiology of how we both learn motor skills and work through complex cognitive tasks, says Earl K. Miller, the Picower Professor of Neuroscience at the&nbsp;Picower Institute for Learning and Memory&nbsp;and the Department of Brain and Cognitive Sciences, and senior author of a paper describing the findings in the Oct. 11 edition of <em>Neuron</em>.</p>
<p>When neurons fire, they produce electrical signals that combine to form brain waves that oscillate at different frequencies. “Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.”</p>
<p>The neural signatures could help identify changes in learning strategies that occur in diseases such as Alzheimer’s, with an eye to diagnosing these diseases earlier or enhancing certain types of learning to help patients cope with the disorder, says Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper. Picower Institute research scientist Scott L. Brincat and former MIT postdoc Evan G. Antzoulatos, now at the University of California at Davis, are co-authors.</p>
<p><strong>Explicit versus implicit learning</strong></p>
<p>Scientists used to think all learning was the same, Miller explains, until they learned about patients such as the famous Henry Molaison, or “H.M.,” who developed severe amnesia in 1953 after having part of his brain removed in an operation to control his epileptic seizures. Molaison couldn’t remember eating breakfast a few minutes after the meal, but he was able to learn and retain new motor skills, such as tracing objects like a five-pointed star in a mirror.</p>
<p>“H.M. and other amnesiacs got better at these skills over time, even though they had no memory of doing these things before,” Miller says.</p>
<p>The divide revealed that the brain engages in two types of learning and memory — explicit and implicit.</p>
<p>Explicit learning “is learning that you have conscious awareness of, when you think about what you’re learning and you can articulate what you’ve learned, like memorizing a long passage in a book or learning the steps of a complex game like chess,” Miller explains.</p>
<p>“Implicit learning is the opposite. You might call it motor skill learning or muscle memory, the kind of learning that you don’t have conscious access to, like learning to ride a bike or to juggle,” he adds. “By doing it you get better and better at it, but you can’t really articulate what you’re learning.”</p>
<p>Many tasks, like learning to play a new piece of music, require both kinds of learning, he notes.</p>
<p><strong>Brain waves from earlier studies</strong></p>
<p>When the MIT researchers studied the behavior of animals learning different tasks, they found signs that different tasks might require either explicit or implicit learning. In tasks that required comparing and matching two things, for instance, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. But in a task where the animals learned to move their gaze one direction or another in response to different visual patterns, they only improved their performance in response to correct answers, suggesting implicit learning.</p>
<p>What’s more, the researchers found, these different types of behavior are accompanied by different patterns of brain waves.</p>
<p>During explicit learning tasks, there was an increase in alpha2-beta brain waves (oscillating at 10-30 hertz) following a correct choice, and an increase in delta-theta waves (3-7 hertz) after an incorrect choice. The alpha2-beta waves increased during early learning in explicit tasks, then decreased as learning progressed. The researchers also saw signs of a neural spike in activity that occurs in response to behavioral errors, called event-related negativity, only in the tasks that were thought to require explicit learning.</p>
<p>The increase in alpha2-beta brain waves during explicit learning “could reflect the building of a model of the task,” Miller explains. “And then after the animal learns the task, the alpha-beta rhythms then drop off, because the model is already built.”</p>
<p>By contrast, delta-theta rhythms increased only after correct answers during an implicit learning task, and they decreased as learning progressed. Miller says this pattern could reflect neural “rewiring” that encodes the motor skill during learning.</p>
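The band labels above can be made concrete with a toy spectral analysis. The code below is a minimal, standard-library-only sketch: the synthetic sine waves, sampling rate, and variable names (`explicit_like`, `implicit_like`) are invented stand-ins for real neural recordings, and the naive DFT simply finds a signal's strongest frequency and maps it to the bands reported in the study:

```python
import math

# Frequency bands reported in the study, in hertz
BANDS = {"delta-theta": (3.0, 7.0), "alpha2-beta": (10.0, 30.0)}

def dominant_band(signal, fs):
    """Naive DFT over a real signal: return the band containing the
    strongest frequency component, or None if it falls outside all bands."""
    n = len(signal)
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):  # skip DC; scan up to the Nyquist bin
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_power, best_freq = power, k * fs / n
    for name, (lo, hi) in BANDS.items():
        if lo <= best_freq <= hi:
            return name
    return None

fs = 100.0                            # samples per second (illustrative)
t = [i / fs for i in range(200)]      # two seconds of samples
explicit_like = [math.sin(2 * math.pi * 20.0, ) if False else math.sin(2 * math.pi * 20.0 * x) for x in t]  # 20 Hz tone
implicit_like = [math.sin(2 * math.pi * 5.0 * x) for x in t]                                                # 5 Hz tone
```

With these made-up signals, the 20 Hz tone lands in the alpha2-beta band and the 5 Hz tone in the delta-theta band — the same bookkeeping the researchers apply to recorded oscillations, though their analysis pipeline is far more involved.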
<p>“This showed us that there are different mechanisms at play during explicit versus implicit learning,” he notes.</p>
<p><strong>Future boost to learning</strong></p>
<p>Loonis says the brain wave signatures might be especially useful in shaping how we teach or train a person as they learn a specific task. “If we can detect the kind of learning that’s going on, then we may be able to enhance or provide better feedback for that individual,” he says. “For instance, if they are using implicit learning more, that means they’re more likely relying on positive feedback, and we could modify their learning to take advantage of that.”</p>
<p>The neural signatures could also help detect disorders such as Alzheimer’s disease at an earlier stage, Loonis says. “In Alzheimer’s, a kind of explicit fact learning disappears with dementia, and there can be a reversion to a different kind of implicit learning,” he explains. “Because the one learning system is down, you have to rely on another one.”</p>
<p>Earlier studies have shown that certain parts of the brain such as the hippocampus are more closely related to explicit learning, while areas such as the basal ganglia are more involved in implicit learning. But Miller says that the brain wave study indicates “a lot of overlap in these two systems. They share a lot of the same neural networks.”</p>
<p>The research was funded by the National Institute of Mental Health and the Picower Institute Innovation Fund.</p>
Memory, Learning, Brain and cognitive sciences, Neuroscience, Picower Institute, Research, School of Science
Ten researchers from MIT and Broad receive NIH Director’s Awards
http://news.mit.edu/2017/ten-mit-and-broad-researchers-receive-nih-director-awards-1005
Awards support high-risk, high-impact biomedical research.
Thu, 05 Oct 2017 14:55:01 -0400
Julie Pryor | McGovern Institute for Brain Research
http://news.mit.edu/2017/ten-mit-and-broad-researchers-receive-nih-director-awards-1005
<p>The <a href="https://commonfund.nih.gov/highrisk" target="_blank">High-Risk, High-Reward Research (HRHR) program</a>, supported by the National Institutes of Health (NIH) Common Fund, has awarded 86 grants to scientists with unconventional approaches to major challenges in biomedical and behavioral research. Ten of the awardees are affiliated with MIT and the Broad Institute of MIT and Harvard.</p>
<p>The NIH typically supports research projects, not individual scientists, but the HRHR program identifies specific researchers with innovative ideas to address gaps in biomedical research. The program issues four types of awards annually — the <a href="https://commonfund.nih.gov/pioneer/" target="_blank">Pioneer Award</a>, the <a href="https://commonfund.nih.gov/newinnovator/" target="_blank">New Innovator Award</a>, the <a href="https://commonfund.nih.gov/TRA/" target="_blank">Transformative Research Award</a>, and the <a href="https://commonfund.nih.gov/earlyindependence/" target="_blank">Early Independence Award</a> — to “high-caliber investigators whose ideas stretch the boundaries of our scientific knowledge.”</p>
<p>Four researchers who are affiliated with either MIT or the Broad Institute received this year’s New Innovator Awards, which support “unusually innovative research” from early career investigators. They are:</p>
<ul>
<li><a href="https://www.broadinstitute.org/bios/paul-blainey" target="_blank">Paul Blainey</a>, an MIT assistant professor of biological engineering and a core member of the Broad Institute, is an expert in microanalysis systems for studies of individual molecules and cells. The award will fund the establishment of a new technology that enables advanced readout from living cells.</li>
<li><a href="https://www.media.mit.edu/people/esvelt/overview/" target="_blank">Kevin Esvelt</a>, an associate professor of media arts and sciences at MIT’s Media Lab, invents new ways to study and influence the evolution of ecosystems. Esvelt plans to use the NIH grant to develop powerful “daisy drive” systems for more precise genetic alterations of wild organisms. Such an intervention has the potential to serve as a powerful weapon against malaria, Zika, Lyme disease, and many other infectious diseases.</li>
<li><a href="http://macoskolab.com/" target="_blank">Evan Macosko</a> is an associate member of the Broad Institute who develops molecular techniques to more deeply understand the function of cellular specialization in the nervous system. Macosko’s award will fund a novel technology, Slide-seq, which enables genome-wide expression analysis of brain tissue sections at single-cell resolution.</li>
<li><a href="http://www.schlaucohenlab.com/" target="_blank">Gabriela Schlau-Cohen</a>, an MIT assistant professor of chemistry, combines tools from chemistry, optics, biology, and microscopy to develop new approaches to study the dynamics of biological systems. Her award will be used to fund the development of a new nanometer-distance assay that directly accesses protein motion with unprecedented spatiotemporal resolution under physiological conditions.</li>
</ul>
<p>Recipients of the Early Independence Award include three Broad Institute Fellows. The award offers “exceptional junior scientists” the opportunity to skip traditional postdoctoral training and move immediately into independent research positions.</p>
<ul>
<li>Ahmed Badran is a Broad Institute Fellow who studies the function of ribosomes and the control of protein synthesis. Ribosomes are important targets for antibiotics, and the NIH award will support the development of a new technology platform for probing ribosome function within living cells.</li>
<li><a href="https://www.insitubiology.org/people/" target="_blank">Fei Chen</a>, a Broad Institute Fellow who is also a research affiliate at MIT’s McGovern Institute for Brain Research, has pioneered novel molecular and microscopy tools to illuminate biological pathways and function. He will use one of these tools, expansion microscopy, to explore the molecular basis of glioblastomas, an aggressive form of brain cancer.</li>
<li><a href="http://web.mit.edu/hilaryf/www/">Hilary Finucane</a>, a Broad Institute Fellow who recently received her PhD from MIT’s Department of Mathematics, develops computational methods for analyzing biological data. She plans to develop methods to analyze large-scale genomic data to identify disease-relevant cell types and tissues, a necessary first step for understanding molecular mechanisms of disease.</li>
</ul>
<p>Among the recipients of the NIH’s Pioneer Awards are <a href="https://tyelab.mit.edu/">Kay Tye</a>, an assistant professor of brain and cognitive sciences at MIT and a member of MIT’s Picower Institute for Learning and Memory, and <a href="http://zlab.mit.edu/">Feng Zhang</a>, the James and Patricia Poitras ’63 Professor in Neuroscience, an associate professor of brain and cognitive sciences and biological engineering at MIT, a core member of the Broad Institute, and an investigator at MIT’s McGovern Institute for Brain Research. Recipients of this award are challenged to pursue “groundbreaking, high-impact approaches to a broad area of biomedical or behavioral science.” Tye, who studies the brain mechanisms underlying emotion and behavior, will use her award to look at the neural representation of social homeostasis and social rank. Zhang, who pioneered the gene-editing technology known as <a href="http://mcgovern.mit.edu/news/videos/genome-editing-with-crispr-cas9/" target="_blank">CRISPR</a>, plans to develop a suite of tools designed to achieve precise genome surgery for repairing disease-causing changes in DNA.</p>
<p><a href="http://syntheticneurobiology.org/" target="_blank">Ed Boyden</a>, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research, is a recipient of the Transformative Research Award. This award promotes “cross-cutting, interdisciplinary approaches that could potentially create or challenge existing paradigms.” Boyden, who develops new strategies for understanding and engineering brain circuits, will use the grant to develop high-speed 3-D imaging of neural activity.</p>
<p>This year, the NIH issued a total of 12 Pioneer Awards, 55 New Innovator Awards, 8 Transformative Research Awards, and 11 Early Independence Awards. The awards total $263 million and represent contributions from the NIH Common Fund; National Institute of General Medical Sciences; National Institute of Mental Health; National Center for Complementary and Integrative Health; and National Institute of Dental and Craniofacial Research.</p>
<p>“I continually point to this program as an example of the creative and revolutionary research NIH supports,” said NIH Director Francis S. Collins. “The quality of the investigators and the impact their research has on the biomedical field is extraordinary.”</p>
Ten from MIT and the Broad Institute recently received NIH Director's Awards. Top row (l-r): Feng Zhang, Kay Tye, Kevin Esvelt, Fei Chen; Bottom row (l-r): Gabriela Schlau-Cohen, Ahmed Badran, Hilary Finucane, Ed Boyden. Not pictured: Paul Blainey and Evan Macosko. Awards, honors and fellowships, Funding, Grants, Faculty, National Institutes of Health (NIH), McGovern Institute, Media Lab, Picower Institute, Broad Institute, Biological engineering, Brain and cognitive sciences, Chemistry, Mathematics, School of Science, School of Engineering, School of Architecture and Planning, Graduate, postdoctoralBabies can learn that hard work pays offhttp://news.mit.edu/2017/babies-try-harder-seeing-adults-0921
Study finds infants try harder after seeing adults struggle to achieve a goal.Thu, 21 Sep 2017 13:59:59 -0400Anne Trafton | MIT News Officehttp://news.mit.edu/2017/babies-try-harder-seeing-adults-0921<p>If at first you don’t succeed, try, try again.</p>
<p>A new study from MIT reveals that babies as young as 15 months can learn to follow this advice. The researchers found that babies who watched an adult struggle at two different tasks before succeeding tried harder at their own difficult task, compared to babies who saw an adult succeed effortlessly.</p>
<p>The study suggests that infants can learn the value of effort after seeing just a couple of examples of adults trying hard, though the researchers have not yet studied how long the effect lasts. And although the study took place in a laboratory setting, the findings may offer some guidance for parents who hope to instill the value of effort in their children, the researchers say.</p>
<p>“There’s some pressure on parents to make everything look easy and not get frustrated in front of their children,” says Laura Schulz, a professor of cognitive science at MIT. “There’s nothing you can learn from a laboratory study that directly applies to parenting, but this does at least suggest that it may not be a bad thing to show your children that you are working hard to achieve your goals.”</p>
<p>Schulz is the senior author of the study, which appears in the Sept. 21 online edition of <em>Science</em>. Julia Leonard, an MIT graduate student, is the first author of the paper, and MIT undergraduate Yuna Lee is also an author.</p>
<p><strong>Putting in the effort</strong></p>
<p>Many recent studies have explored the value of hard work. Some have found that children’s persistence, or “grit,” can predict success above and beyond what IQ predicts. Other studies have found that children’s beliefs regarding effort also matter: Those who think putting in effort leads to better outcomes do better in school than those who believe success depends on a fixed level of intelligence.</p>
<p>Leonard and Schulz were interested in studying how children might learn, at a very early age, how to decide when to try hard and when it’s not worth the effort. Schulz’s previous work has shown that babies can learn causal relationships from just a few examples.</p>
<p>“We were wondering if they can do similar fast learning from a little bit of data about when effort is really worth it,” Leonard says.</p>
<p>To do that, they designed an experiment in which 15-month-old babies first watched an adult perform two tasks: removing a toy frog from a container and removing a key chain from a carabiner. Half of the babies saw the adult quickly succeed at the task three times within 30 seconds, while the other half saw her struggle for 30 seconds before succeeding.</p>
<p>The experimenter then showed the baby a musical toy. This toy had a button that looked like it should turn the toy on but actually did not work; there was also a concealed, functional button on the bottom. Out of the baby’s sight, the researcher turned the toy on, to demonstrate that it played music, then turned it off and gave it to the baby.</p>
<p>Each baby was given two minutes to play with the toy, and the researchers recorded how many times the babies tried to press the button that seemed like it should turn the toy on. They found that babies who had seen the experimenter struggle before succeeding pressed the button nearly twice as many times overall as those who saw the adult easily succeed. They also pressed it nearly twice as many times before first asking for help or tossing the toy.</p>
<p>“There wasn’t any difference in how long they played with the toy or in how many times they tossed it to their parent,” Leonard says. “The real difference was in the number of times they pressed the button before they asked for help and in total.”</p>
<p>The researchers also found that direct interactions with the babies made a difference. When the experimenter said the infants’ names, made eye contact with them, and talked directly to them, the babies tried harder than when the experimenter did not directly engage with the babies.</p>
<p>“What we found, consistent with many other studies, is that using those pedagogical cues is an amplifier. The effect doesn’t vanish, but it becomes much weaker without those cues,” Schulz says.</p>
<p><strong>A limited resource</strong></p>
<p>A key takeaway from the study is that people appear to be able to learn, from an early age, how to make decisions regarding effort allocation, the researchers say.</p>
<p>“We’re a somewhat puritanical culture, especially here in Boston. We value effort and hard work,” Schulz says. “But really the point of the study is you don’t actually want to put in a lot of effort across the board. Effort is a limited resource. Where do you deploy it, and where do you not?”</p>
<p>Kiley Hamlin, an associate professor of psychology at the University of British Columbia, described the study as “a lovely demonstration that something we have long thought critical to older children’s and adults’ likelihood of achieving success in school and in life — persistence on task — can be influenced in infants in the first half of the second year.”</p>
<p>Hamlin, who was not involved in the study, said the findings suggest two important things: “First, infants seem to be learning something about persistence in general, rather than about how to best solve task A or task B specifically. Second, influencing our infants’ persistence, at least in the short term, might (ironically) take relatively little effort on our part.”</p>
<p>The researchers hope to investigate how long this effect might last after the initial experiment. Another possible avenue of research is whether the effect would be as strong with different kinds of tasks — for example, if it was less clear to the babies what the adult was trying to achieve, or if the babies were given toys that were meant for older children.</p>
<p>The research was funded by the National Science Foundation Graduate Research Fellowship Program, the MIT Center for Brains, Minds and Machines, and the Simons Center for the Social Brain.</p>
Researchers found that babies who watched an adult struggle to complete tasks before succeeding tried harder at their own difficult task, compared to babies who saw an adult succeed without effort.
Research, Brain and cognitive sciences, Learning, Behavior, School of Science, National Science Foundation (NSF), Center for Brains Minds and MachinesGene-editing technology developer Feng Zhang awarded $500,000 Lemelson-MIT Prizehttp://news.mit.edu/2017/feng-zhang-awarded-lemelson-mit-prize-0919
MIT associate professor and member of the Broad Institute and McGovern Institute recognized for commitment to invention, collaboration, and mentorship.
Tue, 19 Sep 2017 06:00:00 -0400Lemelson-MIT Programhttp://news.mit.edu/2017/feng-zhang-awarded-lemelson-mit-prize-0919<p>Feng Zhang, a pioneer of the revolutionary CRISPR gene-editing technology, TAL effector proteins, and optogenetics, is the recipient of the 2017 $500,000 Lemelson-MIT Prize, the largest cash prize for invention in the United States. Zhang is a core member of the Broad Institute of MIT and Harvard, an investigator at the McGovern Institute for Brain Research, the James and Patricia Poitras Professor in Neuroscience at MIT, and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT.</p>
<p>Zhang and his team were first to develop and demonstrate successful methods for using an engineered CRISPR-Cas9 system to edit genomes in living mouse and human cells and have turned CRISPR technology into a practical and shareable collection of tools for robust gene editing and epigenomic manipulation. CRISPR, short for Clustered Regularly Interspaced Short Palindromic Repeats, has been harnessed by Zhang and his team as a groundbreaking gene-editing tool that is simple and versatile to use. A key tenet of Zhang’s is to encourage further development and research through open sharing of tools and scientific collaboration. Zhang believes that wide use of CRISPR-based tools will further our understanding of biology, allowing scientists to identify genetic differences that contribute to diseases and, eventually, provide the basis for new therapeutic techniques.</p>
<p>Zhang’s lab has trained thousands of researchers to use CRISPR technology, and since 2013 he has shared over 40,000 plasmid samples with labs around the world both directly and through the nonprofit Addgene, enabling wide use of his CRISPR tools in their research.</p>
<p>Zhang began working in a gene therapy laboratory at the age of 16 and has played key roles in the development of multiple technologies. Prior to harnessing CRISPR-Cas9, Zhang engineered microbial TAL effectors (TALEs) for use in mammalian cells, working with colleagues at Harvard University, authoring multiple publications on the subject and becoming a co-inventor on several patents on TALE-based technologies. Zhang was also a key member of the team at Stanford University that harnessed microbial opsins for developing optogenetics, which uses light signals and light-sensitive proteins to monitor and control activity in brain cells. This technology can help scientists understand how cells in the brain affect mental and neurological illnesses. Zhang has co-authored multiple publications on optogenetics and is a co-inventor on several patents related to this technology.</p>
<p>Zhang’s numerous scientific discoveries and inventions, as well as his commitment to mentorship and collaboration, earned him the Lemelson-MIT Prize, which honors outstanding mid-career inventors who improve the world through technological invention and demonstrate a commitment to mentorship in science, technology, engineering and mathematics (STEM).</p>
<p>“Feng’s creativity and dedication to problem-solving impressed us,” says Stephanie Couch, executive director of the Lemelson-MIT Program. “Beyond the breadth of his own accomplishments, Feng and his lab have also helped thousands of scientists across the world access the new technology to advance their own scientific discoveries.”</p>
<p>“It is a tremendous honor to receive the Lemelson-MIT Prize and to join the company of so many incredibly impactful inventors who have won this prize in years past,” says Zhang. “Invention has always been a part of my life; I think about new problems every day and work to solve them creatively. This prize is a testament to the passionate work of my team and the support of my family, teachers, colleagues and counterparts around the world.”</p>
<p>The $500,000 prize, which bears no restrictions in how it can be used, is made possible through the support of The Lemelson Foundation, the world’s leading funder of invention in service of social and economic change.</p>
<p>“We are thrilled to honor Dr. Zhang, who we commend for his advancements in genetics, and more importantly, his willingness to share his discoveries to advance the work of others around the world,” says Dorothy Lemelson, chair of The Lemelson Foundation. “Zhang’s work is inspiring a new generation of inventors to tackle the biggest problems of our time.”</p>
<p>Zhang will speak at <a href="https://events.technologyreview.com/emtech/17/" target="_blank">EmTech MIT</a>, the annual conference on emerging technologies hosted by <em><a href="http://www.technologyreview.com/" target="_blank">MIT Technology Review</a></em> at the MIT Media Lab on Tuesday, Nov. 7.</p>
<p>The Lemelson-MIT Program is now seeking nominations for the 2018 $500,000 Lemelson-MIT Prize. Please contact the Lemelson-MIT Program at <a href="mailto:awards-lemelson@mit.edu?subject=Lemelson-MIT%20Prize">awards-lemelson@mit.edu</a> for more information or visit the <a href="http://lemelson.mit.edu/prize" target="_blank">Lemelson-MIT Prize website</a>.</p>
2017 $500,000 Lemelson-MIT Prize winner Feng ZhangPhoto: Justin Knight/McGovern InstituteAwards, honors and fellowships, Faculty, CRISPR, Genome editing, DNA, Genetics, Lemelson-MIT, Biological engineering, Brain and cognitive sciences, School of Science, School of Engineering, Invention, McGovern Institute, Broad InstituteAnalyzing the language of colorhttp://news.mit.edu/2017/analyzing-language-color-0918
Cognitive scientists find that people can more easily communicate warmer colors than cool ones.Mon, 18 Sep 2017 15:00:00 -0400Anne Trafton | MIT News Officehttp://news.mit.edu/2017/analyzing-language-color-0918<p>The human eye can perceive millions of different colors, but the number of categories human languages use to group those colors is much smaller. Some languages use as few as three color categories (words corresponding to black, white, and red), while the languages of industrialized cultures use up to 10 or 12 categories.</p>
<p>In a new study, MIT cognitive scientists have found that languages tend to divide the “warm” part of the color spectrum into more color words, such as orange, yellow, and red, compared to the “cooler” regions, which include blue and green. This pattern, which they found across more than 100 languages, may reflect the fact that most objects that stand out in a scene are warm-colored, while cooler colors such as green and blue tend to be found in backgrounds, the researchers say.</p>
<p>This leads to more consistent labeling of warmer colors by different speakers of the same language, the researchers found.</p>
<p>“When we look at it, it turns out it’s the same across every language that we studied. Every language has this amazing similar ordering of colors, so that reds are more consistently communicated than greens or blues,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the first author of the study, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of Sept. 18.</p>
<p>The paper’s other senior author is Bevil Conway, an investigator at the National Eye Institute (NEI). Other authors are MIT postdoc Richard Futrell, postdoc Julian Jara-Ettinger, former MIT graduate students Kyle Mahowald and Leon Bergen, NEI postdoc Sivalogeswaran Ratnasingam, MIT research assistant Mitchell Gibson, and University of Rochester Assistant Professor Steven Piantadosi.</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>Color me surprised</strong></p>
<p>Gibson began this investigation of color after accidentally discovering during another study that there is a great deal of variation in the way colors are described by members of the Tsimane’, a tribe that lives in remote Amazonian regions of Bolivia. He found that most Tsimane’ consistently use words for white, black, and red, but there is less agreement among them when naming colors such as blue, green, and yellow.</p>
<p>Working with Conway, who was then an associate professor studying visual perception at Wellesley College, Gibson decided to delve further into this variability. The researchers asked about 40 Tsimane’ speakers to name 80 color chips, which were evenly distributed across the visible spectrum of color.</p>
<p>Once they had these data, the researchers applied an information theory technique that allowed them to calculate a feature they called “surprisal,” which is a measure of how consistently different people describe, for example, the same color chip with the same color word.</p>
<p>When a particular word (such as “blue” or “green”) is used to describe many different color chips, each of those chips carries higher surprisal. Conversely, chips that people consistently label with a single word have low surprisal, while chips that different people label with different words have higher surprisal. The researchers found that in Tsimane’, English, and Spanish alike, cool-colored chips had higher average surprisal than warm-colored chips (reds, yellows, and oranges).</p>
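<p>The idea behind this measure can be sketched in a few lines of code: pool the (chip, word) labels across speakers, estimate how predictable each chip is from the word used for it, and average. This is a minimal toy sketch of the information-theoretic concept described above, with made-up data and chip names — not the paper’s exact estimator.</p>

```python
import math
from collections import Counter

def chip_surprisal(naming_data):
    """Average surprisal of each color chip, given (chip, word) labels
    pooled across speakers.

    For chip c: S(c) = sum over words w of P(w|c) * -log2 P(c|w).
    Chips everyone names the same way score low; chips that draw many
    different words (or share a word with many chips) score high.
    """
    pair = Counter(naming_data)                # count of (chip, word) pairs
    word = Counter(w for _, w in naming_data)  # count of each word
    chip = Counter(c for c, _ in naming_data)  # count of each chip
    scores = {}
    for (c, w), n in pair.items():
        p_w_given_c = n / chip[c]   # how often chip c is called w
        p_c_given_w = n / word[w]   # how well w picks out chip c
        scores[c] = scores.get(c, 0.0) - p_w_given_c * math.log2(p_c_given_w)
    return scores

# Hypothetical data: "red1" is always called "red"; "teal1" splits
# between "blue" and "green", and "blue" is also used for another chip.
labels = ([("red1", "red")] * 4 + [("blue1", "blue")] * 4 +
          [("teal1", "blue")] * 2 + [("teal1", "green")] * 2)
s = chip_surprisal(labels)
```

<p>On this toy data the consistently named warm chip gets the lowest surprisal and the ambiguously named cool chip the highest, mirroring the pattern the researchers report.</p>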
<p>The researchers then compared their results to data from the World Color Survey, which performed essentially the same task for 110 languages around the world, all spoken by nonindustrialized societies. Across all of these languages, the researchers found the same pattern.</p>
<p>This reflects the fact that while the warm colors and cool colors occupy a similar amount of space in a chart of the 80 colors used in the test, most languages divide the warmer regions into more color words than the cooler regions. Therefore, there are many more color chips that most people would call “blue” than there are chips that people would define as “yellow” or “red.”</p>
<p>“What this means is that human languages divide that space in a skewed way,” Gibson says. “In all languages, people preferentially bring color words into the warmer parts of the space and they don’t bring them into the cooler colors.”</p>
<p><strong>Colors in the forefront</strong></p>
<p>To explore possible explanations for this trend, the researchers analyzed a database of 20,000 images collected and labeled by Microsoft, and they found that objects in the foreground of a scene are more likely to be a warm color, while cooler colors are more likely to be found in backgrounds.</p>
<p>“Warm colors are in the foreground, they’re all the stuff that we interact with and want to talk about,” Gibson says. “We need to be able to talk about things which are identical except for their color: objects.”</p>
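<p>The foreground analysis can be caricatured with a simple heuristic: classify each pixel as “warm” or “cool” by its hue, then compare the warm fraction of foreground pixels against background pixels. The hue cutoffs below are arbitrary illustrative assumptions, not the researchers’ method.</p>

```python
import colorsys

def is_warm(rgb):
    """True if an (R, G, B) pixel (values 0-255) falls in the
    red-through-yellow hue range. Cutoffs are illustrative only."""
    r, g, b = (v / 255.0 for v in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    # hue 0.0 = red, ~0.17 = yellow, ~0.33 = green, ~0.67 = blue
    return h < 0.22 or h > 0.9

def warm_fraction(pixels):
    """Fraction of pixels classified as warm."""
    pixels = list(pixels)
    return sum(is_warm(p) for p in pixels) / len(pixels)

# Toy "image": a mostly warm object against a green-and-blue background.
foreground = [(255, 0, 0), (255, 140, 0), (0, 0, 255)]
background = [(0, 128, 0), (30, 60, 200), (0, 200, 180)]
```

<p>For a labeled image dataset like the one described above, comparing <code>warm_fraction</code> over object masks versus background masks would reproduce the kind of foreground/background contrast the researchers found.</p>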
<p>Gibson now hopes to study languages spoken by societies found in snowy or desert climates, where background colors are different, to see if their color naming system is different from what he found in this study.</p>
<p>Julie Sedivy, an adjunct associate professor of psychology at the University of Calgary, says the paper makes an important contribution to scientists’ ability to study questions such as how culture and language influence how people perceive the world.</p>
<p>“It’s a big step forward in establishing a more rigorous approach to asking really important questions that in the past have been addressed in a scientifically flimsy way,” says Sedivy, who was not part of the research team. She added that this approach could also be used to study other attributes that are represented by varying numbers of words in different languages, such as odors, tastes, and emotions.</p>
<p>The research was funded by the National Institutes of Health and the National Science Foundation.</p>
MIT researchers have found that languages tend to divide the "warm" part of the color spectrum into more color words than the "cooler" regions, which makes communication of warmer colors more consistent. From left to right, this chart shows the order of most to least efficiently communicated colors, in English, Spanish, and Tsimane' languages.
Courtesy of the researchers (edited by MIT News)Research, Brain and cognitive sciences, Language, Behavior, School of Science, National Institutes of Health (NIH), National Science Foundation (NSF)Studies help explain link between autism, severe infection during pregnancyhttp://news.mit.edu/2017/studies-explain-link-between-autism-severe-infection-during-pregnancy-0913
Bacterial populations in mother’s GI tract may play a central role.Wed, 13 Sep 2017 12:59:59 -0400Anne Trafton | MIT News Officehttp://news.mit.edu/2017/studies-explain-link-between-autism-severe-infection-during-pregnancy-0913<p>Mothers who experience an infection severe enough to require hospitalization during pregnancy are at higher risk of having a child with autism. Two new studies from MIT and the University of Massachusetts Medical School shed more light on this phenomenon and identify possible approaches to preventing it.</p>
<p>In research on mice, the researchers found that the composition of bacterial populations in the mother’s digestive tract can influence whether maternal infection leads to autistic-like behaviors in offspring. They also discovered the specific brain changes that produce these behaviors.</p>
<p>“We identified a very discrete brain region that seems to be modulating all the behaviors associated with this particular model of neurodevelopmental disorder,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Brain and Cognitive Sciences and a member of MIT’s McGovern Institute for Brain Research.</p>
<p>If further validated in human studies, the findings could offer a possible way to reduce the risk of autism, which would involve blocking the function of certain strains of bacteria found in the maternal gut, the researchers say.</p>
<p>Choi and Jun Huh, formerly an assistant professor at UMass Medical School who is now a faculty member at Harvard Medical School, are the senior authors of both papers, which appear in <em>Nature</em> on Sept. 13. MIT postdoc Yeong Shin Yim is the first author of one paper, and UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim are the lead authors of the other.</p>
<p><strong>Reversing symptoms</strong></p>
<p>A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy were linked with a threefold increase in the risk of autism, and serious bacterial infections during the second trimester with a 1.42-fold increase. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.</p>
<p>Similar effects have been described in mouse models of maternal inflammation, and in a 2016 <em>Science</em> paper, Choi and Huh found that a type of immune cells known as Th17 cells, and their effector molecule, called IL-17, are responsible for this effect in mice. IL-17 then interacts with receptors found on brain cells in the developing fetus, leading to irregularities that the researchers call “patches” in certain parts of the cortex.</p>
<p>In one of the new papers, the researchers set out to learn more about these patches and to determine if they were responsible for the behavioral abnormalities seen in those mice, which include repetitive behavior and impaired sociability.</p>
<p>The researchers found that the patches are most common in a part of the brain known as S1DZ. Part of the somatosensory cortex, this region is believed to be responsible for proprioception, or sensing where the body is in space. In these patches, populations of cells called interneurons, which express a protein called parvalbumin, are reduced. Interneurons are responsible for controlling the balance of excitation and inhibition in the brain, and the researchers found that the changes they found in the cortical patches were associated with overexcitement in S1DZ.</p>
<p>When the researchers restored normal levels of brain activity in this area, they were able to reverse the behavioral abnormalities. They were also able to induce the behaviors in otherwise normal mice by overstimulating neurons in S1DZ.</p>
<p>The researchers also discovered that S1DZ sends messages to two other brain regions: the temporal association area of the cortex and the striatum. When the researchers inhibited the neurons connected to the temporal association area, they were able to reverse the sociability deficits. When they inhibited the neurons connected to the striatum, they were able to halt the repetitive behaviors.</p>
<p><strong>Microbial factors</strong></p>
<p>In the second <em>Nature</em> paper, the researchers delved into some of the additional factors that influence whether or not a severe infection leads to autism. Not all mothers who experience severe infection end up having a child with autism, and similarly, not all of the mice in the maternal inflammation model develop behavioral abnormalities.</p>
<p>“This suggests that inflammation during pregnancy is just one of the factors. It needs to work with additional factors to lead all the way to that outcome,” Choi says.</p>
<p>A key clue was that when immune systems in some of the pregnant mice were stimulated, they began producing IL-17 within a day. “Normally it takes three to five days, because IL-17 is produced by specialized immune cells and they require time to differentiate,” Huh says. “We thought that perhaps this cytokine is being produced not from differentiating immune cells, but rather from pre-existing immune cells.”</p>
<p>Previous studies in mice and humans have found populations of Th17 cells in the intestines of healthy individuals. These cells, which help to protect the host from harmful microbes, are thought to be produced after exposure to particular types of harmless bacteria that associate with the epithelium.</p>
<p>The researchers found that only the offspring of mice with one specific type of harmless bacteria, known as segmented filamentous bacteria, had behavioral abnormalities and cortical patches. When the researchers killed those bacteria with antibiotics, the mice produced normal offspring.</p>
<p>“This data strongly suggests that perhaps certain mothers who happen to carry these types of Th17 cell-inducing bacteria in their gut may be susceptible to this inflammation-induced condition,” Huh says.</p>
<p>Humans can also carry strains of gut bacteria known to drive production of Th17 cells, and the researchers plan to investigate whether the presence of these bacteria is associated with autism.</p>
<p>Sarah Gaffen, a professor of rheumatology and clinical immunology at the University of Pittsburgh, says the study clearly demonstrates the link between IL-17 and the neurological effects seen in the mice offspring. “It’s rare for things to fit into such a clear model, where you can identify a single molecule that does what you predicted,” says Gaffen, who was not involved in the study.</p>
<p>The research was funded by the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Howard Hughes Medical Institute, Robert Buxton, the National Research Foundation of Korea, the Searle Scholars Program, a Pew Scholarship for Biomedical Sciences, the Kenneth Rainin Foundation, the National Institutes of Health, and the Hock E. Tan and K. Lisa Yang Center for Autism Research.</p>
Studies have shown that mothers who experience an infection severe enough to require hospitalization during pregnancy are at higher risk of having a child with autism.
Image: MIT News

Topics: Research, Autism, Brain and cognitive sciences, McGovern Institute, School of Science, Bacteria

3Q: Anantha Chandrakasan on new MIT–IBM Watson AI Lab
http://news.mit.edu/2017/3q-anantha-chandrakasan-mit-ibm-watson-ai-lab-0907
Lab seeks to expand the boundaries of research on artificial intelligence.
Thu, 07 Sep 2017 00:01:00 -0400
David L. Chandler | MIT News Office
http://news.mit.edu/2017/3q-anantha-chandrakasan-mit-ibm-watson-ai-lab-0907
<p><em>MIT and IBM jointly </em><a href="http://news.mit.edu/2017/ibm-mit-joint-research-watson-artificial-intelligence-lab-0907"><em>announced today</em></a><em> a 10-year agreement to create the MIT–IBM Watson AI Lab, a new collaboration for research on the frontiers of artificial intelligence. Anantha Chandrakasan, the dean of MIT’s School of Engineering, who led MIT’s work in forging the agreement, sat down with </em>MIT News<em> to discuss the new lab.</em></p>
<p><strong>Q:</strong> What does the new collaboration make possible?</p>
<p><strong>A: </strong>AI is everywhere. It’s used in just about every domain you can think of and is central to diverse fields, from image and speech recognition, to machine learning for disease detection, to drug discovery, to financial modeling for global trade.</p>
<p>This new collaboration will bring together researchers working on the core algorithms and devices that make such applications possible, enabling the pursuit of jointly defined projects. We will focus on basic research and applications, but with new resources and colleagues and tremendous access to real-world data and computational power.</p>
<p>The project will support many different pursuits, from scholarship, to the licensing of technology, to the release of open-source material, to the creation of startups. We hope to use this new lab as a template for many other interactions with industry.</p>
<p>We’ll issue a call for proposals to all researchers at MIT soon; we hope the new lab will attract interest from all five schools. I’ll co-chair the lab alongside Dario Gil, IBM Research VP of AI and IBM Q, and Dario and I will name co-directors from MIT and IBM soon.</p>
<p><strong>Q:</strong> What are the key areas of research that this lab will focus on?</p>
<p><strong>A: </strong>The main areas of focus are AI algorithms, the application of AI to industries (such as biomedicine and cybersecurity), the physics of AI, and ways to use AI to advance shared prosperity.</p>
<p>The core AI theme will focus not only on advancing deep-learning algorithms and other approaches, but also on using AI to understand and enhance human intelligence. One of the goals is to build machine learning and AI systems that excel at both narrow tasks and the human skills of discovery and explanation. In terms of applications, there are some particular targets we have in mind, including being able to detect cancer (e.g., by using AI with imaging in radiology to automatically detect breast cancer) much earlier than is currently possible.</p>
<p>This new collaboration will also provide a framework for aggregating knowledge from different domains. For example, a method that we use for cancer detection might also be useful in detecting other diseases, or the tools we develop to enable this might end up being useful in a non-biomedical context.</p>
<p>The work on the physics of AI will include quantum computing and new kinds of materials, devices, and architectures that will support machine-learning hardware. This will require innovations not only in the way that we think about algorithms and systems, but also at the physical level of devices and materials at the nanoscale.</p>
<p>To that end, IBM will become a founding member of MIT.nano, our new nanotechnology research, fabrication, and imaging facility that is set to open in the summer of 2018.</p>
<p>Lastly, researchers will explore how AI can increase prosperity broadly. They will also develop approaches to mitigate data bias and to ensure that AI systems behave ethically when deployed.</p>
<p><strong>Q:</strong> What effect do you expect this collaboration will have on students and faculty here at MIT?</p>
<p><strong>A:</strong> Above all, the lab will foster collaboration. There will be new projects among MIT researchers and between MIT and IBM researchers. And because the collaboration will also provide more opportunities for students to be involved in advanced research, through programs such as MIT’s Undergraduate Research Opportunities Program (UROP), the benefits of this relationship will extend across campus.</p>
<p>In addition to the new work, we have a lot of ongoing research that we will be able to leverage. Investigators in the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Department of Brain and Cognitive Sciences (BCS), the Media Lab, and the Institute for Data, Systems, and Society (IDSS) are all actively working in AI already. The new lab presents an opportunity to bring them closer together.</p>
<p>This collaboration is positioned to support the creation of startups, in connection with The Engine, MIT’s new venture firm. We also anticipate making connections between the lab and on-campus innovation programs such as the Deshpande Center for Technological Innovation, MIT Sandbox, and the Martin Trust Center for MIT Entrepreneurship, spurring broader commercialization opportunities. This could ultimately help create jobs in the Boston area and support a very strong AI ecosystem, both locally and nationally.</p>
Anantha Chandrakasan, dean of MIT’s School of Engineering, who negotiated the collaboration on behalf of MIT, speaks during the signing of the agreement with representatives of IBM.
Photo: Jake Belcher

Topics: Artificial intelligence, Machine learning, Algorithms, Computer science and technology, Research, Collaboration, School of Engineering, Cambridge, Boston and region, Industry, Computer Science and Artificial Intelligence Laboratory (CSAIL), Media Lab, IDSS, Brain and cognitive sciences, 3 Questions

IBM and MIT to pursue joint research in artificial intelligence, establish new MIT–IBM Watson AI Lab
http://news.mit.edu/2017/ibm-mit-joint-research-watson-artificial-intelligence-lab-0907
IBM plans to make a 10-year, $240 million investment in new lab with MIT to advance AI hardware, software, and algorithms.
Thu, 07 Sep 2017 00:01:00 -0400
MIT News Office
http://news.mit.edu/2017/ibm-mit-joint-research-watson-artificial-intelligence-lab-0907
<p>IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.</p>
<p>The <a href="http://mitibmwatsonailab.mit.edu/">new lab</a> will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM's Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square — and on the neighboring MIT campus.</p>
<p>The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. (Read a <a href="http://news.mit.edu/2017/3q-anantha-chandrakasan-mit-ibm-watson-ai-lab-0907">related Q&amp;A</a> with Chandrakasan.) IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:</p>
<ul>
<li><strong>AI algorithms:</strong> Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.</li>
</ul>
<ul>
<li><strong>Physics of AI: </strong>Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.</li>
</ul>
<ul>
<li><strong>Application of AI to industries: </strong>Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.</li>
</ul>
<ul>
<li><strong>Advancing shared prosperity through AI</strong>: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.</li>
</ul>
<p>In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.</p>
<p>“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”</p>
<p>“I am&nbsp;delighted&nbsp;by this new collaboration,” MIT President L. Rafael Reif says. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring&nbsp;formidable power to a field with staggering potential to&nbsp;advance knowledge and help solve important challenges.”</p>
<p>Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence. The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.</p>
<p>MIT researchers were among those who helped coin and popularize the very phrase “artificial intelligence” in the 1950s. MIT pushed several major advances in the subsequent decades, from neural networks to data encryption to quantum computing to crowdsourcing. Marvin Minsky, a founder of the discipline, collaborated on building the first artificial neural network and he, along with Seymour Papert, advanced learning algorithms. Currently, the Computer Science and Artificial Intelligence Laboratory, the Media Lab, the Department of Brain and Cognitive Sciences, the Center for Brains, Minds and Machines, and the MIT Institute for Data, Systems, and Society serve as connected hubs for AI and related research at MIT.</p>
<p>For more than 20 years, IBM has explored the application of AI across many areas and industries. IBM researchers invented and built Watson, which is a cloud-based AI platform being used by businesses, developers, and universities to fight cancer, improve classroom learning, minimize pollution, enhance agriculture and oil and gas exploration, better manage financial investments, and much more. Today, IBM scientists across the globe are working on fundamental advances in AI algorithms, science and technology that will pave the way for the next generation of artificially intelligent systems.</p>
<p>For information about employment opportunities with IBM at the new AI Lab, please visit MITIBMWatsonAILab.mit.edu.</p>
MIT President L. Rafael Reif, left, and John Kelly III, IBM senior vice president, Cognitive Solutions and Research, shake hands at the conclusion of a signing ceremony establishing the new MIT–IBM Watson AI Lab.
Photo: Jake Belcher

Topics: Artificial intelligence, Machine learning, Algorithms, Computer science and technology, Research, Collaboration, School of Engineering, Cambridge, Boston and region, Industry, Computer Science and Artificial Intelligence Laboratory (CSAIL), Media Lab, IDSS, Brain and cognitive sciences, President L. Rafael Reif

Robotic system monitors specific neurons
http://news.mit.edu/2017/robotic-system-monitors-specific-neurons-0830
Success rate is comparable to that of highly trained scientists performing the process manually.
Wed, 30 Aug 2017 12:00:00 -0400
Anne Trafton | MIT News Office
http://news.mit.edu/2017/robotic-system-monitors-specific-neurons-0830
<p>Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.</p>
<p>To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.</p>
<p>This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.</p>
<p>“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.</p>
<p>Boyden is the senior author of the paper, which appears in the Aug. 30 issue of <em>Neuron</em>. The paper’s lead author is MIT graduate student Ho-Jun Suk.</p>
<p><strong>Precision guidance</strong></p>
<p>For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.</p>
<p>There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.</p>
<p>Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.</p>
<p>Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.</p>
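<p>The impedance-based stopping logic described above can be sketched as a simple control loop. This is an illustrative reconstruction, not the authors' published code: the hardware hooks (<code>read_impedance</code>, <code>step_down</code>, <code>apply_suction</code>) and the threshold and step values are hypothetical stand-ins for the real instrument interface.</p>

```python
def blind_autopatch(read_impedance, step_down, apply_suction,
                    rise_threshold=1.15, max_steps=2000):
    """Lower a pipette until impedance rises, signalling cell contact.

    In free solution, current flows easily out of the pipette tip and
    impedance stays near its baseline. A sustained rise relative to
    that baseline suggests the tip has met a cell membrane; the loop
    then halts descent and applies suction to seal onto the membrane.
    Thresholds here are illustrative, not the published values.
    """
    baseline = read_impedance()          # impedance with no cell nearby
    for _ in range(max_steps):
        step_down()                      # advance the pipette a small step
        z = read_impedance()
        if z > baseline * rise_threshold:  # electricity can't flow as well
            apply_suction()              # form a seal on the cell membrane
            return True                  # contact detected; ready to break in
    return False                         # no cell encountered within range
```

Stopping immediately on the first sustained impedance rise is what keeps the tip from poking through the membrane before the seal forms.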
<p>The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.</p>
<p>“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”</p>
<p>To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.</p>
<p>“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”</p>
<p>By combining several imaging processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.</p>
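<p>The two-signal contact test can be sketched as a small predicate that requires both cues to agree. The function name and thresholds below are illustrative assumptions, not the published values; the source says only that imaging guides the pipette to within about 25 microns and that the combined signals detect contact more accurately than either alone.</p>

```python
def contact_detected(image_distance_um, impedance_ratio,
                     near_um=25.0, rise_threshold=1.1):
    """Fuse imaging and impedance cues to judge pipette-cell contact.

    Imaging alone guides the pipette to within roughly 25 microns of
    the target; inside that range, requiring both a small image-
    estimated gap and an impedance rise over baseline is more reliable
    than either cue on its own, since tissue deformation can fool the
    image and passing cells can fool the impedance signal.
    """
    near_target = image_distance_um <= near_um       # imaging cue
    impedance_rise = impedance_ratio >= rise_threshold  # electrical cue
    return near_target and impedance_rise
```

Requiring agreement between the two cues is a conjunction: an impedance rise far from the target (a non-target cell in the path) or a close image estimate without a rise (a displaced cell) both read as "no contact."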
<p>The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.</p>
<p>Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.</p>
<p><strong>Unraveling circuits</strong></p>
<p>This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have been previously linked with Alzheimer’s. In a recent study of mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, it was reported that inducing a specific frequency of brain wave oscillation in interneurons in the hippocampus could help to clear amyloid plaques similar to those found in Alzheimer’s patients.</p>
<p>“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”</p>
<p>This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.</p>
<p>Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.</p>
<p>“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”</p>
<p>To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, <a href="http://autopatcher.org/">autopatcher.org</a>.</p>
<p>Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.</p>
MIT engineers have devised a way to automate the process of monitoring neurons in a living brain using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell. In this image, a pipette guided by a robotic arm approaches a neuron identified with a fluorescent stain.
Image: Ho-Jun Suk

Topics: Research, Brain and cognitive sciences, Biological engineering, Neuroscience, Memory, Alzheimer's, Robotics, Robots, McGovern Institute, Media Lab, School of Science, School of Engineering, School of Architecture and Planning, National Institutes of Health (NIH)

How we recall the past
http://news.mit.edu/2017/neuroscientists-discover-brain-circuit-retrieving-memories-0817
Neuroscientists discover a brain circuit dedicated to retrieving memories.
Thu, 17 Aug 2017 11:59:59 -0400
Anne Trafton | MIT News Office
http://news.mit.edu/2017/neuroscientists-discover-brain-circuit-retrieving-memories-0817
<p>When we have a new experience, the memory of that event is stored in a neural circuit that connects several parts of the hippocampus and other brain structures. Each cluster of neurons may store different aspects of the memory, such as the location where the event occurred or the emotions associated with it.</p>
<p>Neuroscientists who study memory have long believed that when we recall these memories, our brains turn on the same hippocampal circuit that was activated when the memory was originally formed. However, MIT neuroscientists have now shown, for the first time, that recalling a memory requires a “detour” circuit that branches off from the original memory circuit.</p>
<p>“This study addresses one of the most fundamental questions in brain research — namely how episodic memories are formed and retrieved — and provides evidence for an unexpected answer: differential circuits for retrieval and formation,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, the director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory, and the study’s senior author.</p>
<p>This distinct recall circuit has never been seen before in a vertebrate animal, although a study published last year found a similar recall circuit in the worm <em>Caenorhabditis elegans</em>.</p>
<p>Dheeraj Roy, a recent MIT PhD recipient, and research scientist Takashi Kitamura are the lead authors of the paper, which appears in the Aug. 17 online edition of <em>Cell</em>. Other MIT authors are postdocs Teruhiro Okuyama and Sachie Ogawa, and graduate student Chen Sun. Yuichi Obata and Atsushi Yoshiki of the RIKEN Brain Science Institute are also authors of the paper.</p>
<p><strong>Parts unknown</strong></p>
<p>The hippocampus is divided into several regions with different memory-related functions —&nbsp;most of which have been well-explored, but a small area called the subiculum has been little-studied. Tonegawa’s lab set out to investigate this region using mice that were genetically engineered so that their subiculum neurons could be turned on or off using light.</p>
<p>The researchers used this approach to control memory cells during a fear-conditioning event — that is, a mild electric shock delivered when the mouse is in a particular chamber.</p>
<p>Previous research has shown that encoding these memories involves cells in a part of the hippocampus called CA1, which then relays information to another brain structure called the entorhinal cortex. In each location, small subsets of neurons are activated, forming memory traces known as engrams.</p>
<p>“It’s been thought that the circuits which are involved in forming engrams are the same as the circuits involved in the re-activation of these cells that occurs during the recall process,” Tonegawa says.</p>
<p>However, scientists had previously identified anatomical connections that detour from CA1 through the subiculum, which then connects to the entorhinal cortex. The function of this circuit, and of the subiculum in general, was unknown.</p>
<p>In one group of mice, the MIT team inhibited neurons of the subiculum as the mice underwent fear conditioning, which had no effect on their ability to later recall the experience. However, in another group, they inhibited subiculum neurons after fear conditioning had occurred, when the mice were placed back in the original chamber. These mice did not show the usual fear response, demonstrating that their ability to recall the memory was impaired.</p>
<p>This provides evidence that the detour circuit involving the subiculum is necessary for memory recall but not for memory formation. Other experiments revealed that the direct circuit from CA1 to the entorhinal cortex is not necessary for memory recall, but is required for memory formation.</p>
<p>“Initially, we did not expect the outcome would come out this way,” Tonegawa says. “We just planned to explore what the function of the subiculum could be.”</p>
<p>“This paper is a tour de force of advanced neuroscience techniques, with an intriguing core result showing the existence and importance of different pathways for formation and retrieval of hippocampus-dependent memories,” says Karl Deisseroth, a professor of bioengineering and psychiatry and behavioral sciences at Stanford University, who was not involved in the study.</p>
<p><strong>Editing memories</strong></p>
<p>Why would the hippocampus need two distinct circuits for memory formation and recall? The researchers found evidence for two possible explanations. One is that interactions of the two circuits make it easier to edit or update memories. As the recall circuit is activated, simultaneous activation of the memory formation circuit allows new information to be added.</p>
<p>“We think that having these circuits in parallel helps the animal first recall the memory, and when needed, encode new information,” Roy says. “It’s very common when you remember a previous experience, if there’s something new to add, to incorporate the new information into the existing memory.”</p>
<p>Another possible function of the detour circuit is to help stimulate longer-term stress responses. The researchers found that the subiculum connects to a pair of structures in the hypothalamus known as the mammillary bodies, which stimulate the release of stress hormones called corticosteroids. This response takes place at least an hour after the fearful memory is recalled.</p>
<p>While the researchers identified the two-circuit system in experiments involving memories with an emotional component (both positive and negative), the system is likely involved in any kind of episodic memory, the researchers say.</p>
<p>The findings also suggest an intriguing possibility related to Alzheimer’s disease, according to the researchers. Last year, Roy and others in Tonegawa’s lab found that mice with a version of early-stage Alzheimer’s disease have trouble recalling memories but are still able to form new memories. The new study suggests that this subiculum circuit may be affected in Alzheimer’s disease, although the researchers have not studied this.</p>
<p>The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.</p>
MIT neuroscientists have shown, for the first time, that recalling a memory requires a “detour” circuit that branches off from the original memory circuit. This low-magnification image shows that hippocampal CA1 neurons (red) and dorsal subiculum neurons (green) can be genetically identified using two different protein markers.
Image: Dheeraj Roy/Tonegawa Lab, MIT

Topics: Research, Brain and cognitive sciences, Picower Institute, School of Science, Neuroscience, Memory