MIT News - Media Lab
http://news.mit.edu/rss/topic/media-lab

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Unique visual stimulation may be new treatment for Alzheimer’s
http://news.mit.edu/2016/visual-stimulation-treatment-alzheimer-1207
Noninvasive technique reduces beta amyloid plaques in mouse models of Alzheimer’s disease.
Wed, 07 Dec 2016 13:00:00 -0500 | Anne Trafton | MIT News Office

<p>Using LED lights flickering at a specific frequency, MIT researchers have shown that they can substantially reduce the beta amyloid plaques seen in Alzheimer’s disease in the visual cortex of mice.</p>
<p>This treatment appears to work by inducing brain waves known as gamma oscillations, which the researchers discovered help the brain suppress beta amyloid production and invigorate cells responsible for destroying the plaques.</p>
<p>Further research will be needed to determine if a similar approach could help Alzheimer’s patients, says Li-Huei Tsai, the Picower Professor of Neuroscience, director of MIT’s Picower Institute for Learning and Memory, and senior author of the study, which appears in the Dec. 7 online edition of <em>Nature</em>.</p>
<p>“It’s a big ‘if,’ because so many things have been shown to work in mice, only to fail in humans,” Tsai says. “But if humans behave similarly to mice in response to this treatment, I would say the potential is just enormous, because it’s so noninvasive, and it’s so accessible.”</p>
<p>Tsai and Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab and the McGovern Institute for Brain Research, who is also an author of the <em>Nature</em> paper, have started a company called Cognito Therapeutics to pursue tests in humans. The paper’s lead authors are graduate student Hannah Iaccarino and Media Lab research affiliate Annabelle Singer.</p>
<p>“This important announcement may herald a breakthrough in the understanding and treatment of Alzheimer's disease, a terrible affliction affecting millions of people and their families around the world,” says Michael Sipser, dean of MIT’s School of Science.&nbsp;“Our MIT scientists have opened the door to an entirely new direction of research on this brain disorder and the mechanisms that may cause or prevent it.&nbsp;I find it extremely exciting.”</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>Brain wave stimulation</strong></p>
<p>Alzheimer’s disease, which affects more than 5 million people in the United States, is characterized by beta amyloid plaques that are suspected to be harmful to brain cells and to interfere with normal brain function. Previous studies have hinted that Alzheimer’s patients also have impaired gamma oscillations. These brain waves, which range from 25 to 80 hertz (cycles per second), are believed to contribute to normal brain functions such as attention, perception, and memory.</p>
<p>In a study of mice that were genetically programmed to develop Alzheimer’s but did not yet show any plaque accumulation or behavioral symptoms, Tsai and her colleagues found that gamma oscillations were impaired as the mice ran a maze, during patterns of activity essential for learning and memory.</p>
<p>Next, the researchers stimulated gamma oscillations at 40 hertz in a brain region called the hippocampus, which is critical in memory formation and retrieval. These initial studies relied on a technique known as optogenetics, co-pioneered by Boyden, which allows scientists to control the activity of genetically modified neurons by shining light on them. Using this approach, the researchers stimulated certain brain cells known as interneurons, which then synchronize the gamma activity of excitatory neurons.</p>
<p>After an hour of stimulation at 40 hertz, the researchers found a 40 to 50 percent reduction in the levels of beta amyloid proteins in the hippocampus. Stimulation at other frequencies, ranging from 20 to 80 hertz, did not produce this decline.</p>
<p>Tsai and colleagues then began to wonder if less-invasive techniques might achieve the same effect. Tsai and Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, a member of the Picower Institute, and an author of the paper, came up with the idea of using an external stimulus — in this case, light — to drive gamma oscillations in the brain. The researchers built a simple device consisting of a strip of LEDs that can be programmed to flicker at different frequencies.</p>
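The timing such a device must produce is simple to sketch. The snippet below is purely illustrative arithmetic, not the researchers’ device firmware, and `flicker_schedule` is a hypothetical helper:

```python
def flicker_schedule(freq_hz: float, duration_s: float):
    """Return (half_period_s, n_cycles) for a square-wave LED flicker.

    At 40 Hz each on/off cycle lasts 25 ms: the LED spends 12.5 ms on
    and 12.5 ms off per cycle.
    """
    period_s = 1.0 / freq_hz        # one full on/off cycle, in seconds
    half_period_s = period_s / 2.0  # time on (and time off) per cycle
    n_cycles = int(duration_s * freq_hz)
    return half_period_s, n_cycles

# A one-hour session at 40 Hz comprises 144,000 on/off cycles.
half_period, cycles = flicker_schedule(40.0, 3600.0)
```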
<p>Using this device, the researchers found that an hour of exposure to light flickering at 40 hertz enhanced gamma oscillations and reduced beta amyloid levels by half in the visual cortex of mice in the very early stages of Alzheimer’s. However, the proteins returned to their original levels within 24 hours.</p>
<p>The researchers then investigated whether a longer course of treatment could help mice with more advanced plaque accumulation. After the mice were treated for an hour a day for seven days, both plaques and free-floating amyloid were markedly reduced. The researchers are now trying to determine how long these effects last.</p>
<p>Furthermore, the researchers found that gamma rhythms also reduced another hallmark of Alzheimer’s disease: the abnormally modified Tau protein, which can form tangles in the brain.</p>
<p>“What this study does, in a very carefully designed and well-executed way, is show that gamma oscillations, which we have known for a long time are linked to cognitive function, play a critical role in the capacity of the brain to clean up deposits,” says Alvaro Pascual-Leone, a professor of neurology at Harvard Medical School who was not involved in the research. “That’s remarkable and surprising, and it opens up the exciting prospect of possible translation to application in humans.”</p>
<p>Tsai’s lab is now studying whether light can drive gamma oscillations in brain regions beyond the visual cortex, and preliminary data suggest that this is possible. They are also investigating whether the reduction in amyloid plaques has any effects on the behavioral symptoms of their Alzheimer’s mouse models, and whether this technique could affect other neurological disorders that involve impaired gamma oscillations.</p>
<p><strong>Two modes of action</strong></p>
<p>The researchers also performed studies to try to figure out how gamma oscillations exert their effects. They found that after gamma stimulation, the process for beta amyloid generation is less active. Gamma oscillations also improved the brain’s ability to clear out beta amyloid proteins, which is normally the job of immune cells known as microglia.</p>
<p>“They take up toxic materials and cell debris, clean up the environment, and keep neurons healthy,” Tsai says.</p>
<p>In Alzheimer’s patients, microglia become highly inflammatory and secrete toxic chemicals that sicken other brain cells. When gamma oscillations were boosted in mice, however, the microglia underwent morphological changes and became more active in clearing away beta amyloid proteins.</p>
<p>“The bottom line is, enhancing gamma oscillations in the brain can do at least two things to reduce amyloid load. One is to reduce beta amyloid production from neurons. And second is to enhance the clearance of amyloids by microglia,” Tsai says.</p>
<p>The researchers also sequenced messenger RNA from the brains of the treated mice and found that hundreds of genes were over- or underexpressed, and they are now investigating the possible impact of those variations on Alzheimer’s disease.</p>
<p>The research was funded by the JPB Foundation, the Cameron Hayden Lord Foundation, a Barbara J. Weedon Fellowship, the New York Stem Cell Foundation Robertson Award, the National Institutes of Health, the Belfer Neurodegeneration Consortium, and the Halis Family Foundation.</p>
“…[I]f humans behave similarly to mice in response to this treatment, I would say the potential is just enormous, because it’s so noninvasive, and it’s so accessible,” says Li-Huei Tsai, the Picower Professor of Neuroscience, when describing a new treatment for Alzheimer’s disease.
Photo: Bryce Vickmark
Topics: Research, Alzheimer's, Brain and cognitive sciences, Biological engineering, Picower Institute, McGovern Institute, Media Lab, IMES, School of Science, School of Engineering, School of Architecture and Planning, Neuroscience

Enhancing education from pre-K to MIT and beyond
http://news.mit.edu/2016/enhancing-education-from-pre-k-to-mit-and-beyond-1123
Two new initiatives take an in-depth look at learning.
Wed, 23 Nov 2016 14:00:01 -0500 | Office of Digital Learning

<p>To improve education — whether pK-12, college, professional training, or online courses — one must first understand how people learn. Applying that understanding on a large scale requires a forward-thinking focus on expanding the reach of high-quality education to learners of all ages, all across the globe.</p>
<p>These are the challenges that drive two Institute-wide initiatives announced by President L. Rafael Reif earlier this year: the <a href="http://mitili.mit.edu/" target="_blank">MIT Integrated Learning Initiative (MITili)</a> and the <a href="http://pk12.mit.edu/" target="_blank">pK-12 Action Group</a>.</p>
<p>The integrated science of learning, now emerging as a significant field of research, is at the core of MITili (pronounced “mightily”). By applying scientific rigor to investigate the methods that lead to effective learning, MITili aims to enhance the educational experience at every level — from improving education at MIT, to inspiring lifelong learning online, to advancing the Institute’s campaign to promote STEM understanding in elementary, middle, and high schools.</p>
<p>Fueled by MIT’s residential education and global online efforts, MITili pulls together resources from across campus to integrate faculty insights and foster rigorous quantitative and qualitative research in education. The initiative leverages expertise in cognitive psychology, neuroscience, economics, engineering, public policy, and other fields.&nbsp;</p>
<p>It is this cross-discipline thinking that led to the recent appointment of Parag Pathak, professor of economics and a founder of the School Effectiveness and Inequality Initiative (SEII), as MITili deputy director.&nbsp;Pathak, who has worked extensively with the Boston school system to make it easier to navigate school assignment systems and level the playing field for city families, will join MITili Director John Gabrieli, a professor in the Department of Brain and Cognitive Sciences, in guiding the group’s vision. Based on Pathak’s background, the new position is a natural fit.</p>
<p>“MIT is known for solving problems, so if we can improve how people learn then we can improve how much education they get,” Pathak explains. “Individuals who have more access to education not only learn more but live longer and are better citizens.”&nbsp;</p>
<p>Supported by two new staff members, Associate Director Jeff Dieffenbach and Program Coordinator Steve Nelson, Pathak and MITili are off and running on several projects, including continued exploration into Boston’s school assignments, an in-depth analysis of charter schools and their effectiveness for special education students, and an upcoming study on the impact of affirmative action policies in education. Says Pathak: “A lot of our work is very fresh and new. By taking a scientific perspective to solve problems, we are breaking free of the old way of thinking.”</p>
<p>The Office of Digital Learning has also established a separate, though related, initiative called the&nbsp;<a href="http://pk12.mit.edu/" target="_blank">pK-12 Action Group</a>, which enables a diverse MIT community to collaborate on STEM projects for pre-kindergarten through 12th grade students and teachers. By working together, MIT faculty, staff, and students amplify their impact on existing efforts — studies, classroom technologies, curriculum, teacher professional development — while driving new work and outreach, all with the goal of understanding how learning happens and transforming how students learn.&nbsp;</p>
<p>Professor Eric Klopfer, director of both the MIT Scheller Teacher Education Program and MIT Education Arcade, has been involved with the pK-12 Action Group since its early stages. Recently named co-chair of the pK-12 advisory group, Klopfer joins Professor Angela Belcher and provides breadth to the leadership team. Associate Director Claudia Urrea brings over 20 years of experience in the field of education and technology. She works together with the faculty to coordinate direction and vision and to engage the larger pK-12 community at MIT.</p>
<p>“We come at this from different perspectives,” Klopfer says. “Angie is passionate about science and engineering and making them accessible to all, while I come from a more established learning and education focus. Both angles are important to tackle these global challenges and make a significant impact on pK-12 education. We’re thinking big.”</p>
<p>Collaboration with the community is key. For this reason, the effort is led by practicing educators, not administrators. And it’s why the work is already making a big difference, with the following initiatives:</p>
<ul>
<li><a href="https://clix.tiss.edu/">Connected Learning Initiative (CLIx)</a>, a cross-unit project with MIT’s Office of Digital Learning, gives thousands of young people from under-served communities in India an opportunity for quality education through the meaningful integration of technology;</li>
<li><a href="http://tsl.mit.edu/">Teaching Systems Lab (TSL)</a>, working in partnership with the Woodrow Wilson National Fellowship Foundation, examines what it takes to prepare new teachers for today’s classrooms and the systems needed to help these teachers transform learning through tomorrow’s learning environments; and</li>
<li>on-campus workshops, which leverage many existing pK-12 efforts at MIT, are designed to provide professional teacher development, advance STEM curricula, and explore new ways to enhance educational experiences.</li>
</ul>
<p>The goal of influencing how people around the world get educated is big, bold — and shared by both MITili and the pK-12 Action Group. But that doesn’t mean the goal is out of reach. As Pathak says: “It all starts with the science of learning.”</p>
Members of the MITili and pK-12 Action Group convene to discuss learning and research directions.
Photo: Isaac Chuang
Topics: Office of Digital Learning, K-12 education, STEM education, Education, Teaching, Academics, Research, Brain and cognitive sciences, Economics, DMSE, Chemical engineering, Media Lab, Faculty, School of Science, SHASS, School of Engineering, School of Architecture and Planning

Forging ahead on climate action
http://news.mit.edu/2016/forging-ahead-climate-action-1122
At UN Climate Change Conference, MIT researchers share insights on implementing climate commitments.
Tue, 22 Nov 2016 00:00:00 -0500 | Emily Dahl | MIT Energy Initiative

<p>Last year, participants in the Paris Agreement on climate change expressed the shared global objective of limiting temperature rise, with each party to the agreement laying out its intended national contributions to addressing climate change. At this year’s UN Climate Change Conference (COP22) in Marrakech, Morocco, as the world wondered what a change in administration could mean for U.S. climate policy and — by extension — the momentum for the Paris Agreement, national and civil society leaders repeatedly expressed their commitment to upholding and advancing implementation of the agreement.</p>
<p>For MIT, the imperative is as clear as ever.</p>
<p>“The Paris Agreement motivated us immensely,” said Maria Zuber, MIT's vice president for research, at a series of conversations hosted by Emerson Collective in Marrakech. “MIT strongly supports the agreement. Collectively, on our campus, we said it is a great starting point — but it’s not enough,” she said.</p>
<p>Zuber spoke with Michael Crow, president of Arizona State University, and Dan Arvizu, Emerson Collective’s chief technology officer and STEM evangelist, on the role of academic research and innovation in meeting global greenhouse gas reduction targets. She went on to describe the Institute’s efforts to conduct research and develop partnerships that foster climate solutions.</p>
<p>These solutions include nature-focused approaches. “Nature-based solutions can play an important part in addressing climate change. Not only can we learn from how natural systems self-regulate, but we can apply that knowledge to designing new technologies and courses of action,” said John Fernández, director of MIT's Environmental Solutions Initiative and a professor of architecture. His initiative is currently exploring partnerships around nature-based climate solutions that protect ecosystems.</p>
<p>At the same event, MIT Media Lab Director Joi Ito discussed the importance of designing systems to solve for multiple problems — such as reducing carbon emissions while also improving quality of life and caring for the environment. “When you think of a complex system like the environment or a city, how do you design for everything in the system so that it’s optimized not just for the one player that has economic value, but for the entire system? That’s the kind of design we need to figure out how to do,” he said, adding: “The people participating day-to-day in the system can be the designers. It’s about bringing science directly into the community and having the community participate in the science.”</p>
<p>Robert Stoner, deputy director of the MIT Energy Initiative (MITEI), expanded on this idea in a breakout discussion on citizen science and education. “The democratization of data with the availability of low-cost measurement technology and access to the Internet creates new opportunities for nonscientists to participate in creating knowledge and using it to improve the world. But [it also creates] potential for that data to be misinterpreted or misused in civil discourse — underscoring the need for scientists to be involved ‘on the playing field’ as interpreters in an ethical and responsible manner,” said Stoner, who is also the director of the Tata Center for Technology and Design.</p>
<p><strong>Crowdsourcing climate solutions</strong></p>
<p>To empower individuals to contribute to climate solutions while employing scientific rigor, MIT’s Climate CoLab has developed a crowdsourcing platform for people around the world to collaborate on creating plans for addressing climate change.</p>
<p>At a COP22 side event with Climate Interactive and the Abibimman Foundation, Climate CoLab project manager Laur Hesse Fisher described the online platform and its contests, in which participants devise individual climate policies and actions as well as integrated national and global plans. Scientific experts analyze and judge the proposals on criteria including feasibility and potential impact. The winners use prize money to help scale their ideas.</p>
<p>Fisher encouraged audience members to submit proposals for a contest open through February with the United Nations Secretary-General’s Climate Resilience Initiative: <a href="http://climatecolab.org/contests/2017/A2R-Anticipating-Climate-Hazards" target="_blank">Anticipate, Absorb, Reshape (A2R)</a>. “We’re running a contest to get your ideas and your projects on how the most vulnerable countries can anticipate the climate hazards that they’re going to face,” she said. “We welcome you to submit your idea so that you can be part of this process.”&nbsp;</p>
<p>Fisher also spoke at an event with the Cities Climate Finance Leadership Alliance to showcase and discuss existing initiatives and practical examples of approaches intended to accelerate climate action at the urban level, and she held several other events to introduce people to Climate CoLab’s platform.</p>
<p>"Climate CoLab shows that new technologies can make new things possible, and that’s what we do at MIT,” said Fisher. “But it’s not only more efficient solar panels or carbon capture technologies — it’s also new ways that the world can work together.”</p>
<p><strong>Sharing interactive climate tools in Africa</strong></p>
<p>Ahead of and throughout COP22, John Sterman, a professor of management at the MIT Sloan School of Management, and Climate Interactive team members have worked to bring their <a href="https://www.climateinteractive.org/tools/" target="_blank">interactive climate policy models and tools</a> to Africa. They have conducted workshops on their jointly developed “<a href="https://www.climateinteractive.org/programs/world-climate/" target="_blank">World Climate</a>” role play throughout Africa and around the world — including sessions with Moroccan business leaders and university students, staff, and faculty. “We’re enabling local scholars, educators, and members of civil society to help their communities learn for themselves about the international climate negotiations, data modeling, and the urgency of emissions reductions for all nations,” said Sterman.</p>
<p>MIT and Climate Interactive have also created new tools to support “climate smart agriculture” in Africa, led by Climate Interactive’s Travis Franck SM ’05 PhD ’09, who is also an MIT research affiliate.</p>
<p>“Our prototype interactive system dynamics model considers how countries can meet two critical goals:&nbsp;expanding food production to support their growing populations and cutting the greenhouse emissions from the agricultural sector,” said Sterman. He and Franck shared this work in several side events at COP22.</p>
<p><strong>Analyzing nations’ climate progress and choices</strong></p>
<p>Graduate students Arun Singh and Michael Davidson came to Marrakech to advance their international climate research and keep abreast of real-time developments in climate policy.</p>
<p>Davidson, who first attended the international climate talks in 2010, researches China’s climate and energy policies related to renewable energy and the electric grid as a PhD candidate with the Institute for Data, Systems, and Society (IDSS) and a research associate with the Joint Program on the Science and Policy of Global Change. He arrived in Marrakech just before the U.S. election and witnessed the uncertainty the outcome created globally, particularly around U.S.-China relations, which had warmed in the lead-up to the Paris Agreement last year; the two countries’ jointly announced climate commitments were seen as crucial to the agreement’s adoption.</p>
<p>“There are many reasons why it's in the best interests of the U.S. not to withdraw, but now, the big question is, if the U.S. does leave the agreement, who’s going to take up the mantle and drive the implementation process forward? There is a lot of interest in seeing China — but also the EU and others — step forward, helping to fundamentally shape the agreement without U.S. input or interests at its center,” said Davidson. He is also examining how the agreement's provisions on tracking countries’ progress toward collective climate goals will take shape, working with advisors Valerie Karplus and Henry Jacoby, professors at the MIT Sloan School of Management, and is among those helping to ensure that the process will include robust scientific assessments.</p>
<p>Singh, a master’s degree student with IDSS and a fellow with the Tata Center for Technology and Design, is developing an energy-economic model to help inform India’s climate policies and technology choices. He <a href="http://news.mit.edu/2016/arun-singh-inform-indias-climate-policy-choices-1116" target="_self">shared his research at a side event</a> and conducted interviews related to his work as a Tata Fellow and research associate of the Joint Program with advisors Karplus and principal research scientist Niven Winchester.</p>
<p>During COP22, the U.S., Canada, and Mexico announced their 2050 greenhouse gas emissions targets, with the U.S. and Canada each pledging to reduce emissions 80 percent from 2005 levels by 2050, and Mexico pledging to reduce emissions 50 percent from 2000 levels by 2050. The U.S. released its plan in a new report, <a href="https://www.whitehouse.gov/sites/default/files/docs/mid_century_strategy_report-final.pdf" target="_blank">the United States Mid-Century Strategy for Deep Decarbonization</a>, which cited research by Jessika Trancik, an associate professor of energy studies with IDSS, on the “virtuous cycle” of continued clean energy technology development and deployment “in which ambition drives down costs, in turn eliciting greater ambition.”</p>
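The percentage pledges above translate into target levels with simple arithmetic; the helper and the baseline figure below are hypothetical placeholders for illustration, not official inventory data:

```python
def emissions_target(baseline: float, reduction_fraction: float) -> float:
    """Emissions allowed after cutting `reduction_fraction` from a baseline level."""
    return baseline * (1.0 - reduction_fraction)

# With a hypothetical baseline of 100 units, an 80 percent cut
# leaves 20 units, while a 50 percent cut leaves 50 units.
deep_cut_target = emissions_target(100.0, 0.80)
half_cut_target = emissions_target(100.0, 0.50)
```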
<p>In an <a href="https://www.climateinteractive.org/analysis/release-our-analysis-us-mexico-and-canada-set-2050-climate-goals/" target="_blank">analysis of the three nations’ plans</a>, Sterman said, “Our relentlessly shrinking carbon budget means all nations of the world must offer earlier and deeper cuts than they pledged in Paris, and continue to cut emissions through the end of the century. These midcentury strategies should inspire other nations to be even more ambitious. Warming cannot be limited to ‘well below’ 2 C without stronger midcentury commitments from all other nations.”</p>
<p><strong>Committing to continued action</strong></p>
<p>Speaking with news network France24, Sterman reflected on the overarching sentiments at COP22 in the wake of the U.S. election: “The agenda has changed, but what is interesting is that a large number of the parties — the nations here — are asserting that they will continue to reduce their emissions regardless of what the United States may or may not do under the new administration,” he said. “And the civil society groups that are here, representing every aspect of society in the United States and around the world, are committed to redoubling their efforts to build grassroots support for climate action at the community, municipal, and state level.”</p>
<p>At MIT, across the Institute, community members are prepared to keep accelerating climate action in keeping with the <a href="http://news.mit.edu/2015/new-climate-change-strategy-1021" target="_self">Plan for Action on Climate Change</a>.</p>
<p>As Zuber said at the Emerson Collective event, “We can’t just talk about this. We have to lead by example.”</p>
Left to right: Robert Stoner, deputy director of the MIT Energy Initiative and director of the Tata Center for Technology and Design; Maria Zuber, MIT vice president for research; Joi Ito, director of the MIT Media Lab; and John Fernandez, professor of architecture and director of MIT's Environmental Solutions Initiative.
Photo: Emily Dahl/MIT Energy Initiative
Topics: Research, Climate change, Climate, Alternative energy, Policy, Energy, Economics, Global warming, Greenhouse gases, MIT Energy Initiative (MITEI), Renewable energy, Graduate, postdoctoral, Tata Center, IDSS, School of Engineering, School of Architecture and Planning, Sloan School of Management, EAPS, School of Science

Researchers create synthetic cells to isolate genetic circuits
http://news.mit.edu/2016/synthetic-cells-isolate-genetic-circuits-1114
Encapsulating molecular components in artificial membranes offers more flexibility in designing circuits.
Mon, 14 Nov 2016 11:00:00 -0500 | Anne Trafton | MIT News Office

<p>Synthetic biology allows scientists to design genetic circuits that can be placed in cells, giving them new functions such as producing drugs or other useful molecules. However, as these circuits become more complex, the genetic components can interfere with each other, making it difficult to achieve more complicated functions.</p>
<p>MIT researchers have now demonstrated that these circuits can be isolated within individual synthetic “cells,” preventing them from disrupting each other. The researchers can also control communication between these cells, allowing for circuits or their products to be combined at specific times.</p>
<p>“It’s a way of having the power of multicomponent genetic cascades, along with the ability to build walls between them so they won’t have cross-talk. They won’t interfere with each other in the way they would if they were all put into a single cell or into a beaker,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. Boyden is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.</p>
<p>This approach could allow researchers to design circuits that manufacture complex products or act as sensors that respond to changes in their environment, among other applications.</p>
<p>Boyden is the senior author of a paper describing this technique in the Nov. 14 issue of <em>Nature Chemistry</em>. The paper’s lead authors are former MIT postdoc Kate Adamala, who is now an assistant professor at the University of Minnesota, and former MIT grad student Daniel Martin-Alarcon. Katriona Guthrie-Honea, a former MIT research assistant, is also an author of the paper.</p>
<p><strong>Circuit control</strong></p>
<p>The MIT team encapsulated their genetic circuits in droplets known as liposomes, which have a fatty membrane similar to cell membranes. These synthetic cells are not alive but are equipped with much of the cellular machinery necessary to read DNA and manufacture proteins.</p>
<p>By segregating circuits within their own liposomes, the researchers are able to create separate circuit subroutines that could not run in the same container at the same time, but can run in parallel to each other, communicating in controlled ways. This approach also allows scientists to repurpose the same genetic tools, including genes and transcription factors (proteins that turn genes on or off), to do different tasks within a network.</p>
<p>“If you separate circuits into two different liposomes, you could have one tool doing one job in one liposome, and the same tool doing a different job in the other liposome,” Martin-Alarcon says. “It expands the number of things that you can do with the same building blocks.”</p>
<p>This approach also enables communication between circuits from different types of organisms, such as bacteria and mammals.</p>
<p>As a demonstration, the researchers created a circuit that uses bacterial genetic parts to respond to a molecule known as theophylline, a drug similar to caffeine. When this molecule is present, it triggers another molecule known as doxycycline to leave the liposome and enter another set of liposomes containing a mammalian genetic circuit. In those liposomes, doxycycline activates a genetic cascade that produces luciferase, a protein that generates light.</p>
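The demonstration above can be caricatured as a pair of chained gates. This toy model is purely illustrative, capturing only the logic of the cascade and none of the underlying biochemistry:

```python
def bacterial_liposome(theophylline_present: bool) -> bool:
    """Bacterial circuit: release doxycycline only when theophylline is sensed."""
    return theophylline_present

def mammalian_liposome(doxycycline_present: bool) -> bool:
    """Mammalian circuit: produce light-emitting luciferase when doxycycline arrives."""
    return doxycycline_present

def cascade(theophylline_present: bool) -> bool:
    """Chain the two compartments: light is produced only if the input molecule is present."""
    doxycycline = bacterial_liposome(theophylline_present)
    return mammalian_liposome(doxycycline)
```

Because each stage runs in its own liposome, either half could be swapped out (say, for a different sensor) without rewriting the other, which is the modularity the researchers describe.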
<p>Using a modified version of this approach, scientists could create circuits that work together to produce biological therapeutics such as antibodies, after sensing a particular molecule emitted by a brain cell or other cell.</p>
<p>“If you think of the bacterial circuit as encoding a computer program, and the mammalian circuit is encoding the factory, you could combine the computer code of the bacterial circuit and the factory of the mammalian circuit into a unique hybrid system,” Boyden says.</p>
<p>The researchers also designed liposomes that can fuse with each other in a controlled way. To do that, they programmed the cells with proteins called SNAREs, which insert themselves into the cell membrane. There, they bind to corresponding SNAREs found on surfaces of other liposomes, causing the synthetic cells to fuse. The timing of this fusion can be controlled to bring together liposomes that produce different molecules. When the cells fuse, these molecules are combined to generate a final product.</p>
<p><strong>More modularity</strong></p>
<p>The researchers believe this approach could be used for nearly any application that synthetic biologists are already working on. It could also allow scientists to pursue potentially useful applications that have been tried before but abandoned because the genetic circuits interfered with each other too much.</p>
<p>“The way that we wrote this paper was not oriented toward just one application,” Boyden says. “The basic question is: Can you make these circuits more modular? If you have everything mishmashed together in the cell, but you find out that the circuits are incompatible or toxic, then putting walls between those reactions and giving them the ability to communicate with each other could be very useful.”</p>
<p>Vincent Noireaux, an associate professor of physics at the University of Minnesota, described the MIT approach as “a rather novel method to learn how biological systems work.”</p>
<p>“Using cell-free expression has several advantages: Technically the work is reduced to cloning (nowadays fast and easy), we can link information processing to biological function like living cells do, and we work in isolation with no other gene expression occurring in the background,” says Noireaux, who was not involved in the research.</p>
<p>Another possible application for this approach is to help scientists explore how the earliest cells may have evolved billions of years ago. By engineering simple circuits into liposomes, researchers could study how cells might have evolved the ability to sense their environment, respond to stimuli, and reproduce.</p>
<p>“This system can be used to model the behavior and properties of the earliest organisms on Earth, as well as help establish the physical boundaries of Earth-type life for the search of life elsewhere in the solar system and beyond,” Adamala says.</p>
MIT researchers have developed a way to isolate genetic circuits within individual synthetic “cells,” preventing the circuits from disrupting each other.Image: Nick FairResearch, Synthetic biology, Biological engineering, Brain and cognitive sciences, Media Lab, McGovern Institute, School of Engineering, School of Science, School of Architecture and PlanningScene at MIT: A nightmare on Ames Streethttp://news.mit.edu/2016/scene-at-mit-nightmare-on-ames-street-1031
Mon, 31 Oct 2016 00:00:00 -0400MIT News Officehttp://news.mit.edu/2016/scene-at-mit-nightmare-on-ames-street-1031<p>“People are afraid of artificial intelligence, from autonomous cars making unethical decisions in accidents, to robots taking our jobs and causing mass unemployment, to runaway superintelligent machines obliterating humanity. Engineering pioneer and inventor Elon Musk famously said that as we develop AI, we are 'summoning the demon.'</p>
<p>Halloween is a time when people celebrate the things that terrify them. So it seems like a perfect occasion for an MIT project that explores society's fear of AI. And what better way to do this than have an actual AI literally scare us in an immediate, visceral sense? Postdoc Pinar Yanardhag, visiting scientist Manuel Cebrian, and I used a <a href="https://arxiv.org/pdf/1508.06576v2.pdf" target="_blank">recently published</a>, open-source deep neural network algorithm to learn features of a haunted house and apply these features to a picture of the Media Lab.</p>
<p>We also launched the <a href="http://nightmare.mit.edu/" target="_blank">Nightmare Machine</a> website, where people can vote on which AI-generated horror images they find scary; these were generated using the same algorithm, combined with another recent algorithm for generating faces. So far, we've collected over 300,000 individual votes, and the results are clear: the AI demon is here, and it can terrify us. Happy Halloween!”</p>
<p><strong>—Iyad Rahwan, AT&amp;T Career Development Professor and an associate professor of media arts and sciences in the MIT Media Lab</strong></p>
<p><em>Have a creative photo of campus life you'd like to share? <a href="mailto:sceneatmit@mit.edu?subject=Scene%20at%20MIT">Submit</a> to Scene at MIT. </em></p>
The Media Lab has never looked spookier . . . Image: Pinar Yanardhag, Manuel Cebrian, Iyad Rahwan, and Andy Ryan.Scene at MIT, Artificial intelligence, Media Lab, Computer science and technology, Algorithms, School of Architecture and PlanningNew faculty, promotions, and leadership roles in the School of Architecture and Planninghttp://news.mit.edu/2016/new-faculty-promotions-leadership-school-of-architecture-and-planning-1028
Four join the SA+P faculty, while seven are recognized for work in art, architecture, and urbanism.Fri, 28 Oct 2016 17:00:01 -0400School of Architecture and Planninghttp://news.mit.edu/2016/new-faculty-promotions-leadership-school-of-architecture-and-planning-1028<p>The School of Architecture and Planning has announced that seven faculty members have been promoted, granted tenure, or given significant new roles.</p>
<p>In addition, four new professors have joined the school in the Department of Architecture and the Program in Media Arts and Sciences. Their research ranges from architectural design to self-assembling materials to genetic engineering.</p>
<p>“This group adds considerable strength to our faculty,” says Hashim Sarkis, dean of the School&nbsp;of Architecture and Planning. “As individual practitioners and researchers, each brings&nbsp;a high level of creativity, imagination, and rigor to our capabilities. As a group, they offer new dimensions to our teaching and research explorations.”</p>
<p><strong>Recently promoted faculty</strong></p>
<p><a href="https://architecture.mit.edu/faculty/azra-aksamija" target="_blank">Azra Akšamija</a><strong>,</strong> an artist and architectural historian, has been promoted to associate professor without tenure in the Program in Art, Culture and Technology of the Department of Architecture, where she has taught since 2012. Her artistic work provides a framework for researching, analyzing, and intervening in contested sociopolitical realities. Her academic research focuses on the politics of cultural memory and the 1990s Yugoslav wars.&nbsp;Her book, “<a href="https://architecture.mit.edu/publication/mosque-manifesto-propositions-spaces-coexistence" target="_blank">Mosque Manifesto</a>,” (Revolver, 2015) explores transcultural aesthetics and cultural mobility in the context of Islam in the West.&nbsp;Akšamija&nbsp;holds master’s degrees from the Technical University Graz and Princeton University, and a PhD from MIT.&nbsp;Her work has been shown in the Generali Foundation Vienna, Liverpool Biennial, Sculpture Center New York, Secession Vienna, Manifesta 7, the Royal Academy of Arts London, Queens Museum, and the 54th Venice Biennale. She received the Aga Khan Award in 2013 for her prayer space design in the Islamic Cemetery in Altach, Austria.</p>
<p><a href="https://dusp.mit.edu/faculty/brent-d-ryan" target="_blank">Brent D. Ryan</a> has been promoted to associate professor of urban design and public policy with tenure in the Department of Urban Studies and Planning, where he was assistant professor from 2009 and associate professor without tenure from 2013. As head of the <a href="https://dusp.mit.edu/cdd/program/overview" target="_blank">City Design and Development Group</a>, he examines the aesthetics and practice of contemporary urban design, particularly in postindustrial cities and neighborhoods. Ryan is author of “Plural Urbanism” (MIT Press, forthcoming) and&nbsp;“<a href="http://www.upenn.edu/pennpress/book/14995.html" target="_blank">Design after Decline: How America Rebuilds Shrinking Cities</a>” (University of Pennsylvania Press, 2012), as well as a number of journal articles and contributions to edited volumes. Ryan taught at the Harvard Graduate School of Design and the University of Illinois at Chicago, where he was also co-director of the City Design Center. Ryan holds a BS in biology from Yale University, an MArch from Columbia University, and a PhD in urban design and planning from MIT.</p>
<p><a href="https://architecture.mit.edu/faculty/kristel-smentek" target="_blank">Kristel Smentek</a>, a historian of 18th-century European art and design with specializations in the history of collecting, the art market, and the European encounter with Asia, has been named associate professor with tenure in the History, Theory, and Criticism of Architecture and Art Program in the Department of Architecture. An assistant and associate professor at MIT since 2008, Smentek holds a BA from McGill University and an MA and PhD from the University of Delaware, all in art history. She has published extensively, including “<a href="https://www.routledge.com/Mariette-and-the-Science-of-the-Connoisseur-in-Eighteenth-Century-Europe/Smentek/p/book/9781472438027" target="_blank">Mariette and the Science of the Connoisseur in Eighteenth-Century Europe</a>” (Ashgate, 2014). She has received numerous fellowships and awards, and has curated several exhibitions. Smentek’s teaching includes courses on European art from the Renaissance to the present, 18th- and 19th-century European painting, ornament from the Rococo to the 1920s, the history and theory of the art museum, and the history of design.</p>
<p><strong>Faculty receiving new roles or titles</strong></p>
<p><a href="https://dusp.mit.edu/faculty/alan-berger" target="_blank">Alan Berger</a>, co-director of the <a href="http://cau.mit.edu/" target="_blank">Norman B. Leventhal Center for Advanced Urbanism</a>, has been named the Norman B. and Muriel Leventhal Professor of Advanced Urbanism. The founding director of <a href="https://dusp.mit.edu/cdd/project/p-rex" target="_blank">P-REX lab</a>, a research unit focused on environmental problems caused by urbanization, Berger studies the link between our consumption of natural resources and the waste and destruction of landscapes worldwide. He uses the term “systemic design” to describe the reintegration of waste and disvalued landscapes into our urbanized territories and regional ecologies. His books include “Infinite Suburbia” (forthcoming, 2017) and the award-winning “Drosscape: Wasting Land in Urban America” (2006) and “Reclaiming the American West” (2002), all from Princeton Architectural Press. Prior to coming to MIT in 2008, he was associate professor of landscape architecture at the Harvard Graduate School of Design. He holds a BS in agriculture/horticulture from the University of Nebraska at Lincoln and an MLA in landscape architecture from the University of Pennsylvania.</p>
<p><a href="https://dusp.mit.edu/faculty/phillip-clay" target="_blank">Phillip L. Clay</a>, retired professor in the Department of Urban Studies and Planning (DUSP), has been named advisor to the Dean of the School of Architecture and Planning, where he served on the faculty beginning in 1976. A graduate of the University of North Carolina at Chapel Hill, Clay holds a doctorate from MIT. He served as MIT Chancellor from 2001 to 2011 and held other leadership positions at the Institute; he was also department head of DUSP, where he taught courses in <a href="https://dusp.mit.edu/user/2103/subjects" target="_blank">housing policy and poverty</a>. Clay is widely known for his work in U.S. housing policy and urban development. His current interests include organizational capacity in community-based nonprofits as well as the role of anchor institutions. Based on his work on MIT international strategies, he is also interested in the increasing role higher education can play in national development planning in less developed and emerging nations. His work now focuses on <a href="https://www.youtube.com/watch?v=VZH-ALaQdcI" target="_blank">higher education in Africa</a>.</p>
<p><a href="https://dusp.mit.edu/faculty/dennis-frenchman" target="_blank">Dennis Frenchman</a> has been named the Class of 1922 Professor of Urban Design and Planning in the Department of Urban Studies and Planning and is the inaugural SA+P faculty director of the <a href="http://designx.mit.edu/" target="_blank">DesignX</a> entrepreneurship accelerator. He is also on the faculty of the Center for Real Estate, where he founded (with David Geltner and Andrea Chegut) the new <a href="http://realestateinnovationlab.mit.edu/" target="_blank">Real Estate Innovation Lab</a>. Frenchman is a registered architect and founder of ICON architecture in Boston, an international architecture and urban design firm. His practice and research focus on the transformation of cities; he is an expert on the application of digital technology to city design and led MIT research efforts to develop new models for clean energy urbanization in China. Frenchman holds a BA in architecture from the University of Cincinnati and an MArch and MCP from MIT, where he has taught since 1983.</p>
<p><a href="https://architecture.mit.edu/faculty/james-wescoat" target="_blank">James Wescoat</a> has been appointed co-director of the <a href="http://cau.mit.edu/" target="_blank">Norman B. Leventhal Center for Advanced Urbanism</a>. Since arriving at MIT in 2008, he has served as the Aga Khan Professor in the <a href="https://akpia.mit.edu/" target="_blank">Aga Khan Program for Islamic Architecture</a>, within the Department of Architecture. His research has concentrated on water systems in South Asia and the United States, including water research with the Tata Center for Technology and Design. His publications include “Water for Life: Water Management and Environmental Policy” (with Gilbert F. White, Cambridge University Press, 2003). Wescoat has also conducted research on historical waterworks of Mughal gardens and cities in India and Pakistan. He previously headed the Department of Landscape Architecture at the University of Illinois at Urbana-Champaign and has taught at the University of Colorado and the University of Chicago. He earned a BA in landscape architecture from Louisiana State University and an MA and PhD in geography from the University of Chicago.</p>
<p><strong>New faculty members</strong></p>
<p><a href="https://architecture.mit.edu/faculty/brandon-clifford" target="_blank">Brandon Clifford</a> has been appointed assistant professor in the Department of Architecture, where as Belluschi Lecturer since 2012 he has taught and conducted research, including the recent <a href="http://news.mit.edu/2015/designing-across-disciplines-1007" target="_blank">McKnelly Megalith</a> and <a href="http://news.mit.edu/2016/scene-at-mit-buoy-stone-0621" target="_self">Buoy Stone</a> projects. He received a BS in architecture from Georgia Tech and an MArch from Princeton University. From 2006 to 2009, he worked as project manager at Office dA in Boston and New York. Clifford was the 2011-2012 LeFevre Emerging Practitioner Fellow at The Ohio State University’s Knowlton School of Architecture. In 2008 he founded the award-winning practice <a href="http://www.matterdesignstudio.com/" target="_blank">Matter Design</a> with Wes McGee. His work has garnered inclusion in the Design Biennial Boston and won the Architectural League Prize for Young Architects and Designers as well as the prestigious SOM Prize, which launched his ongoing research into volumetric architecture. Clifford’s work is focused on reimagining the role of the architect in the digital era.</p>
<p><a href="https://www.media.mit.edu/people/esvelt" target="_blank">Kevin Esvelt</a> has been named assistant professor of media arts and sciences. He leads the MIT Media Lab’s <a href="https://www.media.mit.edu/research/groups/sculpting-evolution" target="_blank">Sculpting Evolution</a> research group, which invents new ways to study and influence the evolution of ecosystems for the benefit of humanity and the natural world. Before joining the Media Lab in January, Esvelt wove many areas of science into novel approaches to ecological engineering. He invented phage-assisted continuous evolution (PACE), a synthetic microbial ecosystem for rapidly evolving biomolecules, in the laboratory of David R. Liu at Harvard University. At the Wyss Institute, he worked with George Church to develop the CRISPR system, including its use in gene drives and their safeguards. He received BA degrees in biology and chemistry from Harvey Mudd College and a PhD in biochemistry from Harvard. He is a winner of the Harold M. Weintraub Award, the Hertz Foundation Thesis Prize, and the NIH K99, and was among the <em>MIT Technology Review</em> 35 Innovators Under 35 in 2016.</p>
<p><a href="https://architecture.mit.edu/faculty/sheila-kennedy" target="_blank">Sheila Kennedy</a> has been appointed professor in the Department of Architecture.&nbsp;Kennedy received a BA in history, philosophy, and literature from Wesleyan University and studied architecture at the École Nationale Supérieure des Beaux-Arts in Paris. She received her Master of Architecture from the Graduate School of Design at Harvard University, where she graduated with distinction — the school’s highest academic honor — and received the SOM National Traveling Fellowship. With her partner Juan Frano Violich, Kennedy is a founding principal of <a href="http://www.kvarch.net/" target="_blank">KVA Matx</a>, an interdisciplinary professional practice that is widely recognized for innovation in architecture, research on the evolving culture of materials, and the design of resilient, “soft” infrastructure and public space. Kennedy’s work in practice has received Progressive Architecture Awards and American Institute of Architects National Design Excellence Awards for built work in the United States and abroad. Kennedy received a 2014 Holcim Foundation Design Award, the 2014 Design Innovator Award, and the 2014 Berkeley-Rupp Prize of $100,000. She is a recipient of the inaugural 2016 American Architecture Prize for her design work with digital brick in the <a href="http://www.kvarch.net/projects/83" target="_blank">Tozzer Anthropology Building</a>. Kennedy’s design work has been exhibited at the Venice Biennale, MoMA, the National Design Museum, the Rotterdam Biennale, the Vitra Design Museum, and the TED conference in California. Her work has been widely published and has been featured by National Public Radio, BBC World News, CBS News, The Discovery Channel, CNN Principal Voices, <em>Wired, The Economist,</em> and <em>The New York Times.</em></p>
<p><a href="https://architecture.mit.edu/faculty/skylar-tibbits" target="_blank">Skylar Tibbits</a> has been named assistant professor in the Department of Architecture, where he has been a lecturer and research scientist since 2010, teaching graduate and undergraduate design studios and co-teaching MAS.863/4.140 (How to Make (Almost) Anything), a seminar at MIT’s Media Lab. He directs the <a href="http://www.selfassemblylab.net/" target="_blank">MIT Self-Assembly Lab</a>, which focuses on programmable material technologies for novel manufacturing, products, and construction processes. Tibbits has a professional degree in architecture with a minor in experimental computation from Philadelphia University. At MIT, he received an SMArchS in design and computation and an MS in computer science. Tibbits has worked at design offices including Zaha Hadid Architects, Asymptote Architecture, and Point b Design. He has designed and built large-scale installations at galleries around the world, and his work has been published extensively. In 2007, Tibbits founded a multidisciplinary design practice, <a href="http://www.sjet.us/" target="_blank">SJET</a>. He was awarded a 2013 Architectural League Prize, among other honors.</p>
First row: (l-r) Azra Akšamija, Phillip Clay, Dennis Frenchman, Skylar Tibbits. Second row: (l-r) James Wescoat, Sheila Kennedy, Alan Berger, Kristel Smentek. Third row: (l-r) Brandon Clifford, Brent Ryan, Kevin Esvelt.Faculty, Awards, honors and fellowships, Architecture, Urban studies and planning, Real estate, Media Lab, Arts, Culture and Technology, Arts, Design, School of Architecture and PlanningQuantifying urban revitalizationhttp://news.mit.edu/2016/quantifying-urban-revitalization-1024
Combining cellphone data with perceptions of public spaces could help guide urban planning.
Mon, 24 Oct 2016 00:00:01 -0400Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/quantifying-urban-revitalization-1024<p>For years, researchers at the MIT Media Lab have been developing a <a href="http://pulse.media.mit.edu/">database of images</a> captured at regular distances around several major cities. The images are scored according to different visual characteristics — how safe the depicted areas look, how affluent, how lively, and the like.</p>
<p>In a <a href="http://arxiv.org/pdf/1608.00462v1.pdf">paper</a> they presented last week at the Association for Computing Machinery’s Multimedia Conference, the researchers, together with colleagues at the University of Trento and the Bruno Kessler Foundation, both in Trento, Italy, compared the safety scores of neighborhoods in Rome and Milan with the frequency with which people visited those places, according to cellphone data.</p>
<p>Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, and it was particularly strong for women and people over 50. For people under 30, the correlation was negative: men in their 20s were actually more likely to visit neighborhoods generally perceived to be unsafe than ones perceived to be safe.</p>
<p>In the same paper, the researchers also identified several visual features that are highly correlated with judgments that a particular area is safe or unsafe. Consequently, the work could help guide city planners in decisions about how to revitalize declining neighborhoods.</p>
<p>“There’s a big difference between a theory and a fact,” says Luis Valenzuela, an urban planner and professor of design at Universidad Adolfo Ibáñez in Santiago, Chile, who was not involved in the research. “What this paper does is put the facts on the table, and that’s a big step. It also opens up the ways in which we can build toward establishing the facts in different contexts. It will bring up a lot of other research, in which, I don’t have any doubt, this will be put up as a seminal step.”</p>
<p>Valenzuela is particularly struck by the researchers’ demographically specific results. “That, I would say, is quite a big breakthrough in urban-planning research,” he says. “Urban planning — and there’s a lot of literature about it — has been largely designed from a male perspective. ... This research gives scientific evidence that women have a specific perception of the appearance of safety in the city.”</p>
<p><strong>Corroborations</strong></p>
<p>“Are the places that look safer places that people flock into?” asks César Hidalgo, the Asahi Broadcast Corporation Career Development Associate Professor of Media Arts and Sciences and one of the senior authors on the new paper. “That should connect with actual crime because of two theories that we mention in the introduction of the paper, which are the defensible-space theory of Oscar Newman and Jane Jacobs’ eyes-on-the-street theory.” Hidalgo is also the director of the Macro Connections group at MIT.</p>
<p>Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny.</p>
<p>The researchers caution that they are not trained as urban planners, but they do feel that their analysis identifies some visual features of urban environments that contribute to perceptions of safety or unsafety. For one thing, they think the data support Jacobs’ theory: Buildings with street-facing windows appear to increase people’s sense of safety much more than buildings with few or no street-facing windows. And in general, upkeep seems to matter more than distinctive architectural features. For instance, everything else being equal, green spaces increase people’s sense of safety, but poorly maintained green spaces lower it.</p>
<p>Joining Hidalgo on the paper are Nikhil Naik, a PhD student in media arts and sciences at MIT; Marco De Nadai, a PhD student at the University of Trento; Bruno Lepri, who heads the Mobile and Social Computing Lab at the Kessler Foundation; and five of their colleagues in Trento. Both De Nadai and Lepri are currently visiting scholars at MIT.</p>
<p>Hidalgo’s group launched its project to quantify the emotional effects of urban images in 2011, with a website that presents volunteers with pairs of images and asks them to select the one that ranks higher according to some criterion, such as safety or liveliness. On the basis of these comparisons, the researchers’ system assigns each image a score on each criterion.</p>
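Scores can be derived from such pairwise votes with a standard paired-comparison model. A minimal sketch, assuming a simple logistic (Bradley-Terry-style) update; the function name, learning rate, and update rule are illustrative, not the researchers' actual method:

```python
import math
from collections import defaultdict

def score_images(comparisons, iters=100, lr=0.05):
    """comparisons: iterable of (winner_id, loser_id) votes.
    Returns a score per image; higher = chosen more often for the criterion."""
    scores = defaultdict(float)  # every image starts at 0
    for _ in range(iters):
        for winner, loser in comparisons:
            # Probability the current scores assign to the observed vote
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Nudge both scores toward explaining the vote (logistic gradient)
            scores[winner] += lr * (1.0 - p_win)
            scores[loser] -= lr * (1.0 - p_win)
    return dict(scores)
```

With enough votes, images that consistently win comparisons drift upward and consistent losers drift downward, yielding a ranking on each criterion.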
<p>So far, volunteers have performed more than 1.4 million comparisons, but that’s still not nearly enough to provide scores for all the images in the researchers’ database. For instance, the images in the data sets for Rome and Milan were captured every 100 meters or so. And the database includes images from 53 cities.</p>
<p><strong>Automations </strong></p>
<p>So three years ago, the researchers began using the scores generated by human comparisons to train a machine-learning system that would assign scores to the remaining images. “That’s ultimately how you’re able to take this type of research to scale,” Hidalgo says. “You can never scale by crowdsourcing, simply because you’d have to have all of the Internet clicking on images for you.”</p>
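The scaling step amounts to supervised learning: fit a model on the human-scored images, then apply it to the unscored ones. In this sketch, a plain linear model over precomputed feature vectors, fit by gradient descent, stands in for whatever image features and learner the group actually used:

```python
def fit_linear(features, scores, lr=0.01, epochs=500):
    """Fit y ~ b + w.x by stochastic gradient descent.
    features: list of equal-length feature vectors; scores: human labels."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, scores):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Score an image from its feature vector using the fitted model."""
    return b + sum(wi * xi for wi, xi in zip(w, x))
```

Once fitted on the ~1.4 million crowdsourced comparisons' worth of labels, such a model can score every remaining image in the 53-city database without further human input.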
<p>The cellphone data, which was used to determine how frequently people visited various neighborhoods, was provided by Telecom Italia Mobile and identified only the cell towers to which users connected. The researchers mapped the towers’ broadcast ranges onto the geographic divisions used in census data, and compared the number of people who made calls from each region with that region’s aggregate safety scores. They adjusted for population density, employee density, distance from the city center, and a standard poverty index.</p>
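"Adjusting for" covariates of this kind amounts to a partial correlation: regress both quantities on the covariate and correlate the residuals. A single-covariate sketch (the study controlled for several covariates at once; the variable names are illustrative):

```python
def _mean(xs):
    return sum(xs) / len(xs)

def _residuals(y, x):
    """Residuals of a simple least-squares fit y ~ a + b*x."""
    mx, my = _mean(x), _mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = _mean(x), _mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x) *
           sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

def adjusted_correlation(safety, visits, density):
    """Correlation of safety scores and visit counts, net of density."""
    return pearson(_residuals(safety, density), _residuals(visits, density))
```

Residualizing first prevents a confounder (dense areas get more visits regardless of appearance) from masquerading as an effect of perceived safety.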
<p>To determine which features of visual scenes correlated with perceptions of safety, the researchers designed an algorithm that selectively blocked out apparently continuous sections of images — sections that appear to have clear boundaries. The algorithm then recorded the changes to the scores assigned the images by the machine-learning system.</p>
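The masking procedure can be sketched as follows; the image representation, rectangular region format, and `score_fn` are assumptions standing in for the researchers' segmentation and machine-learning scorer:

```python
def occlusion_importance(image, regions, score_fn, fill=0):
    """For each region (x0, y0, x1, y1), return the score drop when it is masked.

    image: 2-D list of pixel values; score_fn: image -> safety score.
    A large positive drop suggests the region contributed to perceived safety.
    """
    base = score_fn(image)
    importance = {}
    for (x0, y0, x1, y1) in regions:
        masked = [row[:] for row in image]  # copy so the original is untouched
        for y in range(y0, y1):
            for x in range(x0, x1):
                masked[y][x] = fill  # block out the region
        importance[(x0, y0, x1, y1)] = base - score_fn(masked)
    return importance
```

Ranking regions by score drop points to the visual elements — street-facing windows, well-kept green space — that most influence the model's judgment.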
Researchers used sample images, like the ones on the top row, to identify several visual features that are highly correlated with judgments that a particular area is safe or unsafe. The left side shows a low level of safety while the right shows a high level. Highlighted areas on the middle row show “unsafe” areas while the bottom row shows “safe” areas in the image.
Courtesy of the researchersResearch, School of Architecture and Planning, Artificial intelligence, Big data, Computer science and technology, Crowdsourcing, Machine learning, Media Lab, Women, Urban studies and planning, Algorithms, CitiesEntrepreneurship by designhttp://news.mit.edu/2016/entrepreneurship-designx-accelerator-1017
New DesignX accelerator speeds innovation at the School of Architecture and Planning.Mon, 17 Oct 2016 14:05:02 -0400Tom Gearty | School of Architecture and Planninghttp://news.mit.edu/2016/entrepreneurship-designx-accelerator-1017<p>The School of Architecture and Planning (SA+P) has launched a new entrepreneurship accelerator, <a href="http://designx.mit.edu/" target="_blank">DesignX</a>, to cultivate and assist students and faculty developing products, systems, and companies that focus on design and the built environment.</p>
<p>“Design is the art and science of improving the interface between human beings and their environment,” says Hashim Sarkis, dean of the School of Architecture and Planning. “DesignX will support entrepreneurs who work in this arena and help them speed their innovations to the marketplace.”</p>
<p>Nicknamed “DESx,” the program launches this fall to provide SA+P students with a structured set of classes, mentorship, seed funding, research, and links to the global network of SA+P alumni entrepreneurs. The centerpiece of the program is a four-month, for-credit accelerator workshop offered during the January Independent Activities Period (IAP) and the spring semester. To help students make the critical leap from project to startup, participants will have the chance to connect with external investors and industry partners to make their pitches.</p>
<p>“Innovators and entrepreneurs in SA+P draw upon broad strengths in design-oriented disciplines and deep expertise in the built environment, including architecture, planning, art, media, and real estate development,” says Dennis Frenchman, the Class of 1922 Professor of Urban Design and Planning and faculty director of the program. “Introducing DesignX into MIT’s ecosystem for entrepreneurship will offer these individuals resources specifically tailored to develop, scale, and deliver their ideas for real-world impact.”</p>
<p>The DESx program is structured in three phases characterized as “learn, launch, and leap.” Individuals or teams — which can include faculty members and which must have at least one member from SA+P — will:</p>
<ul>
<li>Learn: In the fall semester, student team members prepare for the program by selecting from a short list of elective courses in entrepreneurship and innovation offered across MIT. At the end of the semester, they apply to the program by pitching their idea to a committee including a range of design professionals, entrepreneurs, and faculty drawn from each of SA+P’s departments.</li>
<li>Launch: Selected participants receive up to $15,000 in seed funding and must enroll in the DESx accelerator workshop in the spring semester. The accelerator is structured to help teams develop their design concepts and create viable business plans and prototypes with the aid of a tailored entrepreneurial curriculum, dedicated mentors, and a supportive industry network.</li>
<li>Leap: Participants conclude the accelerator workshop with a pitch to outside investors or other potential partners — and start their post-MIT careers equipped to launch and scale their ventures.</li>
</ul>
<p>“DESx will integrate the initial stages of building a startup into participants’ education, so that students make rapid progress while they are at MIT and are positioned to succeed as they take their first steps toward entrepreneurship,”&nbsp;says Gilad Rosenzweig, the program’s executive director.</p>
<p>A signature component of DESx is research on innovation itself. A team of researchers led by Andrea Chegut, research scientist in the MIT Center for Real Estate and director of the MIT Real Estate Innovation Lab, is studying the landscape for startup companies that emerge from SA+P, and globally, to understand what works — and what doesn’t — when deploying innovations in cities and design.</p>
<p>“A recent study of MIT alumni shows that more than 1,200 companies have already come out of the School of Architecture and Planning. We’re studying who those firms are, what makes them tick, and what makes them distinct from other types of businesses,” Chegut says. “We’ll apply this knowledge to DESx, to help our entrepreneurs understand the nuts and bolts they need to form successful organizations.”</p>
<p>SA+P firms contribute to both the physical and digital realms, and the impact of these firms can be felt worldwide, says Matthew Claudel, a PhD student in the Department of Urban Studies and Planning and head of partnerships for the DESx team. Claudel cites recently founded SA+P firms that are producing innovation-driven enterprises across industries, such as <a href="http://www.courbanize.com/" target="_blank">coUrbanize</a>, a digital platform for linking cities, communities, and developers; <a href="http://coeio.com/" target="_blank">Coeio</a>, green burial products for people and pets; <a href="https://neighborly.com/" target="_blank">Neighborly</a>, a crowdsourced investment platform for municipal debt; <a href="https://www.mapdwell.com/en" target="_blank">Mapdwell</a>, an architectural and geographic information system solution for finding solar panel hotspots on building roofs; and <a href="https://www.eyenetra.com/" target="_blank">EyeNetra</a>, which produces smartphone-powered devices for eye testing.</p>
<p>The innovation landscape has changed rapidly in recent years, explains Frenchman, who started his own design firm three years after graduating from MIT in the 1970s with dual master’s degrees in architecture and city planning. Disappearing are the days when a young graduate in design, architecture, media, art, or planning would work as an intern for years before gaining acceptance into a profession or venturing out to start his or her own company. “Cities and circumstances are changing so fast that, in a sense, we’re all professionals now. Faculty, students, companies, and communities are co-creating new solutions to problems,” he says.</p>
<p>Beyond providing a platform for innovation, DESx aims to foster the sense of social responsibility that motivates many SA+P students.</p>
<p>“When you’re in the complexity of a city, you’re trying to invent with so many different stakeholders at once. You’re dealing with people’s lives,” Chegut says. “The vision for DESx is to create a place that’s inventive and creative from a design focus, but that’s also grounded in responsible entrepreneurship.”</p>
<p>In some ways, DESx seeks to harness the innovative social spirit that already surrounds SA+P, says Frenchman.</p>
<p>“Our students and alumni are passionate. Each of them wants to make the world better, more functional, but also more just and equitable,” says Frenchman. “And through the companies they’ve founded, our SA+P graduates have improved a lot of lives. DESx will build upon this remarkable tradition.”</p>
The team behind DesignX, a new design entrepreneurship accelerator from the MIT School of Architecture and Planning: (left to right) Gilad Rosenzweig, executive director for DesignX; Dennis Frenchman, the Class of 1922 Professor of Urban Design and Planning and faculty director of the program; Andrea Chegut, research scientist in the MIT Center for Real Estate and director of the MIT Real Estate Innovation Lab; and Matthew Claudel, a PhD student in the Department of Urban Studies and Planning and head of partnerships for DesignX.Photo: Tom Gearty/School of Architecture and PlanningStartups, School of Architecture and Planning, Center for Real Estate, Urban studies and planning, Architecture, Media Lab, Arts, Culture and Technology, Innovation and Entrepreneurship (I&E), Innovation Initiative, Classes and programs, Design, Cities, Mentoring, Education, teaching, academics, Entrepreneurship, Undergraduate, Graduate, postdoctoral, FundingProfessor Emeritus Whitman Richards dies at 84http://news.mit.edu/2016/professor-emeritus-whitman-richards-dies-1017
Longtime professor and beloved advisor was known for advances in experimental and theoretical studies of vision, perception, and cognition.Mon, 17 Oct 2016 13:30:01 -0400Department of Brain and Cognitive Scienceshttp://news.mit.edu/2016/professor-emeritus-whitman-richards-dies-1017<p>Whitman Richards '53, PhD '65, professor emeritus of cognitive sciences and of media arts and sciences and principal investigator in the Computer Science and Artificial Intelligence Laboratory, died on Sept. 16 after a long battle with myelofibrosis. One of the first four PhD graduates of the Department of Brain and Cognitive Sciences (BCS), Richards spent more than 60 years at MIT, a tenure marked by a dedication to the experimental and theoretical study of vision, perception, and cognition.</p>
<p>Richards began his affiliation with MIT as an undergraduate, matriculating in 1950. His decision to return to MIT for graduate work was greatly inspired by a meeting with BCS founder and then department head Professor Hans-Lukas Teuber.</p>
<p>“In the 1960s, with the advent of accessible computer technology, the development of information theory, and the single electrode, there was renewed excitement about prospects for modeling and understanding mind and brain,” Richards said in a 2004 interview. “Teuber’s charisma and broad vision for a new psychology was a powerful draw [to the department]. …There was a unique opportunity for a non-traditional grounding in a discipline otherwise mired in tradition.”</p>
<p>Richards’ early research pursued traditional psychophysical experimental methods to study the mechanisms of color perception and stereovision. In the 1970s, his research direction and methodology shifted dramatically after meeting noted physiologist David Marr, whom he eventually recruited to MIT. Instead of relying on the traditional experimental methods that had characterized his early career, Richards, Marr, and colleagues began to look for the deep, underlying mathematical principles that allowed a human or artificial visual system to look at the world and make accurate inferences about what the system saw or perceived.</p>
<p>“The breadth of his research was really quite remarkable,” says Josh Tenenbaum, MIT professor of computational cognitive science and former Richards graduate student. “As his career developed, he transitioned from studying the parts of vision that are very close to neural mechanisms, to computational representations of perception, to Bayesian statistical models of perception and cognition. He became almost a computational social scientist — he was incredibly flexible in his thinking.”</p>
<p>Richards’ passionate advocacy for the computational approach to studying visual perception helped to create and nurture the department’s early computational research initiatives.</p>
<p>“Whit’s connection with David Marr back in the late '70s is really the genesis of modern computational social science today,” says MIT Professor Alex Pentland, the Toshiba Professor of Media Arts and Science and a former Richards graduate student.</p>
<p>Alongside his impressive research legacy, which includes the publication of eight books and over 200 articles, Richards was also regarded by his students and colleagues as a superlative mentor. Many of his former students have found success in a variety of different fields, including psychology, cognitive science, computer science, media, computer graphics, and the defense industry.</p>
<p>“Whitman was an incredibly dedicated advisor. His strategy was to have very few students and make a huge personal investment in each of them,” says John Rubin, a former graduate student of Richards and current executive producer with Tangled Bank Studios at the Howard Hughes Medical Institute. “He was really great at keeping enthusiasm high in his lab, which took all kinds of forms, but included croquet parties at his home, which were terrifically fun. He was always available and, in fact, it was hard for me to keep up with the amount of time he wanted to devote to our joint work! He was indefatigable and devoted.”</p>
<p>Richards is survived by his wife of 54 years, Waltraud Weller Richards, and three daughters: Diana Richards Doyle and husband Mark S. Doyle of Green Cove Springs, Florida; Sylvia Richards-Gerngross and husband Tillman Gerngross of Hanover, New Hampshire; and Eleanor "Nora" Richards Bender and husband Thomas A. Bender of Dedham, Massachusetts. He is also survived by his two siblings: Lincoln K. Richards and wife Gerda of Wellesley, Massachusetts, and Sylvia Richards Messner of Cave Creek, Arizona; and by two grandchildren, Morgan Kelly Doyle and Serafina Richards-Gerngross. Memorial services will be private.</p>
MIT Professor Emeritus Whitman RichardsPhoto: Webb Chappell/MIT Media LabFaculty, Obituaries, Brain and cognitive sciences, Media Lab, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Engineering, School of Architecture and Planning, Alumni/aePresident Obama discusses artificial intelligence with Media Lab Director Joi Itohttp://news.mit.edu/2016/president-obama-discusses-artificial-intelligence-media-lab-joi-ito-1014
One-on-one conversation in WIRED focuses on advancements in artificial intelligence and how society should respond to related concerns.Fri, 14 Oct 2016 17:45:01 -0400MIT Media Labhttp://news.mit.edu/2016/president-obama-discusses-artificial-intelligence-media-lab-joi-ito-1014<p>When President Barack Obama agreed to guest-edit the November issue of <em>WIRED</em>, he selected MIT Media Lab Director Joi Ito for an exchange of ideas about artificial intelligence (AI). Their recent interview at the White House is featured in the latest online issue of <em><a href="http://www.wired.com/" target="_blank">WIRED</a>, </em>published on Oct. 12.</p>
<p>The <a href="http://www.wired.com/2016/10/president-obama-mit-joi-ito-interview" target="_blank">one-on-one conversation</a>, moderated by <em>WIRED</em> Editor-in-Chief<em> </em>Scott Dadich, ran the gamut of topics at the intersection of societal needs, ethics, and technology — from cybersecurity to self-driving cars; from the roles of government, industry, and academia to the lack of diversity in tech; from “moonshot” motivations to innovation at the margins; and from neurodiversity to <em>Star Trek</em>. All this was covered in the context of AI and <a href="http://www.pubpub.org/pub/extended-intelligence?version=12" target="_blank">extended intelligence</a> (EI), which uses machine learning to augment human capabilities.</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>It’s a societal thing</strong></p>
<p>Ito says his overarching message for Obama in their conversation was that AI — and the space it occupies — is no longer just a computer science issue. “It’s also very much a societal thing.” And we shouldn’t underestimate the difficulties, he adds. “We can’t think that machines will just figure it all out for us. Everyone needs to recognize the importance of understanding how AI behaves, and we have to address the critical need to build societal values into AI.” Ito is encouraged by what he characterizes as the president’s “<a href="http://www.whitehouse.gov/administration/eop/ostp/divisions/cto" target="_blank">amazing team</a>,” which includes U.S. Chief Technology Officer Megan Smith ’86, SM ’88, along with Deputy U.S. CTOs Alexander Macgillivray, formerly a lawyer for Twitter, and Ed Felten of Princeton University.</p>
<p>“And the role of the Media Lab is to be a connective tissue between computer science, and the social sciences, and the lawyers, and the philosophers," says Ito. "What’s cool is that President Obama gets that.”</p>
<p>“Where should the center of research live, if there even is a center?” Dadich asked both Obama and Ito. In the <a href="http://www.wired.com/2016/10/president-obama-mit-joi-ito-interview" target="_blank">article</a>, the president noted that “part of the problem that we’ve seen is that our general commitment as a society to basic research has diminished. Our confidence in collective action has been chipped away, partly because of ideology and rhetoric.” Ito said, “Historically, it probably would have been a group of academics with help from a government. But right now, most of the billion-dollar labs are in the business.” He later explained that the Media Lab works in the space between the disciplines — the antidisciplinary space between humans and computers, and between networks and society: “MIT has been and continues to be a center for AI research, and now we see the need for a more fully integrated approach, cutting across all disciplines. I hope the Media Lab can make a meaningful contribution to that with our method and experience.”</p>
<p><strong>What’s next?</strong></p>
<p>Above all, says Ito, “What’s important is to find the people who want to use AI for good — communities and leaders — and figure out how to help them use it.” To that end, Ito says, the MIT Media Lab is committed to not only exploring the technology of AI and EI but also addressing their ramifications for humankind.</p>
<p>“We’re already talking about these issues with people across the full spectrum of society,” Ito adds. “This is a crucial area that cannot be ignored.”</p>
MIT Media Lab Director Joi Ito (left), WIRED Editor-in-Chief Scott Dadich (center), and U.S. President Barack Obama confer in the Roosevelt Room of the White House.Photo: WIREDPresident Obama, Artificial intelligence, Technology and society, Machine learning, Media Lab, School of Architecture and Planning, FacultyInspiring the next generation of scientists and engineers with Boston STEM Weekhttp://news.mit.edu/2016/inspiring-scientists-and-engineers-boston-stem-week-1007
MIT collaborates with Boston Public Schools to bring hands-on curricula to local students.Fri, 07 Oct 2016 12:20:01 -0400Office of Digital Learninghttp://news.mit.edu/2016/inspiring-scientists-and-engineers-boston-stem-week-1007<p><a href="http://bostonstemweek.org/" target="_blank">Boston STEM Week</a> is here, and MIT is right in the middle of it. The first-of-its-kind, immersive learning experience is underway in sixth, seventh, and eighth grades in Boston Public Schools. From October 3rd to 7th, students are turning into scientists and engineers, classrooms are transforming into learning labs, and regularly scheduled classes are being replaced by innovative science, technology, engineering, and math (STEM) curricula developed by <a href="http://i2learning.org/" target="_blank">i2 Learning</a> in collaboration with MIT, MathWorks, and other organizations.</p>
<p>Boston STEM Week introduces the engineering design process to more than 6,500 students across 36 Boston middle schools. The focus is on building 21st-century skills in a proven way that engages students in science and engineering practices, such as analyzing data, problem solving, and communicating information. Students and teachers work in teams to solve real-world challenges in settings that encourage hands-on experimentation, critical thinking, and collaboration. The goal is to provide an opportunity for all students to gain confidence in these important fields — and perhaps inspire them to pursue a future in STEM.</p>
<p><strong>Curricula developed by MIT</strong></p>
<p>MIT collaborated closely on curricula for three of the six courses offered during Boston STEM Week:</p>
<ul>
<li><a href="http://bostonstemweek.org/courses/ka8oup42rnmdquwflw60liyv874chw" target="_blank">Building an Interactive, Friendly Monster</a>: Developed by the <a href="http://www.media.mit.edu/" target="_blank">MIT Media Lab</a>, this course explores the fundamentals of electronics and programming. Using a small computer, conductive thread, and some simple programming, students combine sewing and circuitry to create an interactive stuffed monster that lights up, makes noise, and responds to touch.</li>
<li><a href="http://bostonstemweek.org/courses/80u48y3o77n8i71m30m4q6zc3a92ny" target="_blank">Digital Game Design</a>: Using <a href="https://scratch.mit.edu/" target="_blank">Scratch</a>, the popular programming language developed by MIT Media Lab’s <a href="https://www.media.mit.edu/research/groups/lifelong-kindergarten" target="_blank">Lifelong Kindergarten</a> group, students learn to create an original video game with custom graphics, sound effects, and music. Students are immersed in the fundamentals of computer programming as they build a game from initial concept, test it with player feedback, and share it with peers.</li>
<li><a href="http://bostonstemweek.org/courses/6e2pyjmp5xl3sd9orp9w9jv64zofs8" target="_blank">Kinetic Sculpture</a>: Developed by the <a href="http://edgerton.mit.edu/k-12" target="_blank">MIT Edgerton Center</a>, this course introduces key concepts and skills like balance, gravity, force, gearing, energy sources, and design-oriented thinking. Students examine the moving sculpture work of Alexander Calder, George Rhodes, Anne Lilly, and Arthur Ganson, and use the principles they learn as the basis for their own large-scale, chain-reaction creations.</li>
</ul>
<p>“Kinetic Sculpture is one of the most engaging courses we’ve developed for i2, and it’s seen great success for students, teachers, and parents,” says Robert Vieth, K-12 STEM outreach coordinator for the MIT Edgerton Center. “By combining the core concepts of STEM with art, and a lot of physics principles too, the kids learn that it’s okay to have fun while learning about science and engineering — and that it’s okay to fail because that’s the first step to success.”</p>
<p>“MIT’s commitment to hands-on learning is infectious. It has been inspiring to collaborate with them to bring ‘mens et manus’ to Boston’s middle school children,” adds Ethan Berman, i2 Learning founder, referring to MIT's motto of "mind and hand."</p>
<p><strong>A long tradition of innovation in STEM education </strong></p>
<p>While MIT, through the Edgerton Center and Media Lab, has collaborated with i2 Learning on curriculum and content development for several years, Boston STEM Week will serve to both grow that relationship and further the university’s mission to bring MIT’s unique hands-on and minds-on learning approach to younger students. Providing engaging, dynamic STEM curriculum at scale is a central aim of MIT’s <a href="http://pk12.mit.edu/" target="_blank">pK-12 Action Group</a>, a cross-Institute collaboration of over 125 pre-K-12 outreach programs on campus. The scope of Boston STEM Week — reaching thousands of students and hundreds of teachers across 36 urban schools in just five days — is unprecedented.</p>
<p>“What I find so extraordinary about what Boston STEM Week is doing is that we are inspiring curiosity in young minds, a state that is fundamental to good learning,” says Sanjay Sarma, vice president for open learning and head of the MIT Office of Digital Learning. “Boston STEM Week provides an opportunity for delivering MIT’s curriculum at scale that we hope will inspire a next generation of future engineers, scientists, and innovators.”</p>
<p><strong>Encouraging more STEM in schools</strong></p>
<p>All teachers in the participating middle schools — STEM and non-STEM alike — took part in professional education workshops over the summer to prepare for this week. Professional development is an important goal for the pK-12 Action Group as well, with the hope that by helping non-STEM teachers build confidence with engineering projects, more STEM activities will be integrated into school curriculum beyond this week.</p>
<p>In fact, that’s one reason the pK-12 Action Group is sending volunteers to Boston STEM Week. The volunteers are helping teachers run the projects while also taking time to speak to students about STEM careers — and the bright futures these students have ahead of them.</p>
Boston Mayor Martin Walsh attends a Boston STEM Week kickoff event.Photo: Boston Mayor's OfficeSTEM education, K-12 education, Office of Digital Learning, online learning, Media Lab, Edgerton, Cambridge, Boston and region, Education, teaching, academics, School of Architecture and PlanningToward visible-light-based imaging for medical devices, autonomous vehicleshttp://news.mit.edu/2016/all-photons-imaging-algorithm-0929
System accounts for the deflection of light particles passing through animal tissue or fog. Thu, 29 Sep 2016 10:45:53 -0400Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/all-photons-imaging-algorithm-0929<p>MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.</p>
<p>The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The development of such vision systems has been a major obstacle to self-driving cars.</p>
<p>In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.</p>
<p>From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.</p>
<div class="cms-placeholder-content-video"></div>
<p>“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”</p>
<p>The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.</p>
<p>Satat’s coauthors on the new paper, published today in <em>Scientific Reports,</em> are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.</p>
<p><strong>Expanding circles</strong></p>
<p>Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.</p>
<p>Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.</p>
<p>The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.</p>
<p>The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.</p>
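The single-point "movie" described above is easy to simulate. The sketch below is only an illustration of that description, not the researchers' code: the grid size, ring width, and the rate at which the ring radius grows with arrival time are arbitrary choices.

```python
import numpy as np

def point_response_movie(n_frames=6, size=33, center=(16, 16), ring_width=1.0):
    """Toy time-resolved 'movie' for light from a single point behind a
    scattering medium: frame 0 is a single bright spot (unscattered
    photons); each later frame shows a thin ring of growing radius
    (photons that bounced around inside the medium for longer)."""
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - center[0], xx - center[1])  # distance from the source pixel
    frames = []
    for t in range(n_frames):
        radius = 2.0 * t  # ring radius grows with arrival time (arbitrary units)
        ring = np.exp(-((r - radius) ** 2) / (2 * ring_width ** 2))
        frames.append(ring / ring.sum())  # normalize each frame's total intensity
    return np.array(frames)

movie = point_response_movie()
print(movie.shape)  # (6, 33, 33)
```

Summing the frames reproduces the "general, hazy light" the camera would record without time resolution; keeping them separate is what the all-photons approach exploits.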
<p>The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.</p>
<p><strong>Cascading probabilities</strong></p>
<p>The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.</p>
<p>On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.</p>
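The predict-compare-adjust loop can be caricatured in code. The sketch below is a simplified stand-in, not the published algorithm: it swaps the paper's probabilistic model for Richardson-Lucy-style multiplicative updates and models the time-dependent scattering with an invented box-blur kernel (later frames blurred more).

```python
import numpy as np

def box_blur(img, passes):
    """Apply `passes` rounds of a 3x3 box blur (periodic edges); this
    stands in for the scattering kernel at one arrival time."""
    out = img
    for _ in range(passes):
        out = sum(np.roll(np.roll(out, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def reconstruct(frames, blur_passes, n_iter=60):
    """Richardson-Lucy-style loop: predict each frame from the current
    scene estimate, compare against the measurement, and fold the
    measurement/prediction ratio back into the estimate. The box kernel
    is symmetric, so the same blur serves as the back-projection."""
    scene = np.full(frames[0].shape, float(frames.mean()) + 1e-6)
    for _ in range(n_iter):
        correction = np.zeros_like(scene)
        for frame, p in zip(frames, blur_passes):
            predicted = box_blur(scene, p) + 1e-12  # avoid division by zero
            correction += box_blur(frame / predicted, p)
        scene *= correction / len(frames)
    return scene
```

Because every frame constrains the same scene estimate, even heavily scattered late frames contribute information instead of being discarded.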
<p>One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.</p>
<p>“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”</p>
<p>“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.</p>
In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.Image courtesy of the researchers.Research, School of Architecture and Planning, Autonomous vehicles, Imaging, Computer science and technology, Media LabQ&amp;A: How Twitter explains the 2016 election http://news.mit.edu/2016/how-twitter-explains-the-2016-election-0926
“Electome” project charts the national conversation in unique detail.Mon, 26 Sep 2016 00:00:07 -0400Peter Dizikes | MIT News Officehttp://news.mit.edu/2016/how-twitter-explains-the-2016-election-0926<p><em>During an intense U.S. presidential campaign, millions of people are chatting about the election every day on Twitter. MIT is studying them. More precisely, the Laboratory for Social Machines, part of the MIT Media Lab, has launched a project called the Electome that charts Twitter in unique detail. Now the project has joined forces with the Commission on Presidential Debates — the first debate is tonight — to provide journalists with a “dashboard” summarizing Twitter use during the debates. MIT News talked to three key Electome researchers about their work: Deb Roy, director of the Laboratory for Social Machines and chief media scientist for Twitter; William Powers, long-time journalist, author, and now research scientist for the Electome project; and Russell Stevens, deployment lead for the Electome project. This interview has been edited for length.&nbsp; </em></p>
<p><strong>Q:</strong> What is the Electome and how does it work?</p>
<p><strong>Deb Roy:</strong> It begins with taking two data sources: 30 English news sources and the fire hose of tweets from Twitter. We designed machine-learning algorithms that classify both the news stories and the tweets according to a taxonomy of topics on the U.S. election. It’s creating an organized view of how those two streams intersect or diverge, if there are systematic differences of news coverage versus what’s being discussed on Twitter. The overall goal here is to leverage this social signal, which is now of central importance in this election cycle, and systematically understand that signal’s relationship to the news media, and ideally create some sort of feedback loop.</p>
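The classify-then-compare idea Roy describes can be sketched with a toy keyword matcher. The Electome uses machine-learning classifiers over a real election taxonomy; the topics, keywords, and sample texts below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical topic taxonomy (not the Electome's actual one)
TAXONOMY = {
    "immigration": {"border", "immigration", "visa"},
    "economy": {"jobs", "taxes", "economy", "wages"},
    "national security": {"terrorism", "security", "isis"},
}

def classify(text):
    """Tag a text with every topic whose keywords it mentions."""
    words = set(text.lower().split())
    return [topic for topic, kws in TAXONOMY.items() if words & kws]

def topic_shares(texts):
    """Fraction of texts touching each topic; computing this separately
    for a news stream and a tweet stream shows where they diverge."""
    counts = Counter(t for text in texts for t in classify(text))
    n = max(len(texts), 1)
    return {topic: counts.get(topic, 0) / n for topic in TAXONOMY}

tweets = ["build the border wall", "no new taxes", "jobs jobs jobs"]
news = ["terrorism dominates the debate", "economy and wages stagnate"]
print(topic_shares(tweets))
print(topic_shares(news))
```

Comparing the two dictionaries is the feedback loop in miniature: topics with a large gap between the streams are exactly the divergences the project surfaces.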
<p><strong>William Powers:</strong> The debate commission is trying to solve this same problem. There is this new public sphere with all these voices, and yet it’s very hard to get your arms around it, precisely because there are so many voices. We had a long conversation [with the commission] over the winter and [then] built this dashboard.</p>
<p><strong>Q:</strong> In that case, how would you like to have journalists use this tool during the debates?</p>
<p><strong>Russell Stevens:</strong> We prefer it to be used as a bit of a step-back machine. We’re working to provide this data in almost real-time, but we’ve thought from the beginning it works better with a little perspective about how the conversation changed because of the debates. We would like journalists to use this to think about what the candidates talked about during the debates and how people in the wilds of Twitter have responded to that in their own conversations — to see how that discussion has changed.</p>
<p><strong>Q:</strong> During the campaign as a whole, what are some cases in which the conversation on Twitter has diverged notably from mainstream news media coverage or public perception of the issues?</p>
<p><strong>Roy:</strong> During the vice-presidential selection process there was a two-week period [when] a third of all stories across all leading news sources were about [that process]. In the same two-week period, we found about 3 percent of tweets were about the vice-presidential issue. The journalists were literally an order of magnitude more interested than the millions of people on Twitter.</p>
<p><strong>Stevens:</strong> Ethics — which is how we [classify topics such as] email and campaign finance — has been covered much more in the media than it’s been talked about [on Twitter]. That is a pretty clear trend from early on. … We [also] found how much the election is not about “the economy, stupid,” in the classic [Bill] Clinton phrase. Issues like national security, foreign policy, immigration, guns, and racial issues are being spoken about [more], relative to economic issues, the budget, taxation, jobs, and even income inequality.</p>
<p><strong>Roy:</strong> A search of the term “birther” [shows] a spike [in tweets] when Trump held a press conference [on the subject]. What’s interesting here, from our automatic coding, is that the majority of the spike comes from race-related tweets. It makes it quantifiable what the reaction is. … Although Trump is careful not to talk about President Obama’s birth place in racial terms, the public reaction to his comments on Twitter is overwhelmingly racial.</p>
<p><strong>Q:</strong> Are there any other surprises about the election you’ve seen due to the Electome project?</p>
<p><strong>Stevens:</strong> There’s a very interesting gender breakdown in terms of who’s been tweeting about the election, and it’s running very strongly male. What would classically be considered social issues within the election … is very strongly dominated by racial issues, immigration, and guns, and the two gender-oriented issues of LGBT issues and abortion rights have indexed very, very low relative to the other issues. It’s interesting, although I think we have to be careful about attributing causality to this.</p>
<p><strong>Powers:</strong> I have three surprises. One is the changing nature of influence. If Twitter is some kind of indicator of where influence is going in the digital world, [then] people who are not famous [according to conventional terms] are having influence. Second, as a long-time Washington journalist, the prominence from early 2015 of foreign policy and national security was a surprise to me. Third, it is interesting in the data to see the way events outside the election bubble change what’s happening in the election bubble. We did an analysis of the Orlando massacre, and the election issues that were affected — guns, immigration, LGBT, terrorism — took a big jump [in volume of Twitter discussion]. The membrane between politics and the rest of the world is, in a way, more permeable than we had thought.</p>
The Laboratory for Social Machines, part of the MIT Media Lab, has launched a project called the Electome that charts Twitter in unique detail. Image: Jose-Luis Olivares/MIT

Four MIT professors named inaugural Faculty Scholars
http://news.mit.edu/2016/four-mit-professors-named-inaugural-faculty-scholars-0922
New program from the Howard Hughes Medical Institute, Simons Foundation, and Bill and Melinda Gates Foundation supports early-career scientists.
Thu, 22 Sep 2016 18:18:01 -0400 | MIT News Office
http://news.mit.edu/2016/four-mit-professors-named-inaugural-faculty-scholars-0922
<p>The Howard Hughes Medical Institute (HHMI), the Simons Foundation, and the Bill and Melinda Gates Foundation have announced that MIT faculty members Ed Boyden, Jacquin Niles, Matthew Vander Heiden, and Feng Zhang have been selected as Faculty Scholars. They are among 84 early career scientists from 43 institutions across the United States who are being recognized for their “great potential to make unique contributions to their field.”</p>
<p>This is the first collaboration between HHMI, the Simons Foundation and the Bill and Melinda Gates Foundation. The philanthropies created the new Faculty Scholars Program in response to “growing concern about the significant challenges that early-career scientists are facing.”</p>
<p>Scientists reviewed and evaluated more than 1,400 applicants on their potential for significant research productivity and originality, as judged by their doctoral and postdoctoral work, results from their independent research program, and their future research plans. Through the new program, the three philanthropies will spend about $83 million over five years to support the first cohort of scholars, with each one receiving between $600,000 and $1.8 million to support research endeavors.</p>
<p>The four MIT professors were selected among eligible faculty at more than 220 institutions:</p>
<p><a href="http://mcgovern.mit.edu/principal-investigators/ed-boyden" target="_blank">Ed Boyden</a> (HHMI-Simons Faculty Scholar), a professor of biological engineering and brain and cognitive sciences at MIT and a member of MIT’s Media Lab and McGovern Institute for Brain Research, plans to expand his lab’s toolbox for analyzing and engineering brain circuits and other complex biological systems. Most recently, Boyden developed a strategy called <a href="http://mcgovern.mit.edu/news/videos/video-expansion-microscopy/" target="_blank">expansion microscopy</a> to visualize the nanoscale structure of the brain and other tissues.</p>
<p><a href="https://be.mit.edu/directory/jacquin-c-niles" target="_blank">Jacquin Niles</a> ’94, ’01 PhD (HHMI-Gates Faculty Scholar), associate professor of engineering in the Department of Biological Engineering, plans to expand his efforts to eliminate malaria by re-engineering the parasite into a drug delivery vehicle. Niles studies functional genetics in the malarial pathogen <em>Plasmodium falciparum, </em>as well as pathogen-host interactions. He’s working toward a clearer understanding of the parasite and disease to provide the scientific foundation for new malarial diagnostics, treatments, and prevention/elimination strategies.</p>
<p><a href="https://biology.mit.edu/people/matthew_vander_heiden" target="_blank">Matthew Vander Heiden</a> (HHMI Faculty Scholar), an associate professor of biology at the Koch Institute for Integrative Cancer Research at MIT, uses mouse models to study cancer cell metabolism. Vander Heiden is working to identify critical steps in metabolic pathways, such as the breakdown of glucose and the production of basic subunits of DNA, that may lead to new cancer therapies.</p>
<p><a href="https://mcgovern.mit.edu/principal-investigators/feng-zhang" target="_blank">Feng Zhang</a> (HHMI-Simons Faculty Scholar), an associate professor of biological engineering and brain and cognitive sciences at MIT, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard, is developing tools to better understand nervous system function and disease. Zhang was a pioneer in the development of <a href="http://mcgovern.mit.edu/news/videos/genome-editing-with-crispr-cas9/" target="_blank">CRISPR-Cas9</a>, a powerful genome-editing technology with many applications in biomedical research.</p>
<p>“This program will provide these scientists with much needed flexible resources so they can follow their best research ideas,” said HHMI Vice President and Chief Scientific Officer David Clapham.</p>
Clockwise from top left: MIT professors Ed Boyden, Jacquin Niles, Feng Zhang, and Matthew Vander Heiden have been named HHMI Faculty Scholars. Photos: Bryce Vickmark, Kent Dayton

Media Lab conference addresses gender bias, diversity, and inclusion in STEM
http://news.mit.edu/2016/media-lab-conference-addresses-gender-bias-diversity-inclusion-in-stem-0914
Megan Smith ’86, SM ’88 and other thought leaders offer advice to students on how women in STEM fields can develop skills for navigating life and work.
Wed, 14 Sep 2016 18:03:01 -0400 | MIT Media Lab
http://news.mit.edu/2016/media-lab-conference-addresses-gender-bias-diversity-inclusion-in-stem-0914
<p>Not that many years ago, <a href="https://www.whitehouse.gov/administration/eop/ostp/about/leadershipstaff/smith" target="_blank">Megan Smith</a> ’86, SM ’88 was poised to embark on a career in technology, a field long dominated by white men. Now, the MIT alumna is the United States’ Chief Technology Officer, and she’s determined to tamp down gender bias and step up diversity, especially in STEM.</p>
<p>“Our biggest challenge in tech and innovation is that we think certain people do it and certain people don’t,” Smith told an audience of some 250 attendees, predominantly women, at the Sept. 9 MIT Media Lab conference, called <a href="http://www.media.mit.edu/events/no-permission/overview" target="_blank">No Permission, No Apology</a>. The White House CTO, who was previously a Google executive, said that after centuries of discrimination, we’re in an era of unconscious or implicit bias. “We’re not aware we’re doing this, or it’s institutionalized. So once we see it, we should fix it. We are capable of dealing with everything we set our minds to and prioritize; we just haven’t made that a priority.” She added that everyone stands to gain: “In addition to the inclusion of all because it’s the right thing to do, it’s also the best thing to do in terms of innovation and economics.”&nbsp;</p>
<p>Smith’s <a href="https://civic.mit.edu/blog/erhardt/no-permission-no-apology-opening-keynote-by-megan-smith" target="_blank">keynote speech</a> clearly resonated among the crowd of prospective Program in Media Arts and Sciences (<a href="http://www.media.mit.edu/mas/" target="_blank">MAS</a>) candidates from Wellesley and Spelman colleges, and the Meyerhoff Scholars Program at the University of Maryland, Baltimore County (UMBC) as well as undergraduate and graduate students from across MIT. Aliyah Smith, a UMBC sophomore who’s majoring in mechanical engineering, said after hearing Megan Smith’s stories about women who pioneered scientific and technological advances, she’s even more motivated to contribute and be recognized now. “I was very inspired to hear about the work that other women have done in the past. It’s always good to remember that women have not all of a sudden become interested in technology; we just haven’t been acknowledged.”</p>
<p><strong>Entrenched inequities</strong></p>
<p>But in spite of efforts over the past decade to attract more women and people of color to careers in science, technology, engineering, and mathematics (STEM), the workforce is no more diverse these days than it was in 2001. That’s according to recent <a href="http://changetheequation.org/solving-diversity-dilemma" target="_blank">research</a> by Change the Equation, a coalition of Fortune 500 firms, and the 2016 <a href="http://www.usnews.com/news/articles/2016-05-17/the-new-stem-index-2016" target="_blank">US News/Raytheon STEM Index</a>, which also shows that by most measures gender and racial gaps remain entrenched. Meanwhile, a <a href="http://klenow.com/HHJK.pdf" target="_blank">study</a> by the University of Chicago and Stanford University released in August this year highlights the economic and creative benefits for industry of diversifying talent.</p>
<p>Such findings are what prompted this first such event at the Media Lab. “The essence of the conference is community building,” said Monica Orta, assistant director of diversity and student support at the MIT Media Lab. “It isn't enough to create access to a space; we also have to make that space one in which all members can thrive. The conference seeks to shine a light, and build on the ways we can all bring each other up.”</p>
<p><strong>Breaking down ideas</strong></p>
<p>The daylong conference offered a slate of <a href="http://www.media.mit.edu/events/no-permission/breakouts" target="_blank">sessions</a> including breakouts on succeeding in grad schools and beyond, steering startups, developing financing strategies, overcoming doubt, and counteracting "bystander dilemma."</p>
<p><a href="http://www.wellesley.edu/provost/staff/chapman" target="_blank">Robbin Chapman</a> led a discussion on outsmarting "imposter syndrome." She’s associate provost and academic director of diversity and inclusion, and a lecturer in the Department of Education at Wellesley College. Chapman, who earned her MS and PhD degrees in electrical engineering and computer science at MIT, said that differential outcomes in STEM persist even now, a decade after she received her doctorate in 2006. “We need to keep thinking together about how to remove or address barriers, and find pathways that circumvent barriers. We can tap into new institutional structures that can help offset the fact that a group lacks privilege.”</p>
<p><strong>Men as allies</strong></p>
<p>There was a session on how men can effectively help bring gender balance to the technology industry. <a href="https://laurenkinseyblog.wordpress.com/" target="_blank">Lauren Kinsey</a>, an advocate for gender equity in technology, led the discussion, citing research showing the benefits of gender balance, and told participants that “men can use the leverage they have to help change the equation.” That’s something Rahul Bhargava has been trying to do in his career. A research scientist in the Center for Civic Media at the Media Lab, Bhargava said that as a facilitator and a teacher he’s now looking at the readings he assigns, “making sure I’m not leaving out a bunch of voices, and trying to ensure the examples I use are inclusive. And, as Megan Smith said in her keynote, making sure I’m not revising history in some way based on what I’ve been exposed to but actually digging to find those voices that aren’t usually highlighted.” Bhargava also wrote a <a href="https://civic.mit.edu/blog/rahulb/no-permission-no-apology-designing-for-the-other-panel" target="_blank">live blog</a> about another session, which was about overcoming the “design default” that overlooks diverse audiences.</p>
<p>“Diversifying spaces is an intentional act,” said Orta. “By and large, people's social and professional circles reflect their own identities. When we are tasked with referrals or spreading the word, we are tapping into a network that looks, and often thinks, like we do. To stretch ourselves beyond that takes work. This is true for men in spaces that were created with only them in mind. Being cognizant of how to pull others in and the ways in which we make a space more or less welcoming has a huge impact.”</p>
<p><strong>Influence, persuasion, and presence</strong></p>
<p>“You need mentors, advocates, connectors, and accountability partners who are familiar with your work and help you get it done,” Chapman said. She recommended developing a “thrive mosaic,” a career-long, evolving network of personal resources. “You need to build concrete support in ways that will help you become successful, within academia and elsewhere. That mosaic maps to anything you’re going to do anywhere.”</p>
<p>And, establish your presence, advised <a href="http://mindspringmetrodc.com/about-us/our-team/" target="_blank">Denise Minor</a>. Before she became senior partner at MindSpring Metro DC, she’d worked 28 years as a special agent with the FBI. In the conference session she led, Minor recalled her first day on the job with the agency: “There were 12 old guys, and they said, ‘Oh, you want to be a man or something? Why are you an FBI agent?’ Now, if I had let their perceptions of me cloud me, then I would’ve thought, ‘Oh, maybe I shouldn’t be here.’ Instead, I said to them, ‘You know what, the FBI really needs a different perspective — a female perspective, and an African American perspective. I bring something to the table that you need. And if you guys are all smart enough, you’ll use my abilities and work with me.’” Minor closed the session by urging the young women to forget the internal voices that cause self-doubt, and turn off the mind chatter that undermines self-confidence.</p>
<p>This was the first No Permission, No Apology conference at the Media Lab, but Orta said it’s not the last word on the long-standing issues of gender and race equity in STEM. “We want to meet the needs of the community by working with our students to bring their&nbsp;ideas about how to address issues of diversity to life.”</p>
Lauren-Jenay Kelley (foreground) was among students attending the "No Permission, No Apology" event at the MIT Media Lab, which addressed diversity, gender bias, and inclusion in STEM. Photo: Mim Adkins

Ramesh Raskar awarded $500,000 Lemelson-MIT Prize
http://news.mit.edu/2016/ramesh-raskar-awarded-lemelson-mit-prize-0913
Imaging scientist and inventor sets sights on launching peer-to-peer invention platforms for global impact.
Tue, 13 Sep 2016 10:00:52 -0400 | Lemelson-MIT Program
http://news.mit.edu/2016/ramesh-raskar-awarded-lemelson-mit-prize-0913
<p>Ramesh Raskar, founder of the Camera Culture research group at the <a href="https://www.media.mit.edu/" target="_blank">MIT Media Lab</a> and associate professor of media arts and sciences at MIT, is the recipient of the 2016 $500,000 <a href="http://lemelson.mit.edu/prize" target="_blank">Lemelson-MIT Prize</a>. Raskar is the co-inventor of radical imaging solutions including femtophotography, an ultra-fast imaging system that can see around corners; low-cost eye-care solutions for the developing world; and a camera that allows users to read pages of a book without opening the cover. Raskar seeks to catalyze change on a massive scale by launching platforms that empower inventors to create solutions to improve lives globally.</p>
<p>Raskar has dedicated his career to linking the best of the academic and entrepreneurial worlds with young engineers, igniting a passion for impact inventing. He is a pioneer in the fields of imaging, computer vision, and machine learning, and his novel imaging platforms offer an understanding of the world that far exceeds human ability. Raskar has mentored more than 100 students, visiting students, interns, and postdocs, who, with his guidance and support, have been able to kick-start their own highly successful careers.</p>
<p>“Raskar is a multi-faceted leader as an inventor, educator, change maker and exemplar connector,” said Stephanie Couch, executive director of the Lemelson-MIT Program. “In addition to creating his own remarkable inventions, he is working to connect communities and inventors all over the world to create positive change.”</p>
<p>The Lemelson-MIT Prize honors outstanding mid-career inventors improving the world through technological invention and demonstrating a commitment to mentorship in science, technology, engineering, and mathematics (STEM). The prize is made possible through the support of The Lemelson Foundation, the world’s leading funder of invention in service of social and economic change. Over the next three years, Raskar will be investing a portion of the prize money to support the development of young inventors.</p>
<p>“We are thrilled to honor Ramesh Raskar, whose breakthrough research is impacting how we see the world,” said Dorothy Lemelson, chair of The Lemelson Foundation. “Ramesh’s femtophotography work not only has the potential to transform industries ranging from internal medicine to transportation safety, it is also helping to inspire a new generation of inventors to tackle the biggest problems of our time.”</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>Making the invisible visible</strong></p>
<p>In 2012, Raskar co-created femtophotography, an advanced form of photography allowing cameras to see around corners. The technology, currently in development for commercialization, uses ultrafast imaging to capture light at 1 trillion frames per second, allowing the camera to create slow motion videos of light in motion. Raskar and his team have received significant funding from sponsors including the U.S. Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, and MIT to further develop the idea of using "scattered light imaging" to see around corners.</p>
<p>Potential future applications include: avoiding car collisions at blind spots; detecting survivors in fire and rescue situations; and performing endoscopy and medical imaging without the need for X-rays. Raskar is continuing this research to make the seemingly impossible possible — from reading a book without opening the cover to capturing images of out-of-sight objects using sound waves.</p>
<p><strong>A vision for improved eye-care in the developing world</strong></p>
<p>Raskar is the co-founder of <a href="http://www.eyenetra.com/" target="_blank">EyeNetra</a>, an inexpensive, disruptive eye-care platform that spun out of Media Lab research. EyeNetra enables on-demand eye testing in remote locations via a hand-held technology that snaps onto a mobile device. Looking into the binocular viewer, the user follows interactive cues to rapidly calculate a prescription for eyeglasses. The technology was created to eliminate the need for expensive diagnostic tools in the developing world; the young company has performed eye tests for hundreds of thousands of subjects and is currently active in the U.S., Brazil, and India.</p>
<p>Raskar’s team has also worked on many areas of preventable blindness, low vision, and diagnostics at MIT. In 2013, he and his colleagues launched <a href="http://lvpmitra.com/" target="_blank">LVP-MITRA</a> in Hyderabad, India, a center where hundreds of young inventors have been co-inventing next generation screening, diagnostic, and therapeutic tools for eye-care.</p>
<p><strong>Empowering social impact among youth and entrepreneurs</strong></p>
<p>Raskar is also the founder of the <a href="http://www.redx.io/emerging-worlds-1/#emerging-worlds" target="_blank">Emerging Worlds initiative</a>, a year-round effort focused on solving some of the world’s most pressing problems and impacting billions worldwide. This initiative, based at MIT, links corporate members, government organizations, educational institutes, and venture partners. The members — MIT researchers, young innovators, and entrepreneurs — work in very specific integrated ecosystems to spot problems, probe solutions, grow adoption, and scale the deployment.</p>
<p>This methodology was recently used at the <a href="https://en.wikipedia.org/wiki/Kumbathon" target="_blank">Kumbhathon</a> sandbox for innovations at Kumbh Mela, a gathering of 30 million people, and at the <a href="https://www.digitalimpactsquare.com/" target="_blank">Digital Impact Square</a>, a multi-million-dollar living lab and open co-innovation center. Raskar has mentored several teams whose projects range from crowd steering, which uses cell-tower data to display heat maps of crowd movements, to stations that test vital signs using portable instruments, and an analytics-based system that detects impending epidemic outbreaks in real time.</p>
<p><strong>Launching co-innovation pathways for young inventors</strong></p>
<p>“Everyone has the power to solve problems and through peer-to-peer co-invention and purposeful collaboration, we can solve problems that will impact billions of lives,” Raskar says. He plans to use a portion of the Lemelson-MIT Prize money to launch a new effort using peer-to-peer invention platforms that offer new approaches for helping young people in multiple countries to co-invent in a collaborative way. Visit <a href="http://redx.io/">redx.io</a> to learn more or to apply.</p>
<p>Raskar will speak at <a href="http://events.technologyreview.com/emtech/16/" target="_blank">EmTech MIT</a>, the annual conference on emerging technologies hosted by <a href="http://www.technologyreview.com/" target="_blank"><i>MIT Technology Review</i></a> at the MIT Media Lab on Tuesday, Oct. 18.</p>
<p><strong>Seeking nominees for 2017 $500,000 Lemelson-MIT Prize</strong></p>
<p>The Lemelson-MIT Program is now seeking nominations for the 2017 $500,000 Lemelson-MIT Prize. Please contact the Lemelson-MIT Program at <a href="mailto:awards-lemelson@mit.edu?subject=Lemelson-MIT%20Prize">awards-lemelson@mit.edu</a> for more information or visit the <a href="http://lemelson.mit.edu/prize" target="_blank">prize website</a>.</p>
<p>The Lemelson-MIT Program celebrates outstanding inventors and inspires young people to pursue creative lives and careers through invention. Jerome H. Lemelson, one of the most prolific inventors in U.S. history, and his wife Dorothy founded the Lemelson-MIT Program at MIT in 1994. It is funded by The Lemelson Foundation and administered by the School of Engineering at MIT, an institution with a strong ongoing commitment to creating meaningful opportunities for K-12 STEM education.</p>
<p>Based in Portland, Oregon, <a href="http://lemelson.org" target="_blank">The Lemelson Foundation</a> uses the power of invention to improve lives. Inspired by the belief that invention can solve many of the biggest economic and social challenges of our time, the foundation helps the next generation of inventors and invention-based businesses to flourish. The Lemelson Foundation was established in the early 1990s by prolific inventor Jerome Lemelson and his wife Dorothy. To date, the foundation has made grants totaling more than $200 million in support of its mission.</p>
Imaging scientist and social impact inventor Ramesh Raskar of MIT is the 2016 recipient of the $500,000 Lemelson-MIT Prize. Photo: Len Rubenstein

Judging a book through its cover
http://news.mit.edu/2016/computational-imaging-method-reads-closed-books-0909
New computational imaging method identifies letters printed on first nine pages of a stack of paper.
Fri, 09 Sep 2016 05:00:00 -0400 | Larry Hardesty | MIT News Office
http://news.mit.edu/2016/computational-imaging-method-reads-closed-books-0909
<p>MIT researchers and their colleagues are designing an imaging system that can read closed books.</p>
<p>In the latest issue of <em>Nature Communications</em>, the researchers describe a prototype of the system, which they tested on a stack of papers, each with one letter printed on it. The system was able to correctly identify the letters on the top nine sheets.</p>
<p>“The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab and corresponding author on the new paper. He adds that the system could be used to analyze any materials organized in thin layers, such as coatings on machine parts or pharmaceuticals.</p>
<div class="cms-placeholder-content-video"></div>
<p>Heshmat is joined on the paper by Ramesh Raskar, an associate professor of media arts and sciences; Albert Redo Sanchez, a research specialist in the Camera Culture group at the Media Lab; two of the group’s other members; and by Justin Romberg and Alireza Aghasi of Georgia Tech.</p>
<p>The MIT researchers developed the algorithms that acquire images from individual sheets in stacks of paper, and the Georgia Tech researchers developed the algorithm that interprets the often distorted or incomplete images as individual letters. “It’s actually kind of scary,” Heshmat says of the letter-interpretation algorithm. “A lot of websites have these letter certifications [<a href="https://en.wikipedia.org/wiki/CAPTCHA">captchas</a>] to make sure you’re not a robot, and this algorithm can get through a lot of them.”</p>
<p><strong>Timing terahertz</strong></p>
<p>The system uses terahertz radiation, the band of electromagnetic radiation between microwaves and infrared light, which has several advantages over other types of waves that can penetrate surfaces, such as X-rays or sound waves. Terahertz radiation has been widely researched for use in security screening, because different chemicals absorb different frequencies of terahertz radiation to different degrees, yielding a distinctive frequency signature for each. By the same token, terahertz frequency profiles can distinguish between ink and blank paper, in a way that X-rays can’t.</p>
<p>Terahertz radiation can also be emitted in such short bursts that the distance it has traveled can be gauged from the difference between its emission time and the time at which reflected radiation returns to a sensor. That gives it much better depth resolution than ultrasound.</p>
<p>The system exploits the fact that trapped between the pages of a book are tiny air pockets only about 20 micrometers deep. The difference in refractive index — the degree to which they bend light — between the air and the paper means that the boundary between the two will reflect terahertz radiation back to a detector.</p>
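The strength of that boundary reflection can be estimated with the normal-incidence Fresnel formula. This is a minimal sketch, not code from the paper; the refractive-index values are illustrative ballparks (the terahertz-band index of paper varies by paper type):

```python
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Fraction of power reflected at normal incidence from a boundary
    between media with refractive indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Illustrative values: air (n ~ 1.0) against paper (often ballparked
# around n ~ 1.5 in the terahertz band; the exact value varies).
R = fresnel_reflectance(1.0, 1.5)  # 0.04, i.e. ~4% of power per boundary
```

A few percent of reflected power per boundary is enough for detection near the top of the stack, but it also hints at why the signal fades quickly with depth, as the article notes below.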
<p>In the researchers’ setup, a standard terahertz camera emits ultrashort bursts of radiation, and the camera’s built-in sensor detects their reflections. From the reflections’ time of arrival, the MIT researchers’ algorithm can gauge the distance to the individual pages of the book.</p>
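The time-of-flight arithmetic in that step can be sketched as follows. This is a toy illustration under idealized assumptions (straight-line propagation in air, one echo per page boundary), not the researchers' algorithm:

```python
C = 299_792_458.0  # speed of light in air, m/s (approximately)

def depth_from_echo(round_trip_s: float) -> float:
    """Distance to a reflecting boundary, from the round-trip echo time."""
    return C * round_trip_s / 2.0

def page_depths(echo_times_s):
    """Turn a list of echo arrival times into sorted boundary depths."""
    return [depth_from_echo(t) for t in sorted(echo_times_s)]

# Each ~20-micrometer air gap adds only ~0.13 picoseconds of extra
# round-trip delay -- which is why ultrashort bursts are essential.
gap_delay_s = 2 * 20e-6 / C
```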
<p><strong>True signals</strong></p>
<p>While most of the radiation is either absorbed or reflected by the book, some of it bounces around between pages before returning to the sensor, producing a spurious signal. The sensor’s electronics also produce a background hum. One of the tasks of the MIT researchers’ algorithm is to filter out all this “noise.”</p>
<p>The information about the pages’ distance helps: It allows the algorithm to home in on just the terahertz signals whose arrival times suggest that they are true reflections. Then, it relies on two different measures of the reflections’ energy and assumptions about both the energy profiles of true reflections and the statistics of noise to extract information about the chemical properties of the reflecting surfaces.</p>
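A toy version of that arrival-time gating might look like this. The window width, echo tuples, and example data are invented for illustration; the actual algorithm also models the energy statistics described above:

```python
def gate_reflections(echoes, expected_times, window_s):
    """Keep only echoes whose arrival time falls within +/- window_s of
    an expected page-boundary time; everything else is treated as
    multi-bounce or electronic noise (a simplification of the real
    filtering step, which also weighs reflection energies)."""
    return [
        (t, energy)
        for t, energy in echoes
        if any(abs(t - et) <= window_s for et in expected_times)
    ]

# Hypothetical data: three echoes, two expected page boundaries.
echoes = [(1.00e-12, 5.0), (1.50e-12, 0.4), (2.00e-12, 3.2)]
kept = gate_reflections(echoes, expected_times=[1.0e-12, 2.0e-12],
                        window_s=0.1e-12)  # keeps the first and third
```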
<p>At the moment, the algorithm can correctly deduce the distance from the camera to the top 20 pages in a stack, but past a depth of nine pages, the energy of the reflected signal is so low that the differences between frequency signatures are swamped by noise. Terahertz imaging is still a relatively young technology, however, and researchers are constantly working to improve both the accuracy of detectors and the power of the <a href="http://news.mit.edu/2016/microlasers-phase-locking-arrays-0613">radiation sources</a>, so deeper penetration should be possible.</p>
<p>"So much work has gone into terahertz technology to get the sources and detectors working, with big promises for imaging new and exciting things,” says Laura Waller, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “This work is one of the first to use these new tools along with advances in computational imaging to get at pictures of things we could never see with optical technologies. Now we can judge a book through its cover!"</p>
A new imaging system can read closed books. “The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab. Photo courtesy of Barmak Heshmat.

Eleven from MIT named to the 2016 TR35
http://news.mit.edu/2016/technology-review-tr35-under-35-0825
Nearly one-third of MIT Technology Review’s top innovators under the age of 35 have a connection to MIT.
Thu, 25 Aug 2016 17:30:00 -0400 | Jay London | MIT Alumni Association
http://news.mit.edu/2016/technology-review-tr35-under-35-0825
<p><em>MIT Technology Review</em> announced its annual list of the <a href="https://www.technologyreview.com/lists/innovators-under-35/2016/">top 35 innovators under the age of 35</a> — the TR35 — on August 23, and nearly one-third of the honorees have a connection to MIT. The 2016 group features three current faculty and researchers and at least eight MIT alumni.</p>
<p>According to&nbsp;<em>Tech Review</em>, the TR35 honors young innovators, disrupters, and dreamers who are pursuing medical breakthroughs, refashioning energy technologies, making computers more useful, and engineering cooler electronic devices. The 2016 list is split into five categories: Entrepreneurs, Inventors, Humanitarians, Pioneers, and Visionaries.</p>
<p>Honorees with MIT connections include:</p>
<p><strong>Dinesh Bharadia </strong>— Inventor; current postdoc in the MIT Computer Science and Artificial Intelligence Laboratory</p>
<p>“Dinesh Bharadia invented a telecommunications technology that … found a way to simultaneously transmit and receive data on the same frequency.”</p>
<p><strong>Kevin Esvelt </strong>— Visionary; assistant professor in the MIT Media Lab</p>
<p>“Esvelt’s Take: No gene drive able to spread globally should be released. Or even tested. Scientists need to disclose their plans. His Solution: He’s designed safer gene drives that can be controlled.”</p>
<p><strong>Sonia Vallabh</strong> — Humanitarian; Broad Institute of MIT and Harvard</p>
<p>“[Sonia] learned she has a genetic mutation that causes a deadly brain disease. She and her husband have published research showing a possible pathway to a treatment.”</p>
<p><strong>Muyinatu Lediju Bell '06 </strong>— Inventor; Johns Hopkins University</p>
<p>“Bell is working to improve another type of noninvasive medical imaging technique. Called photoacoustic imaging, it uses a combination of light and sound to produce images of tissues in the body.”</p>
<p><strong>Adam Bry '12 </strong>— Inventor; Skydio</p>
<p>“We’re building a drone for consumers that understands the physical world, reacts to you intelligently, and can use that information to make decisions,” Bry says.</p>
<p><strong>Ying Diao SM '10, PhD '12 </strong>— Pioneer; University of Illinois</p>
<p>“Ying Diao is creating printing techniques that bring order to the otherwise chaotic assembly of plastic molecules. She has made organic solar cells with double the efficiency of previous ones.”</p>
<p><strong>Jonathan Downey '06 </strong>— Visionary; Airware</p>
<p>“The creator of control software for drones has foreseen the advantages of autonomous aircraft for years. … In 2015, Airware launches several products intended to help big companies use drones.”</p>
<p><strong>Ehsan Hoque PhD '13 </strong>— Humanitarian; University of Rochester</p>
<p>“Can computers teach us to be our best selves? Ehsan Hoque believes so. He has created two computer systems that train people to excel in social settings.”</p>
<p><strong>Maithilee Kunda '06 </strong>— Visionary; Vanderbilt University</p>
<p>“What if you had an AI system that used data made up entirely of images and reasoned only using visual operations, like rotating images around or combining images together?” Kunda asks.</p>
<p><strong>Stephanie Lampkin MBA '13 </strong>— Entrepreneur; Blendoor</p>
<p>“Lampkin coded Blendoor, a job-search platform that hides the candidates’ names and photos during the initial stages of the process. So far more than 5,000 people have signed up.”</p>
<p><strong>Jean Yang SM '10, PhD '15 </strong>— Visionary; Carnegie Mellon University</p>
<p>“Yang created Jeeves, a programming language with privacy baked in. With Jeeves, developers don’t have to scrub personal information from their features. …Yang’s code does it automatically.”</p>
<p>For more details on this year's group of honorees, visit the <a href="https://www.technologyreview.com/lists/innovators-under-35/2016/">TR35 section</a>&nbsp;of <em>MIT Technology Review</em>.</p>
Eleven faculty, staff, and alumni were named to the MIT Technology Review 2016 TR35. Images courtesy of MIT Technology Review.

Professor Emeritus Seymour Papert, pioneer of constructionist learning, dies at 88
http://news.mit.edu/2016/seymour-papert-pioneer-of-constructionist-learning-dies-0801
World-renowned mathematician, learning theorist, and educational-technology visionary was a founding faculty member of the MIT Media Lab.
Mon, 01 Aug 2016 14:30:01 -0400 | MIT Media Lab
<p>Seymour Papert, whose ideas and inventions transformed how millions of children around the world create and learn, died Sunday at his home in East Blue Hill, Maine. He was 88.</p>
<p>Papert’s career traversed a trio of influential movements: child development, artificial intelligence, and educational technologies. Based on his insights into children’s thinking and learning, Papert recognized that computers could be used not just to deliver information and instruction, but also to empower children to experiment, explore, and express themselves. The central tenet of his <a href="http://www.papert.org/articles/SituatingConstructionism.html" target="_blank">Constructionist</a> theory of learning is that people build knowledge most effectively when they are actively engaged in constructing things in the world. As early as 1968, Papert introduced the idea that computer programming and debugging can provide children a way to think about their own thinking and learn about their own learning.</p>
<p>“With a mind of extraordinary range and creativity, Seymour Papert helped revolutionize at least three fields, from the study of how children make sense of the world, to the development of artificial intelligence, to the rich intersection of technology and learning,” says MIT President L. Rafael Reif. “The stamp he left on MIT is profound. Today, as MIT continues to expand its reach and deepen its work in digital learning, I am particularly grateful for Seymour’s groundbreaking vision, and we hope to build on his ideas to open doors to learners of all ages, around the world.”</p>
<p>Papert’s life straddled several continents. He was born in 1928 in Pretoria, South Africa, and went on to study at the University of the Witwatersrand in South Africa, where he earned a BA in philosophy in 1949, followed by a PhD in mathematics three years later. He was a leading anti-apartheid activist throughout his university years.</p>
<p>Papert’s studies then took him overseas – first to Cambridge University in England from 1954-1958, where he focused on math research, earning his second PhD, then to the University of Geneva, where he worked with Swiss philosopher and psychologist Jean Piaget, whose theories about the ways children make sense of the world changed Papert’s view of children and learning.</p>
<p>From Switzerland, Papert came to the U.S., joining MIT as a research associate in 1963. Four years later, he became a professor of applied mathematics, and shortly after was appointed co-director of the Artificial Intelligence Lab (which later evolved into the <a href="https://www.csail.mit.edu/" target="_blank">Computer Science and Artificial Intelligence Laboratory</a>, or CSAIL) by its founding director Professor Marvin Minsky. Together, they wrote the 1969 book, “Perceptrons,” which marked a turning point in the field of artificial intelligence.</p>
<p>In 1985, Papert and Minsky joined former MIT President Jerome Wiesner and MIT Professor Nicholas Negroponte to become founding faculty members of the <a href="https://www.media.mit.edu/" target="_blank">MIT Media Lab</a>, where Papert led the Epistemology and Learning research group.</p>
<p>“Seymour often talked poetically, sometimes in riddles, like his famed phrase, ‘you cannot think about thinking without thinking about thinking about something,’” says Negroponte, the Media Lab’s co-founder and first director. “He did not follow rules or run by anybody else’s clock. I would say, in Papertian style, Seymour never needed to do what he said because when he said what he did, it was better.”&nbsp;</p>
<div class="cms-placeholder-content-video"></div>
<p>Papert was among the first to recognize the revolutionary potential of computers in education. In the late 1960s, at a time when computers still cost hundreds of thousands of dollars, Papert came up with the idea for <a href="http://el.media.mit.edu/logo-foundation/what_is_logo/history.html" target="_blank">Logo</a>, the first programming language for children. Children used Logo to program the movements of a “turtle” — either in the form of a small mechanical robot or a graphic object on the computer screen. In his seminal book “Mindstorms: Children, Computers and Powerful Ideas” (1980), Papert argued against “the computer being used to program the child.” He presented an alternative approach in which “the child programs the computer&nbsp;and, in doing so, both acquires a sense of mastery over a piece of the most modern and powerful technology and establishes an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.”</p>
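<p>The turtle's core idea is small enough to state in code: a cursor with a position and a heading, steered by a handful of commands whose effects children can predict and debug. The following is a minimal, graphics-free sketch of that idea in Python (a hypothetical re-implementation for illustration, not Logo itself):</p>

```python
import math

class Turtle:
    """A minimal Logo-style turtle: a position plus a heading, no graphics."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 points along +x

    def forward(self, dist):
        # Move dist units in the direction the turtle is facing.
        self.x += dist * math.cos(math.radians(self.heading))
        self.y += dist * math.sin(math.radians(self.heading))

    def right(self, angle):
        # Turn clockwise by angle degrees.
        self.heading = (self.heading - angle) % 360

# Logo's classic square: REPEAT 4 [FORWARD 100 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
# The turtle ends where it began, facing its original heading
# (up to floating-point rounding).
```

<p>The pedagogical point survives even in this stripped-down form: a child who expects the square to close but sees the turtle drift has a concrete bug to reason about, which is exactly the "thinking about thinking" Papert described.</p>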
<p>In collaboration with Sherry Turkle, the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, Papert explored how childhood objects have a deep influence on how and what children learn. In “Mindstorms,” Papert explained how he “fell in love with gears” as a child, and how he hoped to “turn computers into instruments flexible enough so that many children can each create for themselves something like what the gears were for me.”</p>
<p>Papert was the Cecil and Ida Green Professor of Education at MIT from 1974-1981. In 1985, he began a long and productive collaboration with the LEGO company, one of the first and largest corporate sponsors of the Media Lab. Papert’s ideas served as an inspiration for the LEGO Mindstorms robotics kit, which was named after his 1980 book. In 1989, the LEGO company endowed a chair at the Media Lab, and Papert became the first LEGO Professor of Learning Research. In 1998, after Papert became professor emeritus, the name of the professorship was modified, in his honor, to the LEGO Papert Professorship of Learning Research. The professorship was passed on to Papert’s former student and long-time collaborator, Mitchel Resnick, who continues to hold the chair today.</p>
<p>“For so many of us, Seymour fundamentally changed the way we think about learning, the way we think about children, and the way we think about technology,” says Resnick, who heads the Media Lab’s Lifelong Kindergarten research group.</p>
<p>In the late 1990s, Papert moved to Maine and continued his work with young people there, establishing the <a href="http://learning.media.mit.edu/content/press/clip01.pdf" target="_blank">Learning Barn</a> and the Seymour Papert Institute in 1999. He also set up a <a href="http://stager.org/articles/8bigideas.pdf" target="_blank">Learning Lab</a> at the Maine Youth Center, where he worked to engage and inspire troubled youths who had received little support at home or school, and were grappling with drugs, alcohol, anger, or psychological problems. He was also integral to a Maine <a href="http://www.maine.gov/mlti/about/index.shtml" target="_blank">initiative</a> requiring laptops for all 7th and 8th graders. Following the Maine initiative, Papert joined Negroponte and Alan Kay in 2004 to create the non-profit <a href="http://one.laptop.org/" target="_blank">One Laptop per Child</a> (OLPC), which produced and distributed low-cost, low-power, rugged laptops to the world’s poorest children. The organization produced more than 3 million laptops, reaching children in more than 40 countries. “Each of the laptops has Seymour inside,” says Negroponte.</p>
<p>Papert’s work inspired generations of educators and researchers around the world. He received numerous awards, including a&nbsp;Guggenheim fellowship&nbsp;in 1980, a&nbsp;Marconi International&nbsp;fellowship in 1981, and the Smithsonian Award from&nbsp;<em>Computerworld</em>&nbsp;in 1997. In 2001, <em>Newsweek</em> named him “one of the nation’s 10 top innovators in education.”</p>
<p>“Papert made everyone around him smarter — from children to colleagues — by encouraging people to focus on the big picture and zero in on the powerful ideas,” says CSAIL’s Patrick Winston, who took over as director of the AI Lab in 1972.</p>
<p>In addition to “Mindstorms,” Papert was the author of “The Children’s Machine” (1993) and “The Connected Family: Bridging the Digital Generation Gap” (1996). As an emeritus professor, Papert continued to write many articles and advise governments around the world on technology-based education. In 2006, while in Vietnam for a conference on mathematics education, he suffered a serious brain injury when struck by a motor scooter in Hanoi.</p>
<p>Papert is survived by his wife of 24 years, Suzanne Massie, a Russia scholar with whom he collaborated on the Learning Barn and many international projects; his daughter, Artemis Papert; three stepchildren, Robert Massie IV, Susanna Massie Thomas, and Elizabeth Massie; and two siblings, Alan Papert and Joan Papert. He was previously married to Dona Strauss, Androula Christofides Henriques, and Sherry Turkle.</p>
<p>The Media Lab will host a celebration of the life and work of Seymour Papert in the coming months.</p>
Seymour Papert was a world-renowned visionary in education and a founding faculty member of the MIT Media Lab. Photo: Bob Kramer

Forbidden research
http://news.mit.edu/2016/forbidden-research-media-lab-0725
MIT Media Lab holds symposium on challenging norms to create positive change.
Mon, 25 Jul 2016 17:25:01 -0400 | MIT Media Lab
<p>The man who has called his leaking of secret government documents an act of “civil disobedience” has taken part in an MIT Media Lab symposium called <a href="http://www.media.mit.edu/events/forbidden/overview" target="_blank">Forbidden Research</a>, about disobedience for social good.</p>
<p>Former National Security Agency (NSA) contractor turned whistleblower and current board member of the Freedom of the Press Foundation <a href="https://freedom.press/about/board/edward-snowden" target="_blank">Edward Snowden</a> appeared via video at the July 21 event. He has been living outside the U.S. since 2013, when he leaked highly classified NSA documents detailing government surveillance programs. The U.S. Department of Justice charged him with theft and espionage.</p>
<p>Snowden and author/hardware hacker <a href="https://www.bunniestudios.com/" target="_blank">Andrew “bunnie” Huang</a> led the symposium’s opening session, Against the Law: Countering Lawful Abuses of Digital Surveillance. “Law is no substitute for conscience,” Snowden said. “This is not to say laws are bad, this is not to say we don’t want new rules. But there are better guarantees, which can provide greater enforcement of individual rights. Lawful abuse is not a contradiction.”</p>
<p>Just before Snowden spoke, he and Huang published a joint <a href="https://www.pubpub.org/pub/direct-radio-introspection" target="_blank">paper</a> on their work developing technology to help journalists protect their smartphones from being traced and compromised through their radio hardware. The “introspection engine” would monitor a device’s radio activity using a measurement tool in a phone-mounted battery case. Huang, a plaintiff in a First Amendment <a href="https://www.eff.org/files/2016/07/21/1201_complaint.pdf" target="_blank">lawsuit</a> filed the morning of the event, said the basic challenge of the project was how to turn a smartphone into a cyber fortress. “Think of it as a designated driver for the phone. Journalists shouldn’t have to be cryptographers to be safe. I want them to be able to go and meet with their sources, leave the fortress if they have to, and not have to worry about their phones being a vector for their capture.”</p>
<p><strong>Challenges in change</strong></p>
<p>Technology as a tool for change continued in the symposium’s next two sessions, called Messing with Nature. Part 1 focused on genetics at a time when the relatively new gene editing system CRISPR is recasting research in radical ways. The panelists, including Harvard University genetics professor <a href="http://arep.med.harvard.edu/gmc/" target="_blank">George Church</a> and the Media Lab’s <a href="https://www.media.mit.edu/people/esvelt" target="_blank">Kevin Esvelt</a>, targeted complex questions: How do we continue to advance genetic engineering while making the field more open for the sake of safety, ethics, and cautionary vigilance? Who should be responsible for making “god-like” decisions that will ultimately affect our entire future as a society? Part 2 addressed whether the potential for success in using technology to intervene in climate change is “messing” with nature. For instance, researchers are working on technologies for reflecting solar radiation back into space, but what would happen if we deploy them? Who should decide?</p>
<p>Pivoting from science, the next session turned to Islam, women's rights, and global security. Panelist <a href="http://directorsfellows.media.mit.edu/fellow-profiles/alaa-murabit/" target="_blank">Alaa Murabit</a>, a physician, United Nations Sustainable Development Goals Advocate, and founder of The Voice of Libyan Women, said that while the subjects may not appear to be connected to the event’s "forbidden research" billing, there is a link. “Look at genetic engineering and the other things we’ve been talking about, which occur in our daily lives — reproductive rights, or global peace and security — a lot of the challenges are either rooted in the perception of religion or the political manipulation of religion. And no matter how much we research, putting it into practice and policy becomes quite difficult.”</p>
<p><strong>Rule-breaking research</strong></p>
<p>Levels of difficulty were evident in the next session, which explored whether technology can protect children from sexual deviance. Conducting research in this field is virtually impossible due to ethical and legal restrictions. At the same time, though, a better understanding of deviant behavior has the potential to change lives for the good.</p>
<p>The next session took up the challenges of rule-breaking research. Among them: how to live up to the ideals of academic inquiry in the face of business constraints, legal barriers, and ethical concerns. As the conference co-host <a href="https://civic.mit.edu/users/ethanz" target="_blank">Ethan Zuckerman</a>, who directs MIT’s Center for Civic Media, said earlier in the day, “This is about people whose innovations find them pushing the limits — and bumping up against different issues preventing people from answering questions that are deeply important to ask.”</p>
<p>Ethical principles were also a factor in the following session, which homed in on the hacking culture at MIT, not only sharing its funny and impressive moments but also highlighting its value in the education of scientists and engineers. "You could say a hack, the noun, is the creative solution to a problem. The verb, hacking, means investigating a problem for its own sake," said presenter <a href="http://www.mpe.mpg.de/~lizinvt/" target="_blank">Liz George</a> '08, who's now a postdoc at the Max Planck Institute for Extraterrestrial Physics. Hacking, she said, is a great piece of engineering that engages the community and abides by a strong code of ethics, even on a project where you're breaking rules. "And the core motivation is to do something really interesting, something that is clandestine, something that involves a lot of ingenuity and wit."</p>
<p><strong>Rewarding disobedience</strong></p>
<p>The motivation and disobedience that drive hacks can also be a source for good, but it's a "tricky balance," said co-host and Media Lab Director <a href="https://joi.ito.com/" target="_blank">Joi Ito</a> in the closing session of the conference. "It's like in scuba diving — where the air/water barrier, or the getting in and getting out of water, is really hard but being in the water is pretty nice. Inside MIT, we have a lot of trust and can create a lot of interesting things, but when we start to hit the outside world and what we're allowed to do — that's where there isn't much tolerance and room."</p>
<p>And that's why Ito announced at the Forbidden Research symposium that the Media Lab has created a $250,000 Media Lab Disobedience Award&nbsp;funded by LinkedIn co-founder Reid Hoffman. “The prize will go to a person or group engaged in what we believe is excellent disobedience for the benefit of society,” Ito wrote in his <a href="https://medium.com/mit-media-lab/rewarding-disobedience-ae194d9f0785#.2qfdoqjo5" target="_blank">blog post</a>. “The disobedience that we would like to call out is the kind that seeks to change society in a positive way, and is consistent with a set of key principles. The principles include non-violence, creativity, courage, and taking responsibility for one’s actions. The disobedience can be in — but is not limited to — the fields of scientific research, civil rights, freedom of speech, human rights, and the freedom to innovate.”</p>
The MIT Media Lab held a symposium on July 21 to explore the “fuzzy line” between disobedience that helps society and disobedience that doesn't. Andrew “bunnie” Huang (left) and Edward Snowden presented their technology to turn journalists’ smartphones into “cyber fortresses.” Photo: Mim Adkins

2016 MIT Media Lab Director’s Fellows announced
http://news.mit.edu/2016/mit-media-lab-director%E2%80%99s-fellows-0722
Program blurs the lines between academia and the world.
Fri, 22 Jul 2016 09:00:00 -0400 | MIT Media Lab
<p>Since the MIT Media Lab launched its <a href="http://directorsfellows.media.mit.edu/" target="_blank">Director's Fellows initiative</a> in 2013, the program has lived up to its mission to attract people with “less-than-traditional backgrounds” outside academia.</p>
<p>This year’s new cohort spans the spectrum: from a world-famous Japanese soccer star&nbsp;to a dancer/choreographer&nbsp;to the creator of Kickstarter; from a documentary maker&nbsp;to a flavor scientist&nbsp;to a women’s rights activist;&nbsp;from a traditional textile maker&nbsp;to a playwright/director&nbsp;to a fashion designer.</p>
<p>As part of their two-year honorary affiliation with the Media Lab, Director’s Fellows connect with faculty and students, both onsite and offsite.&nbsp;Together, fellows pursue&nbsp;<a href="http://directorsfellows.media.mit.edu/projects/" target="_blank">collaborative projects</a>&nbsp;that both deploy Media Lab innovation to tackle real-world problems and connect with real-world opportunities to expand learning and creativity. And they’ll become part of what Media Lab Director <a href="http://www.media.mit.edu/people/joi" target="_blank">Joi Ito</a> envisages as “hybrid popup Media Lab communities around the world.” Ito selects the fellows, and says he foresees their collaborations “as helping us deliver on our goal of increasing diversity and promoting deeper cultural exchange in the context of our ongoing focus on technology, innovation, and creation.”</p>
<div class="cms-placeholder-content-video"></div>
<p>“The program has evolved from its original experimental mindset to an integration and outcome-focused approach, which has allowed us to grow organically within the lab's community,” says Claudia Robaina, who coordinates the Director’s Fellows program. “The quantity and quality of projects — proposed and developed by our fellows in partnership with lab faculty and researchers — continue to expand in both reach and variety.”</p>
<p>The 2016 fellows form an impressive group, spanning a wide range of fields and regions:</p>
<p><a href="http://www.armitagegonedance.org/karole-armitage/biography" target="_blank">Karole Armitage</a>, artistic director of the New York-based Armitage Gone! Dance Company, is renowned for pushing the boundaries to create contemporary works that blend dance, music, science, and art to engage in philosophical questions about the search for meaning.</p>
<p><a href="http://www.perrychenstudio.com/info/" target="_blank">Perry Chen</a> is a New York City-based artist and creator of the funding platform Kickstarter, where he was CEO through 2013. He was a TED Fellow in 2010, and named to <em>Time </em>magazine’s 2013 list of the 100 most influential people in the world. Chen has exhibited his artwork in New York, Berlin, and Mexico City.</p>
<p><a href="http://www.sheilahayman.com" target="_blank">Sheila Hayman</a> is a British-born BAFTA-winning producer/director of shows for the BBC, Channel 4 UK, and others, who now shoots and edits her own films. Recently, Hayman has moved into digital media and music, with apps, digital projects, and interactive performances for organizations across the world.</p>
<p><a href="http://member.keisuke-honda.com/" target="_blank">Keisuke Honda</a> is a professional soccer player, entrepreneur, and educator. He is active as an angel investor and has also launched a soccer-focused company, Honda Estilo, in Japan. As an educator, he believes in providing opportunities for “local production for local consumption” or independence. In 2016, Honda was named a U.N. Foundation Global Advocate for Youth.</p>
<p><a href="http://www.hosoo-kyoto.com/" target="_blank">Masataka Hosoo</a>, innovator and brand director of HOSOO, a Kyoto-based traditional kimono textile maker, is bringing Nishijin-ori weaving techniques and textiles to the forefront of the design and fashion scenes worldwide. He introduces elements of modern technology and an innovative business acumen to this traditional craft.</p>
<p><a href="http://microbialfoods.org/profile-arielle-johnson-head-research-mad/" target="_blank">Arielle Johnson</a> is a flavor scientist and head of research at the food think tank, MAD. She’s also the resident scientist at restaurant noma in Copenhagen. Johnson collaborates with chefs, designers, engineers, and other scientists to dismantle the boundary between the kitchen and the laboratory.</p>
<p><a href="http://alaamurabit.com/about/" target="_blank">Alaa Murabit</a> is a physician, founder of The Voice of Libyan Women, and a U.N. Global Sustainable Development Goals Advocate. Focused on challenging societal and cultural norms, and utilizing traditional and historical role models, Murabit champions women’s participation in peace processes and conflict mediation.</p>
<p><a href="http://www.madelinesayet.com/" target="_blank">Madeline Sayet</a> is a playwright, director, performer, writer, and educator based in New York and New England. A member of the Mohegan Nation, she explores cultural intersections through original theater productions in an effort to highlight indigenous perspectives. Her work explores the complexities of race, culture, and media.</p>
<p><a href="http://www.celinecelines.com/" target="_blank">Céline Semaan Vernon</a> is a Lebanese-Canadian designer, activist, teacher, and entrepreneur who founded the New York-based fashion firm, Slow Factory. Her background is in art, technology, and information design, and her mission is centered on responsible design, human rights, and open data.</p>
<p>The program’s 30 fellows to date, who’ve come to the lab from across the globe, include prison activist Shaka Senghor, comedian Baratunde Thurston, bionic pop artist Viktoria Modesta, Buddhist monk Tenzin Priyadarshi, social entrepreneur Khalida Brohi, and film director and producer J.J. Abrams. Past collaborations have included the&nbsp;<a href="http://directorsfellows.media.mit.edu/projects/prosthetic-fitting-nairobi/" target="_blank">production</a>&nbsp;of 3-D design software and robotic measurement tools in Kenya,&nbsp;<a href="http://directorsfellows.media.mit.edu/projects/atonement-project/" target="_blank">The Atonement Project</a>&nbsp;for victims and perpetrators of crime, and&nbsp;<a href="http://directorsfellows.media.mit.edu/projects/microculture-a-sxsw-game-about-synthetic-biology-and-community-building/" target="_blank">Microculture</a>, an interactive game that combines synthetic biology and community building. 2014 fellow Katy Croff Bell’s collaboration has led to lab students joining her this month on the Exploration Vessel&nbsp;<a href="http://www.nautiluslive.org/" target="_blank">Nautilus</a>&nbsp;to test new ideas, including the feasibility of an origami-inspired folding structure for marine habitat restoration.</p>
<p>The 2016 Director’s Fellows officially joined the MIT Media Lab on July 20, 2016.</p>
Top row: (l-r) Karole Armitage, Perry Chen, Sheila Hayman. Middle row: (l-r) Keisuke Honda, Masataka Hosoo, Arielle Johnson. Bottom row: (l-r) Alaa Murabit, Madeline Sayet, Céline Semaan Vernon. Photos: MIT Media Lab

Seeing RNA at the nanoscale
http://news.mit.edu/2016/rna-nanoscale-brain-0704
Microscopy technique allows scientists to pinpoint RNA molecules in the brain.
Mon, 04 Jul 2016 11:00:02 -0400 | Anne Trafton | MIT News Office
<p>Cells contain thousands of messenger RNA molecules, which carry copies of DNA’s genetic instructions to the rest of the cell. MIT engineers have now developed a way to visualize these molecules in higher resolution than previously possible in intact tissues, allowing researchers to precisely map the location of RNA throughout cells.</p>
<p>Key to the new technique is expanding the tissue before imaging it. By making the sample physically larger, it can be imaged with very high resolution using ordinary microscopes commonly found in research labs.</p>
<p>“Now we can image RNA with great spatial precision, thanks to the expansion process, and we also can do it more easily in large intact tissues,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, a member of MIT’s Media Lab and McGovern Institute for Brain Research, and the senior author of a paper describing the technique in the July 4 issue of <em>Nature Methods</em>.</p>
<div class="cms-placeholder-content-video"></div>
<p>Studying the distribution of RNA inside cells could help scientists learn more about how cells control their gene expression and could also allow them to investigate diseases thought to be caused by failure of RNA to move to the correct location.</p>
<p>Boyden and colleagues <a href="https://news.mit.edu/2015/enlarged-brain-samples-easier-to-image-0115">first described the underlying technique</a>, known as expansion microscopy (ExM), last year, when they used it to image proteins inside large samples of brain tissue. In a paper appearing in <em>Nature Biotechnology </em>on July 4, the MIT team has now presented a new version of the technology that employs off-the-shelf chemicals, making it easier for researchers to use.</p>
<p>MIT graduate students Fei Chen and Asmamaw Wassie are the lead authors of the <em>Nature Methods</em> paper, and Chen and graduate student Paul Tillberg are the lead authors of the <em>Nature Biotechnology</em> paper.</p>
<p><strong>A simpler process</strong></p>
<p>The original expansion microscopy technique is based on embedding tissue samples in a polymer that swells when water is added. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.</p>
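<p>The arithmetic behind that figure is straightforward: physically expanding the sample spreads features apart, so the microscope's fixed diffraction limit corresponds to proportionally finer detail in the original tissue. A back-of-the-envelope sketch in Python, assuming a roughly 300 nm diffraction limit for a conventional light microscope and the roughly 4.5x linear expansion reported for the original ExM work (both values approximate):</p>

```python
def effective_resolution_nm(diffraction_limit_nm: float,
                            expansion_factor: float) -> float:
    """Effective resolution in the original (pre-expansion) tissue.

    Two features separated by d in the tissue sit expansion_factor * d
    apart after expansion, so the optical limit divides through.
    """
    return diffraction_limit_nm / expansion_factor

# ~300 nm conventional limit, ~4.5x linear expansion
res = effective_resolution_nm(300, 4.5)
print(f"{res:.0f} nm")  # roughly 67 nm, consistent with the ~70 nm cited
```

<p>The same division explains why no exotic optics are needed: the gain comes entirely from the sample, not the microscope.</p>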
<p>However, that method posed some challenges because it required generating a complicated chemical tag consisting of an antibody that targets a specific protein, linked to both a fluorescent dye and a chemical anchor that attaches the whole complex to a highly absorbent polymer known as polyacrylate. Once the targets are labeled, the researchers break down the proteins that hold the tissue sample together, allowing it to expand uniformly as the polyacrylate gel swells.</p>
<p>In their new studies, to eliminate the need for custom-designed labels, the researchers used a different molecule to anchor the targets to the gel before digestion. This molecule, which the researchers dubbed AcX, is commercially available and therefore makes the process much simpler.</p>
<p>AcX can be modified to anchor either proteins or RNA to the gel. In the <em>Nature Biotechnology </em>study, the researchers used it to anchor proteins, and they also showed that the technique works on tissue that has been previously labeled with either fluorescent antibodies or proteins such as green fluorescent protein (GFP).</p>
<p>“This lets you use completely off-the-shelf parts, which means that it can integrate very easily into existing workflows,” Tillberg says. “We think that it’s going to lower the barrier significantly for people to use the technique compared to the original ExM.”</p>
<p>Using this approach, it takes about an hour to scan a piece of tissue 500 by 500 by 200 microns, using a light sheet fluorescence microscope. The researchers showed that this technique works for many types of tissues, including brain, pancreas, lung, and spleen.</p>
<p><strong>Imaging RNA</strong></p>
<p>In the <em>Nature Methods</em> paper, the researchers used the same kind of anchoring molecule but modified it to target RNA instead. All of the RNAs in the sample are anchored to the gel, so they stay in their original locations throughout the digestion and expansion process.</p>
<p>After the tissue is expanded, the researchers label specific RNA molecules using a process known as fluorescence in situ hybridization (FISH), which was originally developed in the early 1980s and is widely used. This allows researchers to visualize the location of specific RNA molecules at high resolution, in three dimensions, in large tissue samples.</p>
<p>This enhanced spatial precision could allow scientists to explore many questions about how RNA contributes to cellular function. For example, a longstanding question in neuroscience is how neurons rapidly change the strength of their connections to store new memories or skills. One hypothesis is that RNA molecules encoding proteins necessary for plasticity are stored in cell compartments close to the synapses, poised to be translated into proteins when needed.</p>
<p>With the new system, it should be possible to determine exactly which RNA molecules are located near the synapses, waiting to be translated.</p>
<p>“People have found hundreds of these locally translated RNAs, but it’s hard to know where exactly they are and what they’re doing,” Chen says. “This technique would be useful to study that.”</p>
<p>Boyden’s lab is also interested in using this technology to trace the connections between neurons and to classify different subtypes of neurons based on which genes they are expressing.</p>
<p>Paola Arlotta, a professor of stem cell and regenerative biology at Harvard University who was not involved in the research, describes the new technology as potentially revolutionary.</p>
<p>“In complex tissues like the brain or tumors, there are so many different cell types. It’s hard to distinguish one type of cell from the next, or to tell where certain molecules of RNA would be expressed,” Arlotta says. “This technology is very enabling for a lot of biology that we’ve been waiting to do.”</p>
<p>The research was funded by the Open Philanthropy Project, the New York Stem Cell Foundation Robertson Award, the National Institutes of Health, the National Science Foundation, and Jeremy and Joyce Wertheimer.</p>
MIT researchers have developed a new way to image proteins and RNA inside neurons of intact brain tissue. Image: Yosuke Bando, Fei Chen, Dawen Cai, Ed Boyden, and Young GyuResearch, RNA, Neuroscience, Brain and cognitive sciences, Biological engineering, McGovern Institute, Media Lab, School of Engineering, School of Science, School of Architecture and Planning, Nanoscience and nanotechnologyMusic of the (data) sphereshttp://news.mit.edu/2016/media-lab-quantizer-particle-collisions-science-art-0628
A new MIT project taps into particle collisions to generate music and forge harmony between science and art.Tue, 28 Jun 2016 18:00:01 -0400Margaret Evans | MIT Media Labhttp://news.mit.edu/2016/media-lab-quantizer-particle-collisions-science-art-0628<p>The concept of turning data into music has been around for centuries. Ancient philosophers believed that proportions and patterns in the movements of the sun, moon, and planets created a celestial “Musica Universalis.” But that “music of the spheres,” as it’s known, was metaphorical. Now MIT researchers have invented a new platform, called <a href="http://quantizer.media.mit.edu/" target="_blank">Quantizer</a>, which live-streams music driven by real-time data from research that explores the very beginnings of our universe.</p>
<p>The <a href="http://atlas.cern/discover/about" target="_blank">ATLAS experiment</a> has granted MIT Media Lab researchers unique access to a live feed of particle collisions detected at the Large Hadron Collider (LHC), the world’s largest particle accelerator, at CERN in Switzerland. A subset of the collision event data is routed to Quantizer, an MIT-built sonification engine that catches and converts the incoming raw data into sounds, notes, and rhythms in real time.</p>
<div class="cms-placeholder-content-video"></div>
<p>“You can imagine that when these protons collide almost at light speed in the LHC’s massive tunnel, the detector ‘lights up’ and can help us understand what particles were created from the initial collisions,” says Juliana Cherston, a master’s student in the <a href="http://www.media.mit.edu/research/groups/responsive-environments" target="_blank">Responsive Environments group</a> at the MIT Media Lab. Her group director and adviser is <a href="http://www.media.mit.edu/people/joep" target="_blank">Joe Paradiso</a>, who worked at CERN on early LHC detector designs and has long been interested in mapping physics data to music. Cherston conceived Quantizer and designed it in collaboration with Ewan Hill, a doctoral student from the University of Victoria in Canada.</p>
<p>Cherston says that “part of the platform’s job is to intelligently select which data to sonify — for example, by streaming only from the regions with highest detected energy.” She explains that Quantizer gets real-time information, such as energy and momentum, from different detector subsystems. “The data we use in our software relates, for example, to geometric information about where the energies are found and not found, as well as the trajectories of particles that are built up in the ATLAS detector’s innermost layer.”</p>
<p>Tapping into the geometry and energy of the collision event properties, the sonification engine scales and shifts the data to make sure the output is audible. In its default mode, Quantizer assigns that output to different musical scales, such as chromatic and pentatonic, and composers can build music or <a href="http://lucerne.media.mit.edu:3011/" target="_blank">soundscapes</a>. They can also map the data to sonic parameters that are not tied to specific musical scales or other conventions, allowing a freer audio interpretation. Anyone can <a href="http://quantizer.media.mit.edu/" target="_blank">listen</a> in near real time via an audio stream on the platform’s website. So far, the site includes music streams, which change according to the data feed, in three mappings related to genres that the creators term cosmic, suitar samba, and house. The team soon plans to add other compositions that map the data into pieces that could sound quite different. In principle, Quantizer behaves much like a reimagined radio in which people can “tune in” to ATLAS data interpreted via the aesthetics of different composers.</p>
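<p>As a rough illustration of this kind of scaling and mapping, the sketch below turns a detected energy into a note on a pentatonic scale. Everything here — the parameter names, the energy range, and the choice of scale — is an assumption for illustration, not Quantizer’s actual code.</p>

```python
# Illustrative sketch of a data-to-music mapping in the spirit of Quantizer.
# Parameter names, ranges, and the scale choice are assumptions, not project code.
import math

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

def energy_to_midi(energy_gev, e_min=1.0, e_max=1000.0, base_note=48, octaves=4):
    """Map a detected energy onto a pentatonic MIDI note.

    Energies span orders of magnitude, so log(energy) is scaled linearly
    onto the available scale degrees, then snapped to the pentatonic set.
    """
    e = min(max(energy_gev, e_min), e_max)  # clamp to the expected range
    t = (math.log10(e) - math.log10(e_min)) / (math.log10(e_max) - math.log10(e_min))
    degree = round(t * (octaves * len(PENTATONIC) - 1))
    octave, step = divmod(degree, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[step]

# The lowest energy lands on the base note; the highest near the top octave.
print(energy_to_midi(1.0))     # → 48
print(energy_to_midi(1000.0))  # → 93
```

<p>Mapping the logarithm of the energy, rather than the energy itself, keeps rare high-energy deposits from crowding all other events into the bottom of the scale.</p>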
<p><strong>Quantizing the composer’s vision</strong></p>
<p>Evan Lynch '16 MNG '16, a recent MIT graduate in physics and computer science, composed the cosmic stream on the site. A cellist who minored in music, Lynch collaborated on the project with Cherston at the Media Lab. “I imagined what an LHC collision is like and how the particles go through the different layers of the detector and came up with a general vision of the sounds I wanted to represent that with. Then I worked with Juliana to pipeline the data that would drive that vision so that the specifics of the sound would be based on underlying reality. It is an artistic representation, but it’s based on the real thing.”</p>
<p>“The sonification engine allows composers to select the most interesting data to sonify,” says Cherston, “and provides both default tools for producing more structured audio as well as the option for composers to build their own synthesizers compatible with the processed data streams.” Composers can control some parameters, such as pitch, timbre, audio duration, and beat structures in line with their musical vision, but the music is essentially data-driven.</p>
<p>Quantizer is in its nascent stage, and Cherston says a crucial aspect of the project moving forward is working closely with composers with whom she shares Quantizer’s code base. Cherston says “there’s a lot of back-and-forth feedback that informs modifications and new features we add to the platform. My long-term goal would be to simplify and move some of the musical mapping controls to a website for broader access.”</p>
<p><strong>Music unravels physics</strong></p>
<p>Cherston, who worked on the ATLAS experiment when she was a Harvard University undergraduate, says that public outreach is the key reason its officials have granted MIT access to its experiment data for Quantizer. It’s not simply about turning data into music; it’s also about offering a novel way for everyone to connect with the data and experience its characteristics, Cherston says. “It shakes the mind to imagine that you could create aesthetically compelling audio from the same data that allows us to understand some of the most fundamental processes that take place in our universe.”</p>
<p>Paradiso, a former high-energy physicist and longtime musician, agrees. He says that Quantizer shows “how large sources of data emerging all over the world will become a ‘canvas’ for new art. Quantizer lets people connect to the ATLAS experiment in a totally different way. What we hope for is that people will hear and enjoy this; that they’ll relate to physics in a different way, then start to explore the science.”</p>
<p>Though musical interpretations of physics data have been made before, to the Quantizer team’s knowledge, this is the first platform that runs in real time and implements a general framework that can be used to support different compositions. The Quantizer site attracted close to 30,000&nbsp;visitors in its first month online, after Cherston and her collaborators released their <a href="http://dl.acm.org/citation.cfm?id=2892295&amp;CFID=794630848&amp;CFTOKEN=52259421" target="_blank">paper</a> at the 2016 Conference on Human Factors in Computing Systems (CHI). They’ll next present the project in July at the annual New Interfaces for Musical Expression (NIME) global gathering.</p>
A collision event display, generated by the ATLAS experimentImage: ATLAS-CERNMedia Lab, Particles, Physics, Music, Arts, Technology and society, Research, School of Architecture and PlanningDriverless cars: Who gets protected?http://news.mit.edu/2016/driverless-cars-safety-issues-0623
Study shows inconsistent public opinion on safety of driverless cars.
Thu, 23 Jun 2016 14:00:00 -0400Peter Dizikes | MIT News Officehttp://news.mit.edu/2016/driverless-cars-safety-issues-0623<p>Driverless cars pose a quandary when it comes to safety. These autonomous vehicles are programmed with a set of safety rules, and it is not hard to construct a scenario in which those rules come into conflict with each other. Suppose a driverless car must either hit a pedestrian or swerve in such a way that it crashes and harms its passengers. What should it be instructed to do?</p>
<p>A newly published study co-authored by an MIT professor shows that the public is conflicted over such scenarios, taking a notably inconsistent approach to the safety of autonomous vehicles, should they become a reality on the roads.</p>
<p>In a series of surveys taken last year, the researchers found that people generally take a utilitarian approach to safety ethics: They would prefer autonomous vehicles to minimize casualties in situations of extreme danger. That would mean, say, having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians. At the same time, the survey’s respondents said, they would be much less likely to use a vehicle programmed that way.</p>
<p>Essentially, people want driverless cars that are as pedestrian-friendly as possible — except for the vehicles they would be riding in.</p>
<p>“Most people want to live in a world where cars will minimize casualties,” says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. “But everybody wants their own car to protect them at all costs.”</p>
<p>The result is what the researchers call a “social dilemma,” in which people could end up making conditions less safe for everyone by acting in their own self-interest.</p>
<p>“If everybody does that, then we would end up in a tragedy … whereby the cars will not minimize casualties,” Rahwan adds.</p>
<p>Or, as the researchers write in the new paper, “For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest.”</p>
<p>The paper, “The social dilemma of autonomous vehicles,” is being published today in the journal <em>Science</em>. The authors are Jean-Francois Bonnefon of the Toulouse School of Economics; Azim Shariff, an assistant professor of psychology at the University of Oregon; and Rahwan, the AT&amp;T Career Development Professor and an associate professor of media arts and sciences at the MIT Media Lab.</p>
<p><strong>Survey says</strong></p>
<p>The researchers conducted six surveys, using the online Mechanical Turk public-opinion tool, between June 2015 and November 2015.</p>
<p>The results consistently showed that people will take a utilitarian approach to the ethics of autonomous vehicles, one emphasizing the sheer number of lives that could be saved. For instance, 76 percent of respondents believe it is more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians.</p>
<p>But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. One question asked respondents to rate the morality of an autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians; the rating dropped by a third when respondents considered the possibility of riding in such a car.</p>
<p>Similarly, people were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. In the survey, respondents said they were only one-third as likely to purchase a vehicle regulated this way, as opposed to an unregulated vehicle, which could presumably be programmed in any fashion.&nbsp;</p>
<p>“This is a challenge that should be on the mind of carmakers and regulators alike,” the scholars write. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation “may paradoxically increase casualties by postponing the adoption of a safer technology.”</p>
<p><strong>Empirically informed</strong></p>
<p>The aggregate performance of autonomous vehicles on a mass scale is, of course, yet to be determined. For now, ethicists say the survey offers interesting and novel data in an area of emerging moral interest.</p>
<p>“I think the authors are definitely correct to describe this as a social dilemma,” says Joshua Greene, a professor of psychology at Harvard University, who has written a commentary on the research for <em>Science</em>, noting, “The critical feature of a social dilemma is a tension between self-interest and collective interest.” Greene adds that the researchers “clearly show that people have a deep ambivalence about this question.”</p>
<p>The researchers, for their part, acknowledge that public-opinion polling on this issue is at a very early stage, which means any current findings “are not guaranteed to persist,” as they write in the paper, if the landscape of driverless cars evolves.</p>
<p>Still, concludes Rahwan, “I think it was important to not just have a theoretical discussion of this, but to actually have an empirically informed discussion.”</p>
“Most people want to live in a world where cars will minimize casualties,” says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. “But everybody wants their own car to protect them at all costs.”
Courtesy of the researchersMedia Lab, Ethics, Automobiles, Autonomous vehicles, Research, Transportation, Technology and society, School of Architecture and PlanningNeed hair? Press “print”http://news.mit.edu/2016/3-d-print-hair-0617
With fur, brushes, and bristles, Media Lab’s technique opens new frontier in 3-D printing.Thu, 16 Jun 2016 23:59:59 -0400Jennifer Chu | MIT News Officehttp://news.mit.edu/2016/3-d-print-hair-0617<p>These days, it may seem as if 3-D printers can spit out just about anything, from a full-sized sports car, to edible food, to human skin. But some things have defied the technology, including hair, fur, and other dense arrays of extremely fine features, which require a huge amount of computational time and power to first design, then print.</p>
<p>Now researchers in MIT’s Media Lab have found a way to bypass a major design step in 3-D printing, to quickly and efficiently model and print thousands of hair-like structures. Instead of using conventional computer-aided design (CAD) software to draw thousands of individual hairs on a computer — a step that would take hours to compute — the team built a new software platform, called “Cilllia,” that lets users define the angle, thickness, density, and height of thousands of hairs, in just a few minutes.</p>
<p>Using the new software, the researchers designed arrays of hair-like structures with a resolution of 50 microns — about the width of a human hair. Playing with various dimensions, they designed and then printed arrays ranging from coarse bristles to fine fur, onto flat and also curved surfaces, using a conventional 3-D printer. They presented a paper detailing the results at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems in May.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/cilllia_gif06.gif" style="width: 560px; height: 315px;" /></p>
<p><span style="font-size:10px;">The researchers attached the 3-D printed hairs to a ring. (Courtesy of the researchers)</span></p>
<p>Could the technology be used to print wigs and hair extensions? Possibly, say the researchers. But that’s not their end goal. Instead, they’re seeing how 3-D-printed hair could perform useful tasks such as sensing, adhesion, and actuation.</p>
<p>To demonstrate adhesion, the team printed arrays that act as Velcro-like bristle pads. Depending on the angle of the bristles, the pads can stick to each other with varying forces. For sensing, the researchers printed a small furry rabbit figure, equipped with LED lights that light up when a person strokes the rabbit in certain directions.</p>
<p>And to see whether 3-D-printed hair can help actuate, or move objects, the team fabricated a weight-sorting table made from panels of printed hair with specified angles and heights. As a small vibration source shook the panels, the hairs were able to move coins across the table, sorting them based on the coins’ weight and the vibration frequency.</p>
<p>Jifei Ou, a graduate student in media arts and sciences, says the work is inspired by hair-like structures in nature, which provide benefits such as warmth, in the case of human hair, and movement, in the case of cilia, which help remove dust from the lungs.</p>
<p>“It’s very inspiring to see how these structures occur in nature and how they can achieve different functions,” Ou says. “We’re just trying to think how can we fully utilize the potential of 3-D printing, and create new functional materials whose properties are easily tunable and controllable.”</p>
<p>Ou is lead author on the paper, which also includes graduate students Gershon Dublon and Chin-Yi Cheng; Felix Heibeck, a former research assistant; Hiroshi Ishii, the Jerome B. Wiesner Professor in media arts and sciences; and Karl Willis of Addimation, Inc.</p>
<p><strong>A software challenge</strong></p>
<p>The resolution of today’s 3-D printers is “already pretty high,” Ou says. “But we’re not using [3-D printing] to the best of its capabilities.”</p>
<p>The team looked for things to print that would test the technology’s limits. Hair, as it turns out, was the perfect subject.&nbsp;</p>
<p>“[Hair] comes with a challenge that is not on the hardware, but on the software side,” Ou says.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/cilllia_gif05.gif" style="width: 560px; height: 315px;" /></p>
<p><span style="font-size:10px;">The 3-D printed hairs act like Velcro. (Courtesy of the researchers)</span></p>
<p>To 3-D-print hair using existing software, designers would have to model hair in CAD, drawing out each individual strand, then feed the drawing through a slicer program that represents each hair’s contour as a mesh of tiny triangles. The program would then create horizontal cross sections of the triangle mesh, and translate each cross section into pixels, or a bitmap, that a printer could then print out, layer by layer.</p>
<p>Ou says that designing a stamp-sized array of 6,000 hairs this way would take several hours of processing.</p>
<p>“If you were to load this file into a normal slicing program, it would crash the program,” he says.</p>
<p><strong>Hair pixels</strong></p>
<p>To design hair, the researchers chose to do away with CAD modeling entirely. Instead, they built a new software platform to model first a single hair and then an array of hairs, and finally to print arrays on both flat and curved surfaces.</p>
<p>The researchers modeled a single hair by representing an elongated cone as a stack of fewer and fewer pixels, from the base to the top. To change the hair’s dimensions, such as its height, angle, and width, they simply changed the arrangement of pixels in the cone.</p>
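<p>The cone-of-pixels idea can be sketched in a few lines: each printed layer is a disc of pixels whose radius shrinks toward the tip, and shifting successive layers sideways tilts the hair. The linear taper and the specific numbers below are simplifying assumptions for illustration, not the team’s exact model.</p>

```python
# Sketch of a hair modeled as a stack of shrinking pixel discs.
# The linear taper and layer-shift convention are illustrative assumptions.

def cone_layer_radii(base_radius_px, num_layers):
    """Pixel radius of each layer, from the base up toward the tip."""
    return [round(base_radius_px * (1 - k / num_layers)) for k in range(num_layers)]

def tilted_layer_offsets(num_layers, shift_px_per_layer):
    """Horizontal pixel offset of each layer; shifting layers tilts the hair."""
    return [k * shift_px_per_layer for k in range(num_layers)]

print(cone_layer_radii(4, 4))      # → [4, 3, 2, 1]
print(tilted_layer_offsets(4, 2))  # → [0, 2, 4, 6]
```

<p>Changing the hair’s height, width, or angle then amounts to changing the layer count, base radius, or per-layer shift — a rearrangement of pixels rather than a remodeled CAD object.</p>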
<p>To scale up to thousands of hairs on a flat surface, Ou and his team developed a color-mapping technique using Photoshop. They used three colors — red, green, and blue — to represent three hair parameters — height, width, and angle. For example, to make a circular patch of hair with taller strands around the rim, they drew a red circle and changed the color gradient in such a way that darker hues of red appeared around the circle’s rim, denoting taller hairs. They then developed an algorithm to quickly translate the color map into a model of a hair array, which they fed to a 3-D printer.</p>
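<p>A minimal version of that color map is easy to sketch: read each pixel’s red, green, and blue channels and scale them into a height, width, and angle. The ranges below, and the convention that a brighter channel means a larger value, are illustrative assumptions (the team’s own example mapped darker red to taller hairs).</p>

```python
# Hypothetical sketch of the color-map idea: each pixel's red, green, and
# blue channels encode one hair's height, width, and angle. The ranges and
# channel assignments are assumptions, not the Cilllia team's values.

MAX_HEIGHT_MM = 3.0
MAX_WIDTH_UM = 100.0
MAX_ANGLE_DEG = 60.0

def pixel_to_hair(r, g, b):
    """Translate one 8-bit RGB pixel into a hair's parameters."""
    return {
        "height_mm": (r / 255.0) * MAX_HEIGHT_MM,
        "width_um": (g / 255.0) * MAX_WIDTH_UM,
        "angle_deg": (b / 255.0) * MAX_ANGLE_DEG,
    }

def image_to_hair_array(pixels):
    """Convert a 2-D grid of (r, g, b) pixels into a grid of hair specs."""
    return [[pixel_to_hair(*px) for px in row] for row in pixels]

# A 2x2 patch: bright-red pixels give tall hairs, dim ones short hairs.
patch = [[(255, 128, 0), (64, 128, 0)],
         [(255, 128, 0), (64, 128, 0)]]
hairs = image_to_hair_array(patch)
print(hairs[0][0]["height_mm"])  # → 3.0
```

<p>Because the whole array is just an image, painting a gradient in an ordinary image editor redesigns thousands of hairs at once — the efficiency the researchers were after.</p>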
<p>Using these techniques, the team printed pads of Velcro-like bristles, and paintbrushes with varying textures and densities.</p>
<p><img alt="" src="/sites/mit.edu.newsoffice/files/cilllia_gif01.gif" style="width: 560px; height: 315px;" /></p>
<p><span style="font-size:10px;">Vibrations cause a piece of metal to move across the 3-D printed hairs. (Courtesy of the researchers)</span></p>
<p><strong>Fuzzing a drawing</strong></p>
<p>Printing hair on curved surfaces proved trickier. To do this, the team first imported a CAD drawing of a curved surface, such as a small rabbit, then fed the model through a slicing program to generate a triangle mesh of the rabbit shape. They then developed an algorithm to locate the center of each triangle’s base, then virtually drew a line out, perpendicular to the triangle’s base, to represent a single hair. Doing this for every triangle in the mesh created a dense array of hairs running perpendicular to the rabbit’s curved surface.</p>
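<p>The perpendicular-line step reduces to standard vector math: take each triangle’s centroid as the hair’s root and the triangle’s unit normal — the cross product of two edges — as its direction. This sketch shows only that geometric core, not the full Cilllia pipeline.</p>

```python
# Geometric sketch of the curved-surface step: for each mesh triangle,
# place a hair at the triangle's center, pointing along the surface normal.
# Plain vector math only; the actual pipeline is more involved.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def hair_for_triangle(v0, v1, v2, length=1.0):
    """Return (root, tip) of a hair perpendicular to triangle v0-v1-v2."""
    # Root: the triangle's centroid
    root = tuple((v0[i] + v1[i] + v2[i]) / 3.0 for i in range(3))
    # Direction: unit normal from the cross product of two edges
    n = cross(sub(v1, v0), sub(v2, v0))
    mag = sum(c * c for c in n) ** 0.5
    unit = tuple(c / mag for c in n)
    tip = tuple(root[i] + length * unit[i] for i in range(3))
    return root, tip

# A triangle lying flat in the xy-plane grows a hair straight up in z.
root, tip = hair_for_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0), length=2.0)
print(root)  # → (0.333..., 0.333..., 0.0)
print(tip)   # the tip's z-coordinate is 2.0
```

<p>Repeating this for every triangle in the mesh yields the dense array of hairs running perpendicular to the curved surface.</p>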
<p>The researchers then used their color mapping techniques to quickly customize the rabbit hair’s thickness and stiffness.</p>
<p>“With our method, everything becomes smooth and fast,” Ou says. “Previously it was virtually impossible, because who’s going to take a whole day to render a whole furry rabbit, and then take another day to make it printable?”</p>
<p>Among other applications, Ou says 3-D-printed hair may be used in interactive toys. To demonstrate, his team inserted an LED light into the fuzzy printed rabbit, along with a small microphone that senses vibrations. With this setup, the bunny turns green when it is petted in the correct way, and red when it is not.</p>
<p>“The ability to fabricate customized hair-like structures not only expands the library of 3-D-printable shapes, but also enables us to design alternative actuators and sensors,” the authors conclude in their paper. “3-D-printed hair can be used for designing everyday interactive objects.”</p>
<p>Kelly Schaefer, a designer at IDEO, a design consulting firm, says “this type of work expands the possibilities of 3-D printing as an industry because of the new applications it suggests.”</p>
<p>“Perhaps more inspiring than any single output from this team is the idea of rethinking the 3-D printing process itself and the purpose of 3-D printed objects,” says Schaefer, who was not involved in the research. “The Cilllia team has challenged some of the current constraints of 3-D printing processes, which makes me wonder what other constraints can be challenged and potentially eliminated.”</p>
“It’s very inspiring to see how these [hair-like] structures occur in nature and how they can achieve different functions,” says Jifei Ou, a graduate student in media arts and sciences at MIT. “We’re just trying to think how can we fully utilize the potential of 3-D printing, and create new functional materials whose properties are easily tunable and controllable.” Pictured is an example of 3-D printed hair. Courtesy of Tangible Media Group/MIT Media Lab3-D printing, 3-D, Computer modeling, Computer science and technology, Design, Media Lab, Research, Software, School of Architecture and PlanningMixing solids and liquids enhances optical properties of bothhttp://news.mit.edu/2016/mixing-solids-liquids-enhances-optical-properties-0609
New approach can dramatically change the extent to which optical devices scatter light.Thu, 09 Jun 2016 00:00:00 -0400Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/mixing-solids-liquids-enhances-optical-properties-0609<p>By immersing glass particles in a fluid, researchers at MIT’s Media Lab and Harvard University are exploring a new mechanism for modifying an optical device’s diffusivity, or the extent to which it scatters light.</p>
<p>In its current form, the new diffuser could be used to calibrate a wide range of imaging systems, but the researchers believe that their mechanism could ultimately lead to holographic video screens or to tunable optical devices with applications in imaging, sensing, and photography.</p>
<p>In experiments, the solid-liquid mixture demonstrated much more dramatic changes in diffusivity than existing theory would have predicted, so the researchers also developed a new computer model to describe it. That model could help them devise more complex applications for the basic technology.</p>
<p>The researchers describe their new work in the latest issue of the American Chemical Society’s <em>ACS Photonics</em> journal.</p>
<p>The fluid and the glass in the prototype were chosen because they have very similar refractive indices, meaning light travels through them at similar speeds. When light moves from a material with a high refractive index to one with a lower refractive index, it changes direction; this is the phenomenon behind the familiar illusion of a straw’s appearing to bend when it’s inserted into a glass of water.</p>
<p>The researchers’ prototype exploits the fact that changes in temperature alter materials’ refractive indices.</p>
<p>“It’s hard to find a solid and liquid that have exactly the same refractive index at room temperature,” says Barmak Heshmat, a postdoc in the Media Lab’s Camera Culture group and corresponding author on the paper. “But if the speed at which the refractive index changes for solid and liquid is different — which is the case for most solids and liquids — then at a certain temperature they will exactly match, to the last digit. That’s why you see this giant jump in transparency.”</p>
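<p>A back-of-the-envelope model makes that mechanism concrete: give the solid and the liquid each a linear change of index with temperature and solve for the temperature where the two indices cross. All constants below are illustrative assumptions, not values from the paper.</p>

```python
# Toy model of the index-matching effect described above. The solid and
# liquid indices drift linearly with temperature at different rates, so
# their difference crosses zero at one matching temperature, where the
# mixture turns transparent. All numbers are illustrative, not measured.

N_SOLID_20C = 1.4584    # solid index at 20 °C (assumed)
N_LIQUID_20C = 1.4868   # liquid index at 20 °C (assumed)
DN_SOLID = -1e-5        # solid dn/dT per °C (assumed; solids drift slowly)
DN_LIQUID = -4e-4       # liquid dn/dT per °C (assumed; liquids drift faster)

def index_mismatch(temp_c):
    """Refractive-index difference (solid minus liquid) at temp_c."""
    n_solid = N_SOLID_20C + DN_SOLID * (temp_c - 20.0)
    n_liquid = N_LIQUID_20C + DN_LIQUID * (temp_c - 20.0)
    return n_solid - n_liquid

def matching_temperature():
    """Temperature at which the two indices coincide exactly."""
    return 20.0 + (N_LIQUID_20C - N_SOLID_20C) / (DN_SOLID - DN_LIQUID)

t_match = matching_temperature()
print(round(t_match, 1))  # ≈ 92.8 °C with these illustrative numbers
```

<p>Because scattering grows with the index mismatch, small temperature excursions around the matching point swing the mixture between nearly transparent and strongly diffusive — the “giant jump” Heshmat describes.</p>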
<p>Heshmat is joined on the paper by Ramesh Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences and head of the Camera Culture group, and Benedikt Groever, a graduate student in engineering and applied science at Harvard.</p>
<p><strong>Study in contrast</strong></p>
<p>In their experiments, the researchers found that a temperature change of 10 degrees would increase the diffusivity of their device tenfold, and a change of 42 degrees changed it a thousandfold.</p>
<p>Heshmat believes that a temperature-modulated version of his team’s filter could be used to calibrate sensors used in the study of material flows, the study of cells, and medical imaging.</p>
<p>For instance, medical-imaging systems are typically calibrated using devices called “tissue phantoms,” which duplicate the optical properties of different types of biological tissues. Tissue phantoms can be expensive, and many of them may be required to calibrate a single imaging device. Heshmat believes that a low-cost version of his team’s filter could mimic a wide range of tissues.</p>
<p>But the fundamental principle illustrated by the researchers’ prototype could have broader ramifications. The effect of heat on the refractive index of either the solid or the fluid, taken in isolation, is very subtle. But when the two are mixed together, the effect on diffusivity is dramatic.</p>
<p>The same would be true, Heshmat argues, of other types of experimental materials whose refractive indices change in response to either light or an electric field. And optical or electrical activation would broaden the range of applications for tunable optical devices.</p>
<p>“If you have photorefractive changes in a solid material in a solid phase, the amount of change you can get between the solid and itself is very small,” he explains. “You need a very strong field to see that change in your refractive index. But if you have two types of media, the refractive index of the solid is going to change much faster compared to the liquid. So you get this deep contrast that can help a lot.”</p>
<p><strong>Application</strong></p>
<p>In holographic displays, cells filled with a mixture of electrically responsive solid materials and a fluid could change their diffusivity when charged by an electrode, in much the way that cells filled with ionized gas change their color in plasma TVs. Adjacent cells could thus steer light in slightly different directions, mimicking the reflection of light off of a contoured surface and producing the illusion of three-dimensionality.</p>
<p>Liquid-solid mixtures could also be used to produce tunable diffraction gratings, which are used in some sensing applications to filter out light or other electromagnetic radiation of particular frequencies, or in tunable light diffusers of the sort photographers use to make the strongly directional light of a flash feel more like ambient light.</p>
<p>The computer model that the researchers describe in their paper predicts the diffusivity of a liquid-solid mixture on the basis of the physical characteristics of the solid particles — how jagged or spiky they are — and on their concentration in the liquid. That model, Heshmat says, could be used to develop solid particles tailored to specific applications.</p>
<p>The appeal of this method may even reach beyond science and engineering. “I understand the obvious potential scientific applications listed in the abstract,” says Aydogan Ozcan, a professor of electrical engineering at the University of California at Los Angeles. “But I think this kind of approach could potentially be useful for designing new artwork — for interior design, for example. You can design furniture parts or artwork that will change the light-matter interaction and visual perception on demand or through a programmed interface, which would bring dynamic light effects indoors. Similarly, it can be used in architectural designs to replace curtains by structured interfaces.”</p>
A mild temperature change radically alters the degree to which a solid-fluid mixture bends light.Courtesy of the researchersResearch, Light, Optics, Physics, Media Lab, School of Architecture and PlanningMIT at the Venice Biennalehttp://news.mit.edu/2016/mit-venice-biennale-0527
On a global stage, MIT helps steer architecture toward solving worldwide challenges.Fri, 27 May 2016 00:00:00 -0400Thomas Gearty | School of Architecture and Planninghttp://news.mit.edu/2016/mit-venice-biennale-0527<p>At the 2016 Venice Architecture Biennale, opening Saturday, architects and designers have responded to a charge to “report from the front” on major challenges and issues facing humanity around the globe.</p>
<p>In installations throughout Venice — from the historic venues of the Arsenale and Giardini on the island’s eastern tip, to repurposed palazzos and churches across the city — MIT faculty, alumni, and students are among the contributors offering varied and potent responses. Their efforts, considered alongside numerous others displayed in scores of exhibitions and pavilions, may signal a paradigm shift for architecture, participants say.</p>
<p>Considered one of the foremost global forums for architecture and the built environment, and drawing hundreds of thousands of visitors from around the world, the Architecture Biennale takes place every two years in Venice. The 2016 curator, Chilean architect and Pritzker Prize winner Alejandro Aravena, chose as his theme “reporting from the front,” focusing on architecture’s capacity to improve the human condition by addressing problems such as segregation, inequality, suburbia, sanitation, natural disasters, housing shortages, migration, crime, traffic, waste, pollution, and community participation.</p>
<p>“If the current condition is that you deal with only projects that interest other architects, then let’s [instead] try to start from projects that interest every single citizen,” said Aravena. “Once that is done, then use the specific knowledge of architecture to address those issues — go from nonspecific problems through the specific knowledge of architecture to try to make a contribution.”</p>
<p>If past versions of the Biennale have sometimes leaned toward the high-concept and avant-garde, Aravena’s issue-oriented theme gives this year’s event a feeling of purpose and application.</p>
<p>“It's fascinating to see our faculty leading on multiple fronts: thinking through architecture's relationship with building and design technologies, geopolitics, and resource depletion,” said J. Meejin Yoon, professor and head of the Department of Architecture at MIT. “The Biennale brings together architects and designers from around the world — and, in this exhibition in particular, those who represent an incredible commitment to positive change in society and environment through architecture and design.”</p>
<p>The MIT presence at the Biennale is widespread in terms of both geography and issues. The faculty, alumni, and students presenting in Venice comprise more than a dozen nationalities and countries of origin, and their Biennale contributions represent projects on five continents. Their areas of concern are just as broad.</p>
<p>Some MIT-related projects introduce technological innovations for building but then apply them to social and environmental concerns. John Ochsendorf, the Class of 1942 Professor of Architecture and professor of civil and environmental engineering, is part of two projects — in collaboration with MIT alumni Matthew DeJong and Philippe Block — that demonstrate the structural, economic, and environmental benefits of compression vaults. Assistant professor of architecture Alexander D’Hooghe and his firm, ORG, created an innovative system of modular concrete panels; in a design for an urban market for immigrants in Brussels, the structure reinforces the notion of an open society.</p>
<p>Other projects consider how rural and urban contexts shape the design process. The exhibition from Ensamble Studio, headed by professor of architecture Antón García-Abril and MIT research scientist Débora Mesa, examines the often conflicting challenges of designing for urban or natural settings. Visiting professor Clara Solà-Morales considers the constructed environment as more than just walls and explores how the landscape becomes part of architecture and vice versa.</p>
<p>Several contributions celebrate, critique, and reveal history through the built environment. In the Brazilian Pavilion, MIT alumna and urban planner Sara Zewde’s “Circuit of African Heritage” presents a plan for a series of historic sites in Rio de Janeiro to acknowledge the black experience and its contributions to the country’s culture. Gediminas Urbonas, director of the MIT Program in Art, Culture, and Technology, and MIT research affiliate Nomeda Urboniene also trace the impact of a historic structure: the Druzhba pipeline built by the Soviet Union. In the Baltic Pavilion, located in a local gymnasium, their thought-provoking installation spills down the bleachers like an oil slick.</p>
<p>The MIT contributions also include more speculative investigations. Kevin Slavin, director of the Playful Systems Group in the MIT Media Lab, has developed a modified beehive that captures “bee debris” for genetic sequencing, to create microbiological portraits of cities and neighborhoods; he has installed a working hive at the Palazzo Mora alongside video visualizations of bee-sourced data. Rania Ghosn and her practice, Design Earth, imagine what the post-oil urban and social landscape of the Pan-Gulf region might look like.</p>
<p>“We come here for inspiration, we come here to share ideas, we try to show how architecture can change the world. The Biennale is really about celebrating architectural design on every scale and every level,” said Ochsendorf. “In my case, it’s the first time I’ve ever been to a Biennale, so it’s exciting to be here, but it’s also very exciting to have a chance to showcase our research over 15 years coming to fruition and having a chance to share that with the world.”</p>
The Venice Architecture Biennale, held every two years at the sprawling Arsenale (above) and Giardini grounds on the city’s eastern tip, opens this weekend. This year’s focus on architecture’s ability to address global challenges — including numerous installations and projects from MIT faculty, students, and alumni — may indicate a paradigm shift for architecture, participants say.Photo: Thomas GeartySchool of Architecture and Planning, Italy, Staff, Faculty, Alumni/ae, Global, Arts, Design, Arts, Culture and Technology, Media Lab, Technology and society, ArchitectureGrant funds innovation in teacher educationhttp://news.mit.edu/2016/grant-funds-innovation-teacher-education-0503
New program provides funding for three projects working to improve STEM education.Tue, 03 May 2016 17:30:01 -0400Sarah Jane Vaughan | Office of Digital Learninghttp://news.mit.edu/2016/grant-funds-innovation-teacher-education-0503<p>The Teaching and Learning Innovation Grants (TLIG) program provides seed funding for MIT community members to bring new ideas for STEM teaching and teacher education to life. The program, funded by the Woodrow Wilson National Fellowship Foundation and administered by the MIT <a href="http://tsl.mit.edu" target="_blank">Teaching Systems Lab</a> (TSL), supports projects with the potential to make a significant impact on teacher education. The goals of the research program are to transform STEM teacher education nationally and globally and to provide new insights and thinking for the <a href="http://woodrowacademy.org/" target="_blank">Woodrow Wilson Academy of Teaching and Learning</a>, a new, independent, competency-based teacher education program.</p>
<p>In this first year, the TLIG program allocated $250,000 to support three ambitious proposals:</p>
<ul>
<li>"Just The Facts: Synthetic biology for co-evolution of master teachers and a BioBuilder curriculum" (Natalie Kuldell), a proposal to design and disseminate a unique teacher-informed evolutionary biology curriculum, using real-world synthetic biology experiments;</li>
<li>"Informal Science Education For Learners, Parents, and Educators" (Laura Schulz), a research program to introduce children and their parents to scientific inquiry and leverage early science learning in an informal, media-based format; and</li>
<li>"Interest-Based Pathways into Coding: Developing Strategies and Materials to Help Teachers Engage a Broader Range of Students in Computational Thinking" (Mitchel Resnick), a program to develop new learning materials and workshops to help teachers support interest-based approaches to coding, providing students with opportunities to learn computational concepts and skills by working on projects related to their personal interests.</li>
</ul>
<p>In this inaugural year, the TLIG program received 16 applications from schools and departments across MIT. “The quality of the proposals was exceptional, and it’s exciting to see the incredible research potential in PK-12 learning across MIT,” said Justin Reich, executive director of TSL. “From building cutting-edge STEM curriculum to finding new applications for technology in teaching and learning, these projects continue a long tradition at MIT of innovation in PK-12 education.”</p>
<p>A committee of MIT faculty and staff, along with representatives from the new Woodrow Wilson Academy of Teaching and Learning, selected the three proposals to fund. Among the reviewers was John Gabrieli, the Grover M. Hermann Professor in the Department of Brain and Cognitive Sciences and founding director of the <a href="http://mitili.mit.edu/" target="_blank">MIT Integrated Learning Initiative</a> (MITili). “One of the central goals of the MIT Integrated Learning Initiative is to build strong connections between foundational research in science and applications to practice,” said Gabrieli, “and the recipients of the inaugural Teaching and Learning Innovation Grants offer powerful exemplars of this approach.”</p>
<p>The MIT TSL will provide funding support for these projects and an opportunity to build community across these diverse PK-12 initiatives. The Woodrow Wilson Academy will provide access to schools through district partners so that new innovations can be tested and co-designed with students and practicing educators. “Cutting-edge research is going to be central to the development of our new teacher preparation program,” said Deborah Hirsch, executive director of the Woodrow Wilson Academy of Teaching and Learning, “and we look forward to building connections between MIT researchers and our partner&nbsp;educators to advance research into effective teaching and learning.”&nbsp;</p>
<p>These projects represent some of MIT’s extraordinary ideas for innovation in teacher education. TSL will support this research into the horizons of STEM teaching and use the results to inform the Woodrow Wilson Academy’s efforts to prepare teachers for the classroom. “I believe these projects will help transform teacher education, and we are excited to be working with the Woodrow Wilson Academy to rethink and redesign how we prepare teachers for the classroom,” said Reich. “This new grant program showcases the incredible untapped potential at MIT, and the passion that our faculty and research staff have for improving teaching and learning in schools.”</p>
A committee of MIT faculty and staff, along with representatives from the new Woodrow Wilson Academy of Teaching and Learning, selected three proposals to fund.STEM education, K-12 education, online learning, Office of Digital Learning, Grants, Funding, Learning, education, Education, teaching, academics, Media Lab, Brain and cognitive sciences, Biological engineering, School of Architecture and Planning, School of Science, School of Engineering, Faculty, ResearchCan technology help teach literacy in poor communities?http://news.mit.edu/2016/literacy-apps-poor-communities-0426
Project to provide children with tablets loaded with literacy apps reports encouraging results in Africa, U.S.
Tue, 26 Apr 2016 11:30:00 -0400Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/literacy-apps-poor-communities-0426<p>For the past four years, researchers at MIT, Tufts University, and Georgia State University have been conducting a study to determine whether tablet computers loaded with literacy applications could improve the reading preparedness of young children living in economically disadvantaged communities.</p>
<p>At the Association for Computing Machinery’s Learning at Scale conference this week, they presented the results of the first three deployments of their system. In all three cases, study participants’ performance on standardized tests of reading preparedness indicated that the tablet use was effective.</p>
<p>The trials examined a range of educational environments. One was set in a pair of rural Ethiopian villages with no schools and no written culture; one was set in a suburban South African school with a student-to-teacher ratio of 60 to 1; and one was set in a rural U.S. school with predominantly low-income students.</p>
<p>In the African deployments, students who used the tablets fared much better on the tests than those who didn’t, and in the U.S. deployment, the students’ scores improved dramatically after four months of using the tablets. "The whole premise of our project is to harness the best science and innovation to bring education to the world’s most underresourced children,"&nbsp;says Cynthia Breazeal, an associate professor of media arts and sciences at MIT and first author on the new paper. "There’s a lot of innovation happening if you happen to be reasonably affluent — meaning you have regular access to an Internet-connected computer or mobile device, so you can get online and access Khan Academy. There’s a lot of innovation happening if you’re around eight years old and can type and move a mouse around. But there’s relatively little innovation happening with the early-childhood-learning age group, and there’s a ton of science saying that that’s where you get tremendous bang for your buck. You’ve got to intervene as early as possible."</p>
<p>Breazeal is joined on the paper by Maryanne Wolf and Stephanie Gottwald, who are, respectively, the director and assistant director of the Center for Reading and Language Research at Tufts; Tinsley Galyean, a research affiliate at the MIT Media Lab and executive director of Curious Learning, a nonprofit organization the researchers created to develop and deploy their system; and Robin Morris, a professor of psychology at Georgia State University.</p>
<p><strong>Self-starting</strong></p>
<p>The concentration on early literacy reflects Wolf’s theory, popularized in her book "Proust and the Squid,"&nbsp;that the capacity to read, unlike the capacity to process spoken language, is not hard-coded into our genes. Consequently, early training is essential to establishing the neurological machinery on which the very capacity for literacy depends.</p>
<p>The researchers’ system consists of an inexpensive tablet computer using Google’s Android operating system. Wolf and Gottwald combed through the literacy and early-childhood apps available for Android devices to identify several hundred that met their quality criteria and addressed a broad enough range of skills to lay a foundation for early reading education. The researchers also developed their own interface for the tablets, which grants users access only to approved educational apps. Across the three deployments, the tablets were issued to children ranging in age from 4 to 11.</p>
<p>"When we do these deployments, we purposely don’t tell the kids how to use the tablets or instruct them about any of the content," Breazeal says. "Our argument is, if you’re going to be able to scale this to reach 100 million kids, you can’t bring people in to coach kids what to do. You just make the tablets available, and they need to figure everything out from then on out. And what we find is, the kids do it. When we first did Ethiopia, we had all these protocols and subprotocols. What if it’s a week and they haven’t turned them on? What if it’s three weeks and they haven’t turned them on? Within minutes, the kids turn them on. By the end of the day, they’ve literally explored every app on the tablet."</p>
<p><strong>Results</strong></p>
<p>The Ethiopian trial, which the researchers conducted in collaboration with the <a href="http://one.laptop.org/">One Laptop per Child</a> program, involved children aged 4 to 11 who had no prior exposure to spoken English or any written language. After a year using the tablets, children were tested on their understanding of roughly 20 spoken English words, taken at random from apps loaded on the tablets. More than half of the students knew at least half the words, and all the students knew at least four.</p>
<p>When presented with strings of Roman letters in a random order, 90 percent could identify at least 10 of them, and all the children could supply the sounds corresponding to at least two of them. Perhaps most important, 35 percent of the children could recognize at least one English word by sight. These figures roughly accord with those of children entering kindergarten in the U.S.</p>
<p>In the South African trial, rising second graders who had been issued tablets the year before were able to sound out four times as many words as those who hadn’t, and in the U.S. trial, which involved only 4-year-olds and lasted only four months,&nbsp;half-day preschool students were able to supply the sounds corresponding to nearly six times as many letters as they had been before the trial.</p>
<p>Since the trials reported in the new paper, Curious Learning has launched new trials in Uganda, Bangladesh, India, and the U.S. In all, 2,000 children have had the opportunity to use the tablets.</p>
<p>Currently, the team is concentrating on analyzing data collected from the trials. Which apps do the children spend most time with? Which apps’ use correlates best with literacy outcomes? Curious Learning is also looking for partners to help launch larger pilot programs, with 5,000 to 10,000 children.</p>
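<p>The kind of question the team describes — whether time spent in a given app tracks with literacy gains — amounts to a correlation analysis. The sketch below is purely illustrative: the data values are invented, and the article does not specify the team's actual methods.</p>

```python
# Hypothetical sketch of correlating app usage with literacy outcomes.
# The numbers below are invented for illustration; they are not data
# from the Curious Learning trials.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# minutes per week in a letter-sounds app vs. gain in letter sounds known
minutes = [30, 45, 60, 90, 120]
gains = [2, 3, 4, 6, 7]
print(round(pearson(minutes, gains), 3))
```

<p>A strong positive coefficient for one app and a weak one for another would point to which apps merit a place in larger pilot deployments.</p>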
<p>"There’s a core scientific question, which is understanding what the nature of this child-driven, curiosity-driven learning looks like,"&nbsp;Breazeal says. "We need to understand how they learn, which is a fundamentally social process, where they explore the tablet together, they discover things through that exploration, and then they talk-talk-talk-talk, and they share those ideas. So it’s a profoundly social, peer-to-peer-based learning process. We have to create a technology and an experience that supports that process."</p>
"The whole premise of our project is to harness the best science and innovation to bring education to the world’s most underresourced children," Cynthia Breazeal says. Pictured are students in Ethiopia. Courtesy of CuriousLearning.orgResearch, School of Architecture and Planning, Africa, Developing countries, K-12 education, Learning, Mobile devices, Poverty, education, Education, teaching, academicsControlling RNA in living cellshttp://news.mit.edu/2016/controlling-rna-living-cells-0425
Modular, programmable proteins can be used to track or manipulate gene expression.Mon, 25 Apr 2016 15:00:00 -0400Anne Trafton | MIT News Officehttp://news.mit.edu/2016/controlling-rna-living-cells-0425<p>MIT researchers have devised a new set of proteins that can be customized to bind arbitrary RNA sequences, making it possible to image RNA inside living cells, monitor what a particular RNA strand is doing, and even control RNA activity.</p>
<p>The new strategy is based on human RNA-binding proteins that normally help guide embryonic development. The research team adapted the proteins so that they can be easily targeted to desired RNA sequences.</p>
<p>“You could use these proteins to do measurements of RNA generation, for example, or of the translation of RNA to proteins,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab. “This could have broad utility throughout biology and bioengineering.”</p>
<p>Unlike previous efforts to control RNA with proteins, the new MIT system consists of modular components, which the researchers believe will make it easier to perform a wide variety of RNA manipulations.</p>
<p>“Modularity is one of the core design principles of engineering. If you can make things out of repeatable parts, you don’t have to agonize over the design. You simply build things out of predictable, linkable units,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research.</p>
<p>Boyden is the senior author of a paper describing the new system in the <em>Proceedings of the National Academy of Sciences</em>. The paper’s lead authors are postdoc Katarzyna Adamala and grad student Daniel Martin-Alarcon.</p>
<p><strong>Modular code</strong></p>
<p>Living cells contain many types of RNA that perform different roles. One of the best known varieties is messenger RNA (mRNA), which is copied from DNA and carries protein-coding information to cell structures called ribosomes, where mRNA directs protein assembly in a process called translation. Monitoring mRNA could tell scientists a great deal about which genes are being expressed in a cell, and tweaking the translation of mRNA would allow them to alter gene expression without having to modify the cell’s DNA.</p>
<p>To achieve this, the MIT team set out to adapt naturally occurring proteins called Pumilio homology domains. These RNA-binding proteins include sequences of amino acids that bind to one of the ribonucleotide bases or “letters” that make up RNA sequences — adenine (A), cytosine (C), uracil (U), and guanine (G).</p>
<p>In recent years, scientists have been working on developing these proteins for experimental use, but until now it was more of a trial-and-error process to create proteins that would bind to a particular RNA sequence.</p>
<p>“It was not a truly modular code,” Boyden says, referring to the protein’s amino acid sequences. “You still had to tweak it on a case-by-case basis. Whereas now, given an RNA sequence, you can specify on paper a protein to target it.”</p>
<p>To create their code, the researchers tested out many amino acid combinations and found a particular set of amino acids that will bind each of the four bases at any position in the target sequence. Using this system, which they call Pumby (for Pumilio-based assembly), the researchers effectively targeted RNA sequences varying in length from six to 18 bases.</p>
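<p>The modular logic Boyden describes — specifying a protein on paper from an RNA sequence — can be sketched as a simple lookup-and-concatenate scheme. The module names below are placeholders, not the actual amino acid sequences reported in the paper; only the base-per-module mapping and the 6-to-18-base working range come from the article.</p>

```python
# Sketch of the "Pumby" modular idea: each RNA base maps to one repeat
# module, so a binding protein is specified by chaining modules in the
# order of the target sequence. Module names are hypothetical stand-ins
# for the real amino acid repeats.
MODULES = {
    "A": "module_A",
    "C": "module_C",
    "U": "module_U",
    "G": "module_G",
}

def design_pumby(target_rna: str) -> list:
    """Return the ordered list of repeat modules for a target RNA
    sequence (6 to 18 bases, per the range reported in the article)."""
    target = target_rna.upper().replace("T", "U")  # accept DNA-style input
    if not 6 <= len(target) <= 18:
        raise ValueError("reported working range is 6 to 18 bases")
    if any(base not in MODULES for base in target):
        raise ValueError("sequence must contain only A, C, G, U")
    return [MODULES[base] for base in target]

print(design_pumby("AUGGCA"))
```

<p>Because each position is handled independently, designing a binder for a new target needs no case-by-case tweaking — which is the sense in which the code is modular.</p>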
<p>“I think it’s a breakthrough technology that they’ve developed here,” says Robert Singer, a professor of anatomy and structural biology, cell biology, and neuroscience at Albert Einstein College of Medicine, who was not involved in the research. “Everything that’s been done to target RNA so far requires modifying the RNA you want to target by attaching a sequence that binds to a specific protein. With this technique you just design the protein alone, so there’s no need to modify the RNA, which means you could target any RNA in any cell.”</p>
<p><strong>RNA manipulation</strong></p>
<p>In experiments in human cells grown in a lab dish, the researchers showed that they could accurately label mRNA molecules and determine how frequently they are being translated. First, they designed two Pumby proteins that would bind to adjacent RNA sequences. Each protein is also attached to half of a green fluorescent protein (GFP) molecule. When both proteins find their target sequence, the GFP molecules join and become fluorescent — a signal to the researchers that the target RNA is present.</p>
<p>Furthermore, the team discovered that each time an mRNA molecule is translated, the GFP gets knocked off, and when translation is finished, another GFP binds to it, enhancing the overall fluorescent signal. This allows the researchers to calculate how often the mRNA is being read.</p>
<p>This system can also be used to stimulate translation of a target mRNA. To achieve that, the researchers attached a protein called a translation initiator to the Pumby protein. This allowed them to dramatically increase translation of an mRNA molecule that normally wouldn’t be read frequently.</p>
<p>“We can turn up the translation of arbitrary genes in the cell without having to modify the genome at all,” Martin-Alarcon says.</p>
<p>The researchers are now working toward using this system to label different mRNA molecules inside neurons, allowing them to test the idea that mRNAs for different genes are stored in different parts of the neuron, helping the cell to remain poised to perform functions such as storing new memories. “Until now it’s been very difficult to watch what’s happening with those mRNAs, or to control them,” Boyden says.</p>
<p>These RNA-binding proteins could also be used to build molecular assembly lines that would bring together enzymes needed to perform a series of reactions that produce a drug or another molecule of interest.</p>
An illustration of RNA. Illustration: Christine Daniloff/MITResearch, Biological engineering, Brain and cognitive sciences, Genetics, School of Engineering, School of Science, School of Architecture and Planning, McGovern Institute, Media LabCollegiate inventors awarded 2016 Lemelson-MIT Student Prizehttp://news.mit.edu/2016/collegiate-inventors-awarded-lemelson-mit-student-prize-0412
Students recognized for inventive solutions to challenges in health care, transportation, consumer devices, food, and agriculture.
Tue, 12 Apr 2016 13:00:01 -0400Stephanie Martinovich | Lemelson-MIT Programhttp://news.mit.edu/2016/collegiate-inventors-awarded-lemelson-mit-student-prize-0412<p>The <a href="http://lemelson.mit.edu" target="_blank">Lemelson-MIT Program</a> today announced the winners of the <a href="http://lemelson.mit.edu/studentprize" target="_blank">Lemelson-MIT Student Prize</a>, a nationwide search for the most inventive college students. The Lemelson-MIT Program awarded $90,000 in prizes to collegiate inventors. Each winning team of undergraduates received $10,000, and each graduate student winner received $15,000. The winners of this year’s competition were selected from a diverse and highly-competitive applicant pool of students from 77 colleges and universities across the country.&nbsp;</p>
<p>“This year’s Lemelson-MIT Student Prize winners have outstanding portfolios of inventive work,” said Michael Cima, faculty director of the Lemelson-MIT Program. “Their passion for solving problems through invention is matched by their commitment to mentoring the next generation of inventors.”</p>
<p>The Lemelson-MIT Student Prize is a national collegiate invention prize program, supported by The Lemelson Foundation, serving as a catalyst for burgeoning young inventors.<br />
<br />
“My husband Jerome always felt passionate about the potential of young collegiate inventors,” said Dorothy Lemelson, chair of The Lemelson Foundation. “The Lemelson-MIT Student Prize has evolved over the past 20 years to encourage and inspire students around the country to develop their ideas into viable products.”&nbsp;</p>
<p><strong>2016 Lemelson-MIT Student Prize Winners </strong></p>
<p>The “Cure it!” Lemelson-MIT Student Prize rewards students working on technology-based inventions that can improve health care. The winners are:</p>
<ul>
<li><strong>Catalin Voss, </strong><strong>Stanford University, $15,000 Lemelson-MIT “Cure it!” Graduate Winner.</strong> Voss developed the Autism Glass Project, an emotional learning aid for children with autism based on smart glasses like Google Glass. An individual with autism puts on the glasses and they automatically recognize emotions in other people’s faces using an artificial intelligence system. They then give intelligent social cues to the child right then and there via a heads-up display or audio.</li>
<li><strong>Jason Kang, Katherine Jin and Kevin Tyan, Columbia University, $10,000 Lemelson-MIT “Cure it!” Undergraduate Team Winner.</strong> Kang, studying in Columbia’s School of Engineering, along with Jin and Tyan, formed a startup, Kinnos Inc., to develop Highlight, an easy-to-use powdered additive that can be mixed into disinfectant solutions to make them colorized and highly visible. Their invention allows global health care workers to fully cover contaminated surfaces with disinfectant solutions, eliminating gaps in coverage and reducing evaporation rates. Highlight improves the process of infectious disease decontamination by directly addressing the problems of human error and empowering health care workers to protect themselves and the general public.</li>
</ul>
<p>The “Drive it!” Lemelson-MIT Student Prize rewards students working on technology-based inventions that can improve transportation. The winner is:</p>
<ul>
<li><strong>Dan Dorsch, Massachusetts Institute of Technology, $15,000 Lemelson-MIT “Drive it!” Graduate Winner.</strong> Dorsch invented the world’s first lightweight clutchless transmission for high-performance hybrid vehicles, which are designed to match the performance of existing supercars while achieving higher efficiency. Dorsch has partnered with a leading performance car company to refine his transmission technology for real-world applications. Dorsch believes it will be straightforward for other automotive manufacturers to adapt his technology for their vehicles, creating greater efficiency and performance in mass consumer models.</li>
</ul>
<p>The “Eat it!” Lemelson-MIT Student Prize rewards students working on technology-based inventions that can improve food and agriculture. The winners are:</p>
<ul>
<li><strong>Heather Hava, University of Colorado at Boulder, $15,000 Lemelson-MIT “Eat it!” Graduate Winner.</strong> Hava, a self-proclaimed “space gardener,” has focused her studies in bioastronautics, and specifically inventing ways to grow food in space and other extreme environments. She developed robots that can garden in space and patented a geodesic dome structure for on-Earth applications including use for disaster relief, sustainable housing and horticulture. Her invention SmartPot (SPOT), a smart growth chamber, can be teleoperated to help astronauts grow fruits and vegetables during space exploration missions. AgQ, also developed by Hava, is software that will process data from SPOT and provide feedback to the robot for proper plant care.</li>
<li><strong>Kale Rogers, Michael Farid, Braden Knight, and Luke Schlueter, Massachusetts Institute of Technology, $10,000 Lemelson-MIT “Eat it!” Undergraduate Team Winner. </strong>Mechanical engineering students Rogers, Farid, Knight and Schlueter created Spyce Kitchen, the world’s first completely automated restaurant. The invention incorporates a refrigerator, dishwasher, stovetop and chef all-in-one, allowing it to cook and serve meals using fresh ingredients without human involvement. The team believes Spyce Kitchen will revolutionize the fast food industry by operating with extremely low overhead while serving high quality, nutritious meals at fast food prices.&nbsp;</li>
</ul>
<p>The “Use it!” Lemelson-MIT Student Prize rewards students working on technology-based inventions that can improve consumer devices. The winners are:</p>
<ul>
<li><strong>Achuta Kadambi, Massachusetts Institute of Technology, $15,000 Lemelson-MIT “Use it!” Graduate Winner.</strong> Kadambi designs advanced cameras that acquire superhuman imagery — he believes that the camera should exceed rather than mimic the human eye. His inventions include ultrafast optics to film light in motion (“Nanophotography”) and an imaging system that relates nearly imperceptible rotations of light with 3-D models of the world (“Polarized 3-D”). At the intersection of electrical engineering, computer science, and optics, Kadambi’s work has applications that span medical imaging, robotic navigation, and virtual reality.</li>
<li><strong>Thomas Pryor and Navid Azodi, University of Washington, $10,000 Lemelson-MIT “Use it!” Undergraduate Winners</strong>. Pryor and Azodi created SignAloud, a pair of gloves that have the potential to revolutionize communication for people who cannot speak or hear. SignAloud gloves contain an array of sensors that measure hand position and movement, sending sensor data via Bluetooth for translation from American Sign Language to spoken words instantly. The gloves are lightweight, compact, and worn on the hands, but ergonomic enough to use as an everyday accessory, similar to contact lenses or a hearing aid.</li>
</ul>
<p>Lemelson-MIT Student Prize applicants were evaluated by screening committees with expertise in the invention categories as well as a national judging panel of industry leaders — who also select the annual <a href="http://lemelson.mit.edu/prize" target="_blank">$500,000 Lemelson-MIT Prize</a> winner. Screeners and judges assessed candidates on breadth and depth of inventiveness and creativity; potential for societal benefit and economic commercial success; community and environmental systems impact; and experience as a role model for youth.</p>
<p>Students interested in applying for the 2017 Lemelson-MIT Student Prize can find more information on the <a href="http://lemelson.mit.edu/studentprize" target="_blank">Lemelson-MIT website</a>.</p>
<p>The Lemelson-MIT Program is also <a href="http://lemelson.mit.edu/get-involved" target="_blank">seeking partners</a> with interest in sponsoring the competition, in addition to supporting the execution and scaling into new categories.</p>
Dan Dorsch of MIT is the $15,000 Lemelson-MIT “Drive it!” Graduate Winner for his invention, the first lightweight clutchless hybrid transmission for cars.Photo courtesy of the Lemelson-MIT Program.Awards, honors and fellowships, Lemelson-MIT, Invention, Media Lab, Mechanical engineering, School of Engineering, School of Architecture and PlanningExperiencing underwater worlds, virtually http://news.mit.edu/2016/experiencing-underwater-worlds-virtually-0411
&quot;Amphibian&quot; SCUBA simulator advances the field of virtual reality while exploring the relationship between diving and disability.Mon, 11 Apr 2016 16:20:01 -0400Sharon Lacey | Arts at MIThttp://news.mit.edu/2016/experiencing-underwater-worlds-virtually-0411<p>“My diving bell becomes less oppressive, and my mind takes flight like a butterfly,” Jean-Dominique Bauby wrote in his agonizingly beautiful account of living with severe disabilities, <em>The Diving Bell and the Butterfly</em>. Holding these contrasting images of physical submersion and mental liberation in your mind is a useful way to approach Dhruv Jain’s virtual reality project, <a href="http://web.media.mit.edu/~djain/project-amphibian.html" target="_blank">Amphibian</a>. Jain, a master of science candidate in the MIT Media Lab's <a href="http://living.media.mit.edu/" target="_blank">Living Mobile Group</a> who is partially deaf, created this SCUBA diving simulator to help abled people understand, in an artistic way, the liberating effects of disability, which he likens to the experience of being underwater.</p>
<p>“Underwater, our vital senses are dulled and warped. Visual range is limited and magnification is distorted. The senses of smell, taste and touch are severely muted. Hearing is distorted too, since sound travels five times faster in the water," he says. "But in these conditions, I experienced the kind of peace one can only expect to feel with the freedom of weightlessness. Having done 54 dives in the last 10 months, I can call myself a good diver now. And yet every time I dive, I find the underwater experience to be rejuvenating; it is emotional and almost spiritual.”</p>
<div class="cms-placeholder-content-video"></div>
<p>These experiences compelled him to explore the connection between diving and disability. While he finds diving to be a peaceful endeavor, he realized it is too physically and mentally challenging for many people. So, he resolved to build an “experience machine” that places people in an ocean environment without their having to leave dry land. With guidance from his faculty advisor, principal research scientist and Living Mobile Group head <a href="http://www.media.mit.edu/people/geek" target="_blank">Chris Schmandt</a>, Jain and his team constructed Amphibian. As the name suggests, the project allows users to straddle aquatic and terrestrial worlds by providing an immersive SCUBA diving experience through a water-free simulator.</p>
<p>Most existing SCUBA diving simulations in virtual reality (VR) are limited to visual and aural displays. Amphibian advances the field of VR by engaging additional sensory modalities like thermoception (sense of temperature), equilibrioception (sense of balance), and proprioception (sense of spatial orientation and movement). Users lie on their torsos on a motion platform with outstretched arms and legs placed in a suspended harness. An Oculus Rift head-mounted display and headphones allow them to see and hear the underwater environment. Various sensors are used to simulate buoyancy, drag, and temperature during the simulation. For example, Peltier modules attached to participants’ wrists through motion tracking gloves simulate temperature changes as they dive deeper into the water. An inflatable airbag placed under the torso allows the user to control ascent or descent through breathing; inhaling makes the airbag inflate and exhaling makes it deflate. The virtual body rises up and down in sync. This is done using a gas sensor attached to a snorkel that measures the amount of air inhaled or exhaled.</p>
<p>To navigate in the virtual world, the user wears gloves with embedded flex sensors and inertial measurement units (IMUs) that track each hand's movement. Leg motion is also tracked via IMUs, and users swim forward by kicking their legs up and down. All sensor data is fed into a processing unit that converts the physical motion of hands and legs into virtual movement in the Oculus app. Another novel contribution of the system is a two-way interaction between the user and the underwater virtual world. For example, users can grasp underwater objects such as rocks or crabs using their hands in a grab gesture and sense physical feedback. The sensation of picking up something is conveyed through an inflatable pad on the palm of each glove, which simulates the texture and shape of the object grabbed.</p>
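The breathing-based buoyancy control described above lends itself to a simple illustration. The Python sketch below shows one plausible mapping from snorkel airflow readings to the virtual diver's depth; the function names, gain, and update rule are hypothetical assumptions for illustration, not Amphibian's actual implementation.

```python
# Hypothetical sketch of breath-driven buoyancy, as described for Amphibian.
# Gain, time step, and depth limits are illustrative assumptions.

def breath_to_velocity(airflow, neutral=0.0, gain=0.05):
    """Map signed airflow from the snorkel gas sensor to vertical velocity.

    Positive airflow (inhaling) inflates the torso airbag and makes the
    virtual diver rise; negative airflow (exhaling) makes the diver sink.
    """
    return gain * (airflow - neutral)

def step_depth(depth, airflow, dt=0.1, max_depth=40.0):
    """Advance the virtual diver's depth by one simulation tick."""
    depth -= breath_to_velocity(airflow) * dt  # rising reduces depth
    return min(max(depth, 0.0), max_depth)     # clamp to the water column

# One simulated second of steady inhaling, starting at 10 m depth:
depth = 10.0
for _ in range(10):
    depth = step_depth(depth, airflow=+1.0)
```

In this toy loop the diver drifts slightly upward while inhaling, and the clamp keeps the depth inside the simulated water column, mirroring the airbag's inflate/deflate behavior in sync with the virtual body.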
<p>By combining available technologies in a unique way to accommodate multiple senses, the system has the potential to offer a high degree of presence in VR. (“Presence” refers to the aspect of immersion that heightens the user’s sense of “being there.”) One of Jain’s team members, <a href="http://www.media.mit.edu/~sra" target="_blank">Misha Sra</a>, reflects on how she overcame her petrifying fear of water because of Amphibian’s verisimilitude. After a traumatic experience in a pool in college, just being near water filled her with dread. She says, “I wished I could dive but to do that, I first had to get over my fear of water and learn how to swim, which seemed like an insurmountable hurdle. Amphibian was a great middle ground.”</p>
<p>Amphibian is not Jain’s first technological system to simulate sensory deprivation nor his first artwork designed to generate empathy for disabled persons. For one of his previous projects, <a href="http://web.media.mit.edu/~djain/art-blind-emporium.html" target="_blank">Blind Emporium</a>, he created a sensory deprivation room in which outside sound and light were completely blocked, forcing participants to navigate using touch and following sounds emitted from selected objects. He found that in projects that “explore absence,” people still tended to focus on what they lacked instead of what they might gain, so he decided to alter his approach with Amphibian. “People have a negative view of blindness or deafness, thinking that they would find it too difficult to navigate in the world, but my goal was to make people realize that disability is also liberating. When I turn off my hearing aid and close my eyes, I go into a deep meditative state,” Jain says. “Being underwater is so freeing, I thought through diving I could make people better understand the effect of disabilities.”</p>
<p>“Immersive Terrestrial SCUBA Diving Using Virtual Reality,” the paper explaining the technology behind Amphibian, was accepted to the “<a href="http://chi2016.acm.org/" target="_blank">ACM Conference on Human Factors in Computing Systems (CHI)</a>,” which takes place May 7-12 in San Jose, California. Jain received <a href="http://arts.mit.edu/welcome/camit/what-we-do/camit-grants-program/" target="_blank">grant funding from the MIT Council for the Arts</a> for the project, and will exhibit the work in the MIT Museum Studio Compton Gallery.</p>
The Amphibian SCUBA diving simulator, a research project from the MIT Media Lab, lets users experience the underwater world through a high presence virtual reality system. The system includes a motion platform, Oculus Rift head-mounted display, snorkel with sensors, leg-motion sensors, and gloves that enable motion detection, temperature simulation, and physical feedback of objects.Photo: James DayArts, Media Lab, Technology and society, Assistive technology, Council for the Arts at MIT, School of Architecture and Planning, Students, graduate, Graduate, postdoctoralUsing new models and big data to better understand financial riskhttp://news.mit.edu/2016/using-new-models-big-data-to-better-understand-financial-risk-0411
Bringing together engineers, data theorists, mathematicians, economists, biologists, and policy experts, IDSS is looking at financial risk through a multidisciplinary lens.Mon, 11 Apr 2016 15:25:01 -0400Jennifer Donovan | Institute for Data, Systems, and Societyhttp://news.mit.edu/2016/using-new-models-big-data-to-better-understand-financial-risk-0411<p>The financial crisis of 2008, which saw the failure of major investment banks Bear Stearns and Lehman Brothers, and the subsequent government bailout of insurance giant American International Group (AIG), had a ripple effect around the globe. How did America’s housing collapse lead to the downfall of these institutions? And why did that, in turn, translate into a severe economic downturn?</p>
<p>Not having a clear picture of systemic risk in the financial system, an issue encapsulated in the “too big to fail” interventions, is widely cited as the reason for this financial contagion — the chain reaction of failures between connected parties. However, in the nearly eight years since the crisis, with additional upheavals from the sovereign debt crisis and flash crashes that have followed, researchers and regulators are still teasing out the nuances of risk in a globally connected market, while exploring new ways to manage a system that is evolving at an unprecedented pace.</p>
<p>Researchers at MIT's Institute for Data, Systems, and Society (IDSS) have been a big part of these efforts, looking deeply at the problem of systemic risk in finance through a multidisciplinary lens. By bringing together engineers, information theorists, mathematicians, economists, biologists, and policy experts, IDSS has the opportunity to reframe the way the system is viewed. The goal is to generate new questions, better models, and, ultimately, a more robust and resilient financial system.</p>
<p><strong>The financial ecosystem</strong></p>
<p>A central theme across IDSS research is the idea of using a systems approach to analysis. In the case of the financial system this means taking a wide view, accounting for linkages and their effects across the entire system, as opposed to focusing on individual banks or market subsections.</p>
<p>“When an ecologist is asked to help manage a particular ecology they think not just about the particular plants or animals they’re raising, they think about the bacteria in the soil and the sources of food in the system. I think that’s what we’re missing now when we think about financial regulation — we don’t think about the system as a system,” Andrew Lo, the Charles E. and Susan T. Harris Professor at the MIT Sloan School of Management, said in a recent interview for the <em>Journal of Financial Planning</em>.</p>
<p>Lo, one of the faculty leads of the finance efforts at IDSS, has been advocating this approach to financial risk analysis — termed the adaptive markets hypothesis (AMH) — for more than a decade. He and his colleagues use principles from evolutionary biology to draw parallels between the observed dynamics of financial systems and those of ecosystems. The idea is to move regulators and investors away from viewing markets as physical systems — rational, immutable, efficient, and mechanistic — towards a more complex model: a “highly adaptive organic system” that is directly impacted by human decisions and behavior. Regulators and the regulated respond to each other, and their strategies co-evolve over time.</p>
<p>This approach has particular relevance in light of the 2008 crisis, and continues to gain legitimacy as more advances are made in understanding drivers of individual decision making. Work by IDSS faculty member Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, for instance, also uses biological concepts to inform his research about financial behaviors. In work yielding a non-traditional set of risk measures, Pentland and his colleagues found that “financial outcomes for individuals are intricately linked with their spatio-temporal traits,” meaning that the frequency and location of a person’s spending has strong predictive value about their propensity to overspend or miss payments. This is analogous to the interconnections between animals’ foraging behavior and their life outcomes, and has powerful implications for making better financial decisions, on both the individual and institutional levels.</p>
<p>In an opinion piece for the <em>Proceedings of the National Academy of Sciences</em>, Lo and his collaborator Simon Levin of Princeton University “propose that the financial system has crossed a threshold of complexity where the system is evolving faster than regulators and regulations can keep pace,” necessitating a new, interdisciplinary paradigm for modeling and predicting system-wide risk.</p>
<p><strong>New ways to measure risk</strong></p>
<p>An instrumental feature of the ecosystem model is its capacity to detail complexity — both of the system’s components and their interactions. The network modeling of systems does the same, but from an engineering perspective.</p>
<p>“The issue of how individual level shocks can propagate, amplify, and create systemic risk is clearly a systems question,” says Asu Ozdaglar, director of the Laboratory for Information and Decision Systems (LIDS) and a faculty lead of the IDSS finance efforts. “Decades of research at LIDS and MIT School of Engineering, which has studied systems approaches and how these create stabilities or instabilities under different circumstances, is highly relevant to understanding systemic risk.”</p>
<p>In some of her most recent research, Ozdaglar, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering; Daron Acemoglu, the Elizabeth and James Killian Professor of Economics; and their colleague Alireza Tahbaz-Salehi of Columbia University, explore the relationship between network architectures and systemic risk. Their research demonstrates that network architecture as a whole, rather than an individual component’s number and quality of connections, can be a better indicator of when shocks to a system might propagate. By modeling the effect of small shocks (a few defaulted loans, say) compared with large shocks (such as multi-bank failures) on different types of networks, Ozdaglar, Acemoglu, and Tahbaz-Salehi show key features of financial contagion. Their results indicate that densely connected networks are well-equipped to deal with small shocks — diversification helps to absorb them — whereas sparse, or less-connected, networks are less able to do so. Interestingly, however, there is a “phase transition,” and when shock size crosses a certain threshold it is the dense networks that do poorly — the interconnections facilitate contagion — while the more isolated connections in sparse networks stop failures from spreading. Ozdaglar writes in an article for MIT’s <em>EECS Connector</em>, “Financial interconnections create stability in response to small shocks but become powerful dominoes when shocks are large.”</p>
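This phase transition can be reproduced in a toy threshold-contagion model. In the Python sketch below, a bank defaults when its losses exceed a capital buffer and passes the excess equally to surviving neighbors; the network sizes, buffers, and shock magnitudes are illustrative assumptions, not the researchers' actual calibration.

```python
# Toy threshold-contagion model of the dense-vs-sparse phase transition.
# All parameter values below are illustrative assumptions.

def cascade(neighbors, capital, shock, source=0):
    """Return the set of defaulted banks after `shock` hits bank `source`.

    A bank defaults when its accumulated losses exceed its capital buffer;
    it then passes the excess loss equally to its not-yet-defaulted neighbors.
    """
    n = len(neighbors)
    losses = [0.0] * n
    losses[source] = shock
    defaulted, frontier = set(), [source]
    while frontier:
        bank = frontier.pop()
        if bank in defaulted or losses[bank] <= capital:
            continue
        defaulted.add(bank)
        alive = [b for b in neighbors[bank] if b not in defaulted]
        if alive:
            share = (losses[bank] - capital) / len(alive)
            for b in alive:
                losses[b] += share
                frontier.append(b)
    return defaulted

n, capital = 10, 1.0
dense = [[j for j in range(n) if j != i] for i in range(n)]    # complete graph
sparse = [[i + 1 if i % 2 == 0 else i - 1] for i in range(n)]  # isolated pairs

small, large = 3.5, 30.0
# Small shock: the dense network diversifies it away (1 default), while the
# sparse pair cannot absorb it (2 defaults). Large shock: the dense network's
# interconnections become dominoes (all 10 default), while the sparse
# network's isolation contains the failure to one pair.
```

Running the cascade with the two shock sizes reproduces the qualitative switch: below the threshold, density absorbs; above it, density transmits.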
<p><strong>Using data to see the big picture</strong></p>
<p>Equally important to understanding systemic risk are the data underpinning the models. In order to fully appreciate the forces driving market behavior it is essential to have data that are coherent, coordinated across sectors, and accessible. This was made clear in the wake of the 2008 crisis: At that time, even systemically important institutions, such as Lehman or AIG, were not required to share critical risk data with regulatory agencies, making it impossible to detect early signs of trouble or to implement rapid resolution plans. The Dodd-Frank Wall Street Reform and Consumer Protection Act, passed in 2010, changed this landscape by mandating central reporting of large swaths of data. However, in today’s financial markets this is just one new source of information for regulators to manage. There has also been exponential growth of data sets from other areas, such as market intelligence, social media platforms, and Internet search tools.</p>
<p>With all of this newly available data, much of it highly granular, come the challenges of managing and analyzing it: navigating its sheer size, ensuring its privacy and security for all stakeholders, and being able to derive models from it to inform policy and decision making. One key way IDSS researchers are addressing these challenges is in collaboration with the Consortium for Systemic Risk Analytics (CSRA). CSRA was founded in 2010 by a group of researchers from finance and academia — including Lo — who saw how badly the financial system was affected by the incomplete indicators of systemic risk available in 2008. IDSS, along with the Laboratory for Financial Engineering, the Center for Finance and Policy, and CSRA are collaborating on developing tools, such as open-source software and a public-access systemic risk dashboard, to deepen understanding of systemic risk and to develop new risk analytics that can serve as early warning systems.</p>
<p>A unique challenge in managing financial data is privacy. Unlike many other industries, whose trade knowledge and ideas are patentable and therefore protected, the financial industry’s intellectual property is largely unpatentable, consisting of business processes that are trade secrets and therefore proprietary. This, combined with issues of consumer data privacy, can create a significant obstacle to accessible data. The tension between protecting trade secrets and providing regulators with systemic risk transparency is another topic addressed by IDSS researchers. In separate projects, Lo and Pentland have each worked on cryptographic computational methods called “secure multiparty computation tools,” which allow aggregate risk exposures to be determined without compromising the privacy of any individual institution; regulators see only encrypted information.</p>
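The idea behind such tools can be illustrated with the simplest form of secure aggregation, additive secret sharing: the total is recoverable while each individual input stays hidden. The Python sketch below is a toy model of the concept only; the actual MIT projects use far more sophisticated protocols, and the names and numbers here are hypothetical.

```python
# Toy additive secret sharing: a regulator learns the total risk exposure
# without ever seeing any single bank's number. Illustrative only.
import random

MOD = 2 ** 61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split `secret` into n random shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def aggregate_exposures(exposures):
    """Each bank splits its exposure into shares; only sums of shares
    are ever combined, so individual values are never revealed."""
    n = len(exposures)
    all_shares = [share(e, n) for e in exposures]              # each bank splits
    slot_sums = [sum(col) % MOD for col in zip(*all_shares)]   # exchanged, summed
    return sum(slot_sums) % MOD                                # regulator sees this

exposures = [120, 45, 310]              # hypothetical per-bank risk exposures
total = aggregate_exposures(exposures)  # 475; the inputs stay private
```

Each individual share is uniformly random, so any single number a party observes carries no information about a bank's exposure; only the final sum is meaningful.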
<p>“Big data and machine learning have completely transformed several industries,” says Lo. “I think the same thing is happening to the financial industry. We’re now seeing interconnections among different parts of the system that have never really been visible before. Thanks to the combination of large amounts of data and our ability to analyze that data to develop new narratives, we can now manage risk much more effectively and also identify new sources of value for investors and other financial market participants. It’s launched a whole new golden age of financial innovation and discovery.”</p>
<p><strong>The multidisciplinary approach</strong></p>
<p>IDSS, in drawing talent from many disciplines, allows the financial system to be viewed from multiple vantage points. “Different approaches bring different perspectives, which are always useful,” says Ozdaglar. For instance, “the approach to systemic risk developed originally in LIDS is not only strongly interdisciplinary, but takes a systems approach, which is well catered to the problems at hand.”</p>
<p>What makes the multidisciplinary work at IDSS stand out, though, from the many important and highly collaborative research projects happening at MIT, is its scope. “The way IDSS is organized is around big challenges,” says Lo. “And it’s that scope that makes the effort different from anything that’s ever been done before. We’re focusing on some of the most difficult problems facing society. Systemic risk is not just in the financial system, but it also affects the environment, through climate change, for example. Very large systems are often systems that everyone takes for granted and, therefore, nobody feels responsible for understanding or maintaining them. By focusing squarely on these systemic issues, we can make much more progress than before and have lasting impact, not just on our academic endeavors, but on society itself.”</p>
<p><em>This article is part of a series highlighting major areas of research and innovation at MIT’s new Institute for Data, Systems, and Society.</em></p>
Research, Finance, Engineering Systems, IDSS, Laboratory for Information and Decision Systems (LIDS), Economics, Big data, Risk analysis, Networks, Analytics, Data, Sloan School of Management, School of Engineering, Media Lab, School of Architecture and Planning, Electrical Engineering & Computer Science (eecs), SHASSUsing data from social networks to understand and improve systemshttp://news.mit.edu/2016/idss-researchers-use-data-social-networks-understand-and-improve-variety-systems-0407
Researchers in IDSS are learning how ideas evolve over networks, quantifying the influence of individuals in networks, and making better predictions.Thu, 07 Apr 2016 17:30:01 -0400Stefanie Koperniak | Institute for Data, Systems, and Societyhttp://news.mit.edu/2016/idss-researchers-use-data-social-networks-understand-and-improve-variety-systems-0407<p>In the course of our day-to-day lives, we produce vast amounts of data. Whether gathered through online communications platforms, tracking devices, or other sources, these data reveal information about our behavior, decisions, and preferences. Researchers can ultimately use the data to improve systems across a variety of domains. In the process, there are great challenges and opportunities in the work of understanding the flow of ideas through groups, determining which incentives are effective, measuring network dynamics, and managing the inherent issues of privacy.</p>
<p>MIT's Institute for Data, Systems, and Society (IDSS), which aims to advance research at the intersection of engineering and social science, blends multi-disciplinary expertise in systems theory, economics, political science, algorithmic and computational game theory, and network science. Research merging social science with data processing and analysis examines interactions and dynamics over large networks of interconnected individuals — aiming to understand how ideas evolve over networks, quantify the influence of individuals in the networks, and make better predictions.</p>
<p><strong>Understanding and improving the flow of ideas</strong></p>
<p>At the heart of efforts to unravel some of the complexities and implications of social networks is “connection science,” which brings together application and theory. “Connection science is an attempt to actually connect between data, real-world situations, and theory,” says Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences at MIT and director of the Human Dynamics Laboratory.</p>
<p>“Out of this, comes the notion of the ‘living lab’; rather than having something happen and we record data and only then try to fit theories to it, we’re looking at something that is ongoing, living,” says Pentland. “We can interact with it to understand it better.”</p>
<p>One particular initiative of Pentland and his team looks at how ideas flow in organizations and communities. Pentland and colleagues have been able to identify certain communication patterns that indicate effective collaborations — providing insights into the seemingly ineffable “chemistry” of high-performing groups, companies, and communities. They designed electronic badges and software for mobile phones that reveal characteristics about participants’ interactions with each other. Although the badges and phones don’t measure the content of conversations, they do measure the communications in terms of patterns — such as who is talking to whom and how much people are speaking. After the researchers analyze the data, they can intervene with feedback and incentives — and then determine whether this leads to better ideas and better decisions.</p>
<p>In addition, the researchers are now applying these approaches toward improving education — both in-person and distance learning — determining how to create the most effective interactions. For more information see <a href="http://connection.mit.edu">connection.mit.edu</a>.</p>
<p><strong>Using social data to make predictions and decisions</strong></p>
<p>Devavrat Shah, professor of electrical engineering and computer science, develops statistical inference algorithms guided by behavioral models from social science to extract meaningful information from social data.</p>
<p>“Social data is the data that we all generate,” says Shah, “as a byproduct of things we do.”</p>
<p>Social data can be generated through purchases, reviews, mobile phone traces, censuses, tweets, posts, and interactions on social marketplaces. These data contain a wealth of information that can be used for better social living: social recommendations, informed policy making, efficient business operations, and uplifting societies by, for example, mobilizing untapped labor forces through crowdsourced platforms.</p>
<p>“We have a unique opportunity where as an engineer and social scientist, we can make a huge impact in shaping the future of our society,” Shah says.</p>
<p>In order to realize this opportunity, we need to address the challenge of how to process social data at scale to extract meaningful, accurate information. Shah has been using mathematical models coming from the social sciences to develop computationally efficient inference algorithms. For example, to predict trends in Twitter accurately, he and colleagues utilized non-parametric statistical methods along with a classical model for information diffusion in social networks. The resulting algorithm predicted, with 95 percent accuracy, which topics would be trending an average of an hour and a half in advance, and, at times, four or five hours in advance. A similar non-parametric approach, married with a different behavioral model, leads to an efficient prediction algorithm for the price of Bitcoin; this resulted in a profitable trading strategy. Using a statistical model — suggested by practitioners Dawid and Skene in 1979 — Shah and colleagues developed algorithms based on graphical models to design an economically efficient, low-cost crowdsourcing system. The resulting algorithm is utilized in peer-grading platforms for online education and various citizen science projects.</p>
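The Dawid-Skene model admits a compact illustration. The EM-style Python sketch below is a simplified reading of the 1979 idea for binary crowdsourcing labels (each worker gets a single accuracy parameter rather than a full confusion matrix); the data and parameter choices are illustrative, not Shah's actual algorithm.

```python
# Simplified Dawid-Skene-style EM for binary crowdsourced labels.
# Alternates between estimating worker accuracies and task labels.

def dawid_skene(votes, n_iter=20):
    """votes: dict task -> dict worker -> label in {0, 1}.
    Returns (posterior P(label=1) per task, estimated accuracy per worker)."""
    workers = {w for v in votes.values() for w in v}
    post = {t: sum(v.values()) / len(v) for t, v in votes.items()}  # init: vote share
    acc = {w: 0.7 for w in workers}
    for _ in range(n_iter):
        # M-step: a worker's accuracy is how often they agree with current beliefs
        for w in workers:
            num = den = 0.0
            for t, v in votes.items():
                if w in v:
                    num += post[t] if v[w] == 1 else 1 - post[t]
                    den += 1
            acc[w] = num / den
        # E-step: re-weight each task's label by the workers' accuracies
        for t, v in votes.items():
            like1 = like0 = 1.0
            for w, label in v.items():
                like1 *= acc[w] if label == 1 else 1 - acc[w]
                like0 *= acc[w] if label == 0 else 1 - acc[w]
            post[t] = like1 / (like1 + like0)
    return post, acc

# Four tasks; workers 'a' and 'b' are reliable, 'c' disagrees half the time:
votes = {0: {'a': 1, 'b': 1, 'c': 0}, 1: {'a': 0, 'b': 0, 'c': 1},
         2: {'a': 1, 'b': 1, 'c': 1}, 3: {'a': 0, 'b': 0, 'c': 0}}
posterior, accuracy = dawid_skene(votes)
```

Here the algorithm both recovers the consensus labels and down-weights the unreliable worker, which is what makes the approach economical for peer grading and citizen science at scale.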
<p>“Understanding human choice is foundational,” Shah says. “It is at the core of our ability to predict the consumer demand, the foundational concept of macroeconomics. In a democratic society, it determines the way we govern. And in modern times, it is what determines how we receive online recommendations and advertisements.”</p>
<p>Shah and colleagues have developed computationally efficient statistical methods for learning the “discrete choice model” from sparse data. This collection of work has resulted in novel ranking (or election) algorithms based on comparison data, recommendation systems, and efficient decision-making for business operations. The work is an excellent example of behavioral models from social science inspiring new developments in statistical inference. Shah co-founded <a href="http://celect.com" target="_blank">Celect</a>, which has been commercializing this research.</p>
<p><strong>Applications to policy in the developing world</strong></p>
<p>These approaches are also implemented to understand how well policies and programs in developing countries are performing — and how they can be improved. Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development in the Department of Economics, and colleagues in the Abdul Latif Jameel Poverty Action Lab (J-PAL) use different types of randomized-control trials to gather data to help determine to what extent certain social policies are achieving their objectives.</p>
<p>“In our work, we are interested in causal effects of a policy or intervention, or sometimes someone’s characteristics, for example their education, on an outcome or a series of outcomes,” says Duflo. “I’m never trying to look at ‘What is the entire model of someone’s behavior?’ I’m always interested in looking at ‘What is the effect of a particular cause that is, in principle, manipulatable or changeable?’”</p>
<p>Duflo and colleagues evaluate a wide variety of different programs and policies aimed at reducing poverty, including microfinance initiatives — trying to determine whether there are data to indicate they are beneficial, and also determine whether there are hindrances in the programs’ efficacy. Participation in a microfinance program may be highly variable, and might depend on different dynamics and interactions within networks, or communities, of people. Duflo and her team developed a model of “word-of-mouth diffusion” and then applied it to data on social networks and participation in a newly available microfinance loan program in 43 Indian villages. The model allowed researchers to distinguish information passing among neighbors from direct influence of neighbors’ participation decisions, as well as information passing by participants versus nonparticipants. The model estimates suggest that participants are seven times as likely as informed nonparticipants to pass information, but information passed by nonparticipants still accounts for roughly one-third of eventual participation.</p>
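The word-of-mouth mechanism can be sketched as a simple simulation: informed households tell their neighbors with some probability, and participants pass information at a higher rate than informed nonparticipants. The graph, probabilities, and participation rule in the Python sketch below are illustrative assumptions, not the study's estimated model.

```python
# Toy word-of-mouth diffusion: participants pass information more readily
# than informed nonparticipants (here by an assumed factor of seven).
import random

def diffuse(neighbors, seeds, p_participate=0.3, q_nonpart=0.05, boost=7.0,
            rounds=10, rng=None):
    """Return the set of informed households after `rounds` of word of mouth."""
    rng = rng or random.Random(0)          # seeded for reproducibility
    informed = set(seeds)
    # Each household independently decides whether to participate:
    participant = {v: rng.random() < p_participate for v in range(len(neighbors))}
    for _ in range(rounds):
        newly = set()
        for v in informed:
            # Participants pass information with a boosted probability.
            p_pass = min(1.0, q_nonpart * (boost if participant[v] else 1.0))
            for u in neighbors[v]:
                if u not in informed and rng.random() < p_pass:
                    newly.add(u)
        informed |= newly
    return informed

# A ring "village" of 50 households, with information seeded at household 0:
ring = [[(i - 1) % 50, (i + 1) % 50] for i in range(50)]
reached = diffuse(ring, seeds=[0])
```

Varying `boost` in such a sketch shows why the distinction matters: even with a large participant advantage, the many informed nonparticipants can still account for a sizable share of eventual spread.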
<p>In addition to understanding how well a socio-economic program or policy is working — or not working — and why, data can also be used to understand whether a program can be successfully replicated in other areas of the world. J-PAL is able to evaluate the same program, applied in different areas, at the same time. For example, a comprehensive program designed for Bangladesh was also evaluated in six more countries. The effects were mostly positive, in varying degrees, across the entire population in the study.</p>
<p><strong>Understanding and predicting sociopolitical change</strong></p>
<p>Complex questions related to political change, cultural dynamics, and societal transformation require an innovative combination of theory, modeling, field experiments, and algorithms. Understanding and predicting sociopolitical change demands this new set of tools within a multidisciplinary analytical framework.</p>
<p>The Department of Defense Multi-Investigator University Research Initiative (MURI) project brings together a team of researchers to address this challenge. This major collaboration involves a number of IDSS principal investigators, including Munther Dahleh, Daron Acemoglu, Fotini Christia, Ali Jadbabaie, and Asuman Ozdaglar. They have developed a framework to study collective action and collective decisions, including how local interactions among individuals and groups with different information, levels of prominence, and preferences result in the spread of information and actions. By developing theories of cascade and contagion in conjunction with field surveys and experiments, IDSS PIs are investigating social and political changes in societies (such as Arab Spring events), drawing on datasets ranging from online social networks such as Twitter, Facebook, and LinkedIn to data from Afghanistan, Iraq, and Yemen.</p>
<p>For example, as part of this project, researchers analyzed three years’ worth of cellphone call metadata from Yemen — January 2010 to October 2012 — to determine the effect of events such as drone strikes and protests on call patterns. The data also provide valuable insights into Yemeni culture and day-to-day life. The research has provided clues to the effect of drone strikes on movement patterns and social ties, and has opened a window onto how such shocks affect the way people communicate and how news of such events spreads. In other parts of this project, the PIs have investigated issues related to collective decision making and collective action — specifically, questions about how individuals make decisions by combining their own observations with the opinions of others, and how social cascades occur.</p>
<p>“This project is really an example of what IDSS is all about: It involves data, systems, and societal elements,” says Ali Jadbabaie, associate director of IDSS. “It brings together political scientists, economists, systems theorists, data scientists, and computer scientists to address important societal questions.”</p>
<p><em>This article is part of a series highlighting major areas of research and innovation at MIT’s new Institute for Data, Systems, and Society (IDSS).</em></p>
IDSS, Data, Engineering Systems, Social sciences, Economics, Political science, Networks, Game theory, Analytics, Electrical Engineering & Computer Science (eecs), Media Lab, Abdul Latif Jameel Poverty Action Lab (J-PAL), SHASS, School of Architecture and Planning, School of Engineering, ResearchApril 12 symposium: Take an immersive, intellectual journey across campushttp://news.mit.edu/2016/april-12-symposium-immersive-intellectual-journey-across-campus-0405
Beyond 2016: MIT’s Frontiers of the Future event offers a playful introduction to research at MIT.Tue, 05 Apr 2016 12:50:01 -0400MIT Institute Eventshttp://news.mit.edu/2016/april-12-symposium-immersive-intellectual-journey-across-campus-0405<p>When MIT moved from Boston to Cambridge in 1916, it built a new campus designed to foster collaboration across disparate disciplines. As the Institute celebrates the centennial of that historic move, more than a dozen faculty from multiple departments across all five schools will gather for a symposium in Kresge Auditorium on Tuesday, April 12, to present short, exciting talks on their groundbreaking research — tied together by an immersive, multimedia campus tour by foot, drone, and skateboard. Come explore!</p>
<p>President L. Rafael Reif will open the symposium session at 1:30 p.m., preceded by lunch and a graduate student poster session starting at noon. The faculty talks and multimedia tour run two hours (1:30-3:30 p.m.) and will be followed by a reception in Kresge Lobby from 3:30 to 5 p.m. <a href="https://www.regonline.com/builder/site/Default.aspx?EventID=1813625" target="_blank">Registration</a>, including lunch and reception, is free for MIT staff, faculty, and students, and $20 for other attendees. Advance registration is encouraged and will be available until 11:59 p.m. on Thursday, April 7. After the deadline, registration will be available onsite on April 12. Contact <a href="mailto:conferences@mit.edu">MIT Conference Services</a> with questions.</p>
<p><strong>Symposium program (1:30-3:30 p.m.)</strong></p>
<p>Welcome<br />
L. Rafael Reif, MIT president</p>

<p>"Emerging Markets Drive Global Solutions"<br />
Amos Winter, assistant professor in the Department of Mechanical Engineering</p>
<p>"Fluid Dynamics of Infectious Disease Transmission"<br />
Lydia Bourouiba, Esther and Harold E. Edgerton Career Development Professor in the Department of Civil and Environmental Engineering</p>
<p>"Exploring Quantum Behavior in Flatland"<br />
Pablo Jarillo-Herrero, Mitsui Career Development Associate Professor of Physics</p>

<p>"Uncovering Photosynthesis at the Nanoscale"<br />
Gabriela Schlau-Cohen, assistant professor in the Department of Chemistry</p>
<p>"What Inventions Are We Missing?"<br />
Heidi Williams, Class of 1957 Career Development Assistant Professor, Economics</p>
<p>"Mobile Technologies and Financial Inclusion in Africa"<br />
Tavneet Suri, Maurice J. Strong Career Development Associate Professor in the MIT Sloan School of Management</p>
<p>"Rethinking China’s Growth Model"<br />
Yasheng Huang, International Program Professor in Chinese Economy and Business and associate dean of the MIT Sloan School of Management</p>
<p>"From Nature-inspired Design to Design-inspired Nature"<br />
Neri Oxman, Sony Corporation Career Development Associate Professor in the MIT Media Lab</p>

<p>"Using Biology for Chemistry’s Sake"<br />
Kristala Prather, Theodore T. Miller Associate Professor in the Department of Chemical Engineering</p>
<p>"Wireless Systems that Extend Our Senses"<br />
Dina Katabi, Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science</p>
<p>"Where the Wild Things Will Be (in 100 Years)"<br />
Katharina Ribbeck, Eugene Bell Career Development Professor of Tissue Engineering in the Department of Biological Engineering</p>
<p>"Is There Music at MIT?"<br />
Marcus Thompson, Institute Professor and Robert R. Taylor Professor of Music</p>
<p>"Cities of a New Future"<br />
John Fernandez, associate professor in the Department of Architecture</p>
<p>Closing Remarks<br />
Rebecca Saxe, symposium cochair and professor of cognitive neuroscience in the Department of Brain and Cognitive Sciences</p>
<p>John Ochsendorf, chair of the MIT2016 Steering Committee, symposium cochair, and Class of 1942 Professor in the departments of Architecture and Civil and Environmental Engineering</p>

<p>The symposium is part of <a href="http://mit2016.mit.edu/" target="_blank">MIT2016: Celebrating a Century in Cambridge</a>, a program running Feb. 29 to June 4 as MIT commemorates 100 years at its “new” campus.</p>
Graphic: Tim Blackburn. Photos: Christopher HartingSpecial events and guest speakers, Faculty, Research, Century in Cambridge, Architecture, Arts, Biological engineering, Brain and cognitive sciences, Chemistry, Chemical engineering, Civil and environmental engineering, Economics, Electrical Engineering & Computer Science (eecs), Mechanical engineering, Media Lab, Music, Theater, Physics, School of Architecture and Planning, School of Engineering, SHASS, School of Science, Sloan School of ManagementMIT named No. 1 university worldwide for architecture, No. 2 for art and designhttp://news.mit.edu/2016/mit-named-no-1-university-worldwide-architecture-no-2-art-and-design-0404
QS World University Rankings give top ratings to MIT in architecture, arts programs. Mon, 04 Apr 2016 18:05:01 -0400School of Architecture and Planninghttp://news.mit.edu/2016/mit-named-no-1-university-worldwide-architecture-no-2-art-and-design-0404<p>For the second year in a row, MIT has been named the top university in the world for architecture/built environment in the latest subject rankings from QS World University Rankings. In art and design, the Institute ranked No. 2 globally, a jump from fourth position in 2015. Ten other subject areas at MIT were ranked No.&nbsp;1.</p>
<p>“This ranking is testament to the success of the MIT model of research and teaching and its global commitment to address complex societal problems,” says School of Architecture and Planning Dean Hashim Sarkis. “MIT’s impact is only possible through a combination of advanced research and broad interdisciplinary collaboration. The faculty, staff, and students are to be congratulated on all the hard work that leads to this recognition.”</p>
<p>According to survey sponsor Quacquarelli Symonds, an education organization based in the U.K., the annually published subject area rankings “aim to help prospective students identify the world’s leading schools in their chosen field.”</p>
<p>On the high ranking of the arts at MIT, Associate Provost Philip S. Khoury says:</p>
<p>“The arts thrive at MIT because of our commitment to cross-disciplinary study and to a curriculum based in experimentation and imaginative problem solving. Studying the arts, in combination with the extraordinary discoveries in science and engineering, prepares our students to make creative and innovative contributions in multiple fields.”</p>
<p>The QS rankings reflect academic reputation, reputation with employers, and research impact. Reputation measurements are based on surveys of academics and employers; the 2016 survey polled 76,798 academics from around the globe, along with 44,426 graduate employers.</p>
<p>In addition to architecture and art and design, MIT also ranked highly (No. 5 or higher) in the following subject areas for 2016: linguistics, chemical engineering, civil and structural engineering, computer science and information systems, electrical engineering, mechanical/aeronautical/manufacturing engineering, mineral and mining engineering, biological sciences, physics and astronomy, mathematics, environmental science, Earth and marine sciences, chemistry, materials science, accounting and finance, statistics and operational research, and economics.</p>
Rankings, Arts, Design, Architecture, Urban studies and planning, School of Architecture and PlanningPresident Serzh A. Sargsyan of Armenia visits MIThttp://news.mit.edu/2016/president-serzh-sargsyan-armenia-visits-0330
Tour includes meeting with MIT President L. Rafael Reif, visit to Media Lab.Wed, 30 Mar 2016 17:00:00 -0400Peter Dizikes | MIT News Officehttp://news.mit.edu/2016/president-serzh-sargsyan-armenia-visits-0330<p>President of Armenia Serzh A. Sargsyan made an innovation-centered visit to MIT on Tuesday, meeting Institute leaders and viewing demonstrations of new technologies at the MIT Media Lab.</p>
<p>The visit began with a meeting between Sargsyan and MIT President L. Rafael Reif, who noted the contributions Armenian scholars had made at MIT and expressed the Institute’s interest in continued collaboration with Armenia. &nbsp;</p>
<p>Sargsyan also toured the Media Lab, where researchers work on a diverse range of experimental technologies; his visit there included an introductory discussion with Media Lab co-founder Nicholas Negroponte.</p>
<p>“If you want to do something impossible, come here,” Negroponte said, summarizing the lab’s philosophy.</p>
<p>Sargsyan also gave an evening talk in MIT’s Samberg Conference Center, at an event hosted by the Luys Foundation, which funds scholarships for Armenian students, and the Noubar and Anna Afeyan Foundation, a philanthropic organization based in Massachusetts. Noubar Afeyan, a senior lecturer at the Martin Trust Center for MIT Entrepreneurship, is a prominent executive and investor in the biomedical sector.</p>
<p>“With a wealth of teaching tradition, innovative ideas, and achievements, your Institute has made an essential contribution to the development of humanity,” said Sargsyan at the event, in remarks released by the Armenian government.</p>
<p>Sargsyan added: “In the 155 years of its existence, this Institute has not only withstood and adapted to but also become a leader of constant progress and change.”</p>
<p><strong>Media Lab philosophy</strong></p>
<p>In their sit-down discussion, Sargsyan and Negroponte discussed a variety of topics, including the value of having a broad portfolio of research projects.</p>
<p>“Even if something fails, the activity that goes on is still so important to everybody,” Negroponte said, emphasizing that people can still become better researchers even when working on projects that are not fully commercialized.</p>
<p>Negroponte also outlined the Media Lab’s relationships with corporate backers, who help fund many projects while giving lab researchers free rein to do their work.</p>
<p>During the conversation, Sargsyan inquired specifically how companies find out about the potential usefulness of the technologies that are being developed at the Media Lab. Negroponte responded that firms should send representatives to immerse themselves in the work being done at the lab.</p>
<p>The best way for those executives to “extract value” from the place, Negroponte emphasized, was to “be like a student … and integrate themselves into a lab.”</p>
<p><strong>Thinking a move ahead</strong></p>
<p>Sargsyan, 61, is in his second term as the president of Armenia. He was first elected in 2008 and re-elected in 2013.</p>
<p>Sargsyan was also shown demonstrations of projects from multiple Media Lab research groups, including the Tangible Media group, directed by Hiroshi Ishii; the Lifelong Kindergarten group, directed by Mitchel Resnick; and the Responsive Environments group, directed by Joseph Paradiso. He was also given a tour of the Fab Lab on the ground floor of the Media Lab, by Sherry Lassiter, program manager for MIT’s Center for Bits and Atoms.</p>
<p>Resnick gave Sargsyan a demonstration of the ways students can learn some basic elements of computer programming and even translated into Armenian a recipe that children had created.&nbsp;</p>
<p>“It’s not enough for children to be at a computer and play a game,” said Resnick. “We want children to be inventors and creators.”</p>
<p>In conversation with Resnick, Sargsyan acknowledged that he had never done any programming — although, as a highly knowledgeable chess aficionado, he is used to the concept of thinking about sequences of logical decisions.</p>
<p>“Chess requires the same kind of systematic thinking as programming,” Resnick observed.</p>
“In the 155 years of its existence, this Institute has not only withstood and adapted to, but also become a leader of constant progress and change,” said Armenian President Serzh A. Sargsyan (pictured).Photo: Casey AtkinsSchool of Architecture and Planning, Sloan School of Management, Media Lab, President L. Rafael Reif, Special events and guest speakersReflection-removing camerahttp://news.mit.edu/2016/reflection-removing-camera-0325
Device uses depth sensor and signal processing to capture clear images through windows. Fri, 25 Mar 2016 14:00:00 -0400Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/reflection-removing-camera-0325<p>In recent years, computer scientists have been investigating a range of techniques for removing reflections from digital photographs shot through glass. Some have tried to use variability in focal distance or the polarization of light; others, like those <a href="http://news.mit.edu/2015/algorithm-removes-reflections-photos-0511">at MIT</a>, have exploited the fact that a pane of glass produces not one but two reflections, slightly offset from each other.</p>
<p>At the Institute of Electrical and Electronics Engineers’ International Conference on Acoustics, Speech, and Signal Processing this week, members of the MIT Media Lab’s Camera Culture Group will present a fundamentally different approach to image separation. Their system fires light into a scene and gauges the differences between the arrival times of light reflected by nearby objects — such as panes of glass — and more distant objects.</p>
<p>In <a href="http://news.mit.edu/2012/camera-sees-around-corners-0321">earlier</a> <a href="http://news.mit.edu/2011/trillion-fps-camera-1213">projects</a>, the Camera Culture Group has measured the arrival times of reflected light by using an ultrafast sensor called a streak camera. But the new system uses a cheap, off-the-shelf depth sensor of the type found in video game systems.</p>
<p>At first glance, such commercial devices would appear to be too slow to make the fine discriminations that reflection removal requires. But the MIT researchers get around that limitation with clever signal processing. Consequently, the work could also have implications for noninvasive imaging technologies such as ultrasound and <a href="http://news.mit.edu/2010/terahertz-laser-1216">terahertz imaging</a>.</p>
<p>“You physically cannot make a camera that picks out multiple reflections,” says Ayush Bhandari, a PhD student in the MIT Media Lab and first author on the new paper. “That would mean that you take time slices so fast that [the camera] actually starts to operate at the speed of light, which is technically impossible. So what’s the trick? We use the Fourier transform.”</p>
<p>The <a href="http://news.mit.edu/2009/explained-fourier">Fourier transform</a>, which is ubiquitous in signal processing, is a method for decomposing a signal into its constituent frequencies. If fluctuations in the intensity of the light striking a sensor, or in the voltage of an audio signal, can be represented as an erratic up-and-down squiggle, the Fourier transform redescribes them as the sum of multiple, very regular squiggles, or pure frequencies.</p>
<p><strong>Phased out</strong></p>
<p>Each frequency in a Fourier decomposition is characterized by two properties. One is its amplitude, or how high the crests of its waves are. This describes how much it contributes to the composite signal.</p>
<p>The other property is phase, which describes the offset of the wave’s troughs and crests. Two nearby frequencies may be superimposed, for instance, so that their first crests are aligned; alternatively, they might align so that the first crest of one corresponds with a trough of the other. With multiple frequencies, differences in phase alignment can yield very different composite signals.</p>
<p>If two light signals — one reflected from a nearby object such as a window and one from a more distant object — arrive at a light sensor at slightly different times, their Fourier decompositions will have different phases. So measuring phase provides a de facto method for measuring the signals’ time of arrival.</p>
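<p>The shift-to-phase relationship the system relies on is easy to verify numerically (an illustrative sketch, not the researchers' code): delaying a signal by τ seconds rotates its Fourier coefficient at frequency f by 2πfτ radians, so the arrival-time difference can be read off the phase.</p>

```python
import cmath
import math

# Illustrative sketch (not the researchers' actual system): a delay of tau
# seconds between two otherwise identical pulses multiplies each Fourier
# coefficient by exp(-2j*pi*f*tau), encoding arrival time in phase.
N = 1000
fs = 1000.0   # samples per second
tau = 0.010   # 10 ms between the "window" and "scene" reflections

def gaussian_pulse(center):
    return [math.exp(-((i / fs - center) ** 2) / (2 * 0.02 ** 2)) for i in range(N)]

def dft_bin(x, k):
    """k-th DFT coefficient of x (naive sum, for illustration only)."""
    return sum(v * cmath.exp(-2j * cmath.pi * k * n / len(x))
               for n, v in enumerate(x))

near = gaussian_pulse(0.3)        # reflection off the glass
far = gaussian_pulse(0.3 + tau)   # reflection off the scene behind it

k = 5  # a low-frequency bin: f = k * fs / N = 5 Hz
phase_diff = cmath.phase(dft_bin(far, k) / dft_bin(near, k))
print(phase_diff)  # ≈ -2*pi*5*0.010 ≈ -0.314
```

<p>Inverting that relationship (phase divided by 2πf) recovers the 10-millisecond delay, which is the de facto time-of-arrival measurement described above.</p>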
<p>There’s one problem: A conventional light sensor can’t measure phase. It only measures intensity, or the energy of the light particles striking it. And in other settings, such as terahertz imaging, measuring phase as well as intensity can dramatically increase costs.</p>
<p>So Bhandari and his colleagues — his advisor, Ramesh Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences; Aurélien Bourquard, a postdoc in MIT’s Research Laboratory of Electronics; and Shahram Izadi of Microsoft Research — instead made a few targeted measurements that allowed them to reconstruct phase information.</p>
<p>In collaboration with Microsoft Research, the researchers developed a special camera that emits light only of specific frequencies and gauges the intensity of the reflections. That information, coupled with knowledge of the number of different reflectors positioned between the camera and the scene of interest, enables the researchers’ algorithms to deduce the phase of the returning light and separate out signals from different depths.</p>
<p><strong>Reasonable assumptions</strong></p>
<p>The algorithms adapt a technique from X-ray crystallography known as <a href="https://en.wikipedia.org/wiki/Phase_retrieval">phase retrieval</a>, which earned its inventors the Nobel Prize in chemistry in 1985. “We can also exploit the fact that there should be some continuity in the intensity values in 2-D,” says Bourquard. “If your planes, for instance, are a glass window and a scene behind it, both these planes should exhibit some spatial continuity. Typically, the intensity values will not vary too fast on every separate plane. So essentially, what this phase retrieval does is use some techniques of frequency estimation, coupled with the assumption that local intensity variations within every single plane are moderate relative to the average intensity difference between these planes.”</p>
<p>In theory, the number of light frequencies the camera needs to emit is a function of the number of reflectors. If there is just one pane of glass between the camera and the scene of interest, the technique should require only two frequencies. If there are two panes of glass, the technique should require four frequencies.</p>
<p>But in practice, the light frequencies emitted by the camera are not pure, so additional measurements are required to filter out noise. In their experiments, the researchers swept through 45 frequencies to enable almost perfectly faithful image separation. That takes a full minute of exposure time, but it should be possible to make do with fewer measurements. “The interesting thing is that we have a camera that can sample in time, which was previously not used as machinery to separate imaging phenomena,” Bhandari says.</p>
<p>“What is remarkable about this work is the mixture of advanced mathematical concepts, such as sampling theory and phase retrieval, with real engineering achievements,” says Laurent Daudet, a professor of physics at Paris Diderot University. “I particularly enjoyed the final experiment, where the authors used a modified consumer product — the Microsoft Kinect One camera — to produce the untangled images. For this challenging problem, everyone would think that you’d need expensive, research-grade, bulky lab equipment. This is a very elegant and inspiring line of work.”</p>
Members of the MIT Media Lab’s Camera Culture Group devised a new approach to image separation in photographs. Their system fires light into a scene and gauges the differences between the arrival times of light reflected by nearby objects — such as panes of glass — and more distant objects.Courtesy of the researchersResearch, School of Architecture and Planning, Computer science and technology, Media Lab, PhotographyPlotting the complex path of productshttp://news.mit.edu/2016/startup-sourcemap-supply-chains-0324
Startup’s software maps out convoluted supply chains for companies and consumers.Wed, 23 Mar 2016 23:59:59 -0400Rob Matheson | MIT News Officehttp://news.mit.edu/2016/startup-sourcemap-supply-chains-0324<p>In March 2011, Leonardo Bonanni ’03, SM ’05, PhD ’12 was preparing to defend his PhD thesis on Sourcemap, software that lets consumers map every connection of a product supply chain on a digital map, when tragedy struck in Japan. Although the deadly earthquake and tsunami occurred half a world away, the events had an unexpected impact on Bonanni and Sourcemap.</p>
<p>In the aftermath, automobile, electronic, chemical, and retail sectors worldwide, which relied on Japanese manufacturers for parts and materials, suffered massive shortages. Few affected companies knew enough about the complex Japanese supply chain to respond to such an immense disruption.</p>
<p>“Companies hadn’t been keeping good mapping records of where their suppliers were, or where their suppliers’ suppliers were, so they asked us to deploy [Sourcemap] inside their supply chains,” Bonanni says. “All of a sudden all these maps — the first to show products all the way from raw materials to consumers, every step of the way — became a critical tool for manufacturing companies.”</p>
<p>Motivated by this incident, Bonanni launched Sourcemap commercially so companies could keep better tabs on their supply chains. Today, dozens of pharmaceutical firms, food producers, and clothing and electronics companies use Sourcemap.</p>
<p>The software, which operates in the cloud, gives companies a visual map of all connected global supplier locations for their products. The software also pulls news feeds on significant political events or environmental disasters to alert companies to potential interruptions.</p>
<p>Sourcemap has also continued its original mission of increasing the transparency of supply chains for consumers. The company offers an open-source platform for anyone to publish supply chain maps. And today Sourcemap announced what Bonanni calls “the most ambitious supply-chain transparency project ever.”</p>
<p>Partnering with Imaflora, a certifier in Brazil, Sourcemap is deploying an online tool, called Origens Brasil, that allows consumers in the country to enter serial numbers of numerous products and see exactly where the products came from. Moreover, Bonanni says, consumers can see if certain products — such as cocoa, palm oil, and paper — come from producers that are contributing to deforestation of the Amazon rainforest.</p>
<p>To do so, the software looks at Global Forest Watch satellite imagery for any noticeable decrease in leaf cover where all ingredients of a product are made. The tool could also be attractive for multinational corporations pledging to support only producers that do not contribute to deforestation, Bonanni says. “This is a totally new level of confidence that you can have when you buy things,” he says.</p>
<p><strong>Seeing the “zigzagging”</strong></p>
<p>Supply chains are surprisingly complex systems of organizations, people, and resources that transform raw materials into finished consumer products. Major supply chains, such as those for mobile phones, can sometimes include hundreds or thousands of direct and indirect suppliers of hardware, components, parts, and software.</p>
<p>In fact, Bonanni says, when most companies see their entire supply chains mapped instantly for the first time, they’re quite shocked. “They are struck by the amount of zigzagging that is going on,” he says.</p>
<p>One of the most complex supply chains Bonanni has seen was the first one he tried mapping himself 10 years ago, as part of a research project through the MIT Media Lab and MIT Center for Civic Media: that of the computer chip. One chip, he says, consists of about 50 different materials —&nbsp;including some rare-earth elements — produced from at least that many countries. He never finished the map.</p>
<p>“The computer chip will travel dozens of times around the Earth, if you add up all the paths of all of its subcomponents, before it’s finally created,” he says. “What’s amazing is you have millions of people to make a computer chip, and billions of people now own computer chips. So if you draw the whole supply chain of semiconductors, it’s a social network that includes almost everyone on the planet.”</p>
<p>Traditionally, companies hire consultants to map their supply chains. This is a labor-intensive process, where consultants track down parts and materials, draw out maps with computer tools, and analyze for risks and opportunities. Sourcemap, however, is automated, fast, and visually appealing, Bonanni says.</p>
<p>Using Sourcemap, companies upload product and supplier data, and revenue from each product line. Surveys are sent to suppliers to provide missing information, such as where their raw materials and subcontractors are located. Supply chains appear as color-coded dots (supplier locations) and lines (shipment paths) connecting across a map. For each supplier, Sourcemap reveals the inventory count and calculates the money the company would lose if that location went offline for any reason, accounting for time to find a new supplier and how long inventory will last. Companies can also modify supply chains to plan for new products, customers, or suppliers.</p>
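<p>A back-of-the-envelope version of that value-at-risk calculation might look like the following (the function name and formula are hypothetical illustrations, not Sourcemap's actual model): inventory covers the first days of an outage, and revenue is lost for whatever remains of the supplier-requalification period.</p>

```python
# Hypothetical sketch of the kind of supplier value-at-risk estimate
# described above (names and formula are illustrative, not Sourcemap's).
def revenue_at_risk(daily_revenue, inventory_days, days_to_requalify_supplier):
    """Revenue lost while a disrupted supplier is being replaced.

    Inventory absorbs the first `inventory_days` of the outage; after
    that, production halts until a new supplier is qualified.
    """
    uncovered_days = max(0, days_to_requalify_supplier - inventory_days)
    return daily_revenue * uncovered_days

# 45-day requalification minus 14 days of inventory leaves 31 days exposed.
print(revenue_at_risk(10_000, 14, 45))  # 310000
```

<p>When inventory outlasts the requalification window, the exposure is zero, which is why the inventory count at each location matters as much as the supplier's position on the map.</p>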
<p>The software also employs predictive analytics to analyze real-time news feeds about disasters, corruption, local conflicts, or climate change, and alert companies to find alternate routes to avert disaster. It will track customer demographics in different locations to help companies decide on branching out to new markets.</p>
<p>“[Companies] see whether they need to think strategically about shifting the company direction in a big way to avoid running into a wall, whether that means climate change is making the crop that you rely on harder to grow or the demographics of your customer base is shifting,” Bonanni says. “It’s not until you see the entire operations mapped on one screen that you can start to make those decisions.”</p>
<p><strong>Opening up visibility</strong></p>
<p>According to Bonanni, companies are beginning to share their data to make supply chains&nbsp;— for products such as minerals, apparel, or cocoa — more transparent within industry. Under increased public scrutiny, he adds, companies are becoming more socially conscious of their supply chains, as well.</p>
<p>In 2010, when Sourcemap was still a research project, Stonyfield Farm mapped their entire ingredients supply chain for the public. In 2013, Sourcemap partnered with Mars Chocolate on Vision for Change (V4C), an initiative to improve cocoa productivity and open up direct connections with cocoa farmers to improve their livelihoods. Last year, Fairphone, a company that aims to make environmentally sustainable smartphones, became the first electronics company to publish a Sourcemap of its supply chain, which included more than 300 parts from companies from Japan to North America.</p>
<p>Origens Brasil is another step toward Sourcemap’s mission of greater supply chain transparency, Bonanni says. “The goal for us is to enable that type of visibility, enable companies to see beyond their walls,” he says. “For financial gain, but also for social good, to make sure they’re not using up natural or social resources faster than they can be replenished.”</p>
Sourcemap gives companies a visual map of all connected global supplier locations for their products. The software also pulls news feeds on significant political events or environmental disasters to alert companies to potential interruptions. Courtesy of SourcemapSupply chains, Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Media Lab, Center for Civic Media, Center for Transportation and Logistics, Software, School of Architecture and Planning, SHASS, School of Engineering, Sustainability, Business and managementArchitecture symposium March 30 and 31: Exploring the campus then, now, nexthttp://news.mit.edu/2016/architecture-symposium-exploring-campus-then-now-next-0315
Event welcomes more than 20 speakers over four sessions on designing places for inventing the future.
Tue, 15 Mar 2016 15:40:01 -0400MIT Institute Eventshttp://news.mit.edu/2016/architecture-symposium-exploring-campus-then-now-next-0315<p>College campuses have long played a vital role in our society as educators of future generations, incubators for innovation and economic development, and partners with the communities we serve. As MIT celebrates a century in Cambridge, Massachusetts, leaders in campus design and educational innovation will convene in Kresge Auditorium on March 30 and 31 to share ideas on the past, present, and future of campus architecture and design, as well as MIT’s role as an innovative campus. <a href="https://www.regonline.com/builder/site/Default.aspx?EventID=1804141" target="_blank">Registration</a> is open and there is no fee for MIT faculty, staff, and students; lunch is included on both days. In each session, participants will have opportunities for Q&amp;A with the speakers.</p>
<p>Chancellor Cynthia Barnhart will open day 1 of the symposium at 1 p.m. on March 30, followed by a session chaired by School of Architecture and Planning (SA+P) Dean Hashim Sarkis and focusing on campus architecture and design, including the story of MIT’s Main Group and its influence on other campuses during the past 100 years. Topics include campus planning and learning spaces, with insight into the design of two newer campuses: the Singapore University of Technology and Design and the Skolkovo Institute of Science and Technology.</p>
<p>Speakers include:</p>
<ul>
<li>Mark Jarzombek, professor of the history and theory of architecture at MIT;</li>
<li>Hilary Ballon, professor of urban studies and architecture at New York University and deputy vice chancellor at NYU Abu Dhabi;</li>
<li>David Adjaye, principal at Adjaye Associates, London and New York;</li>
<li>Christian Veddeler, director and senior architect at UNStudio; and</li>
<li>Julie Newman, director of the Office of Sustainability and lecturer in the Department of Urban Studies and Planning at MIT.</li>
</ul>
<p>Attention turns next to the innovation districts that are growing in cities worldwide, thanks to partnerships between universities, government, and industry. Former SA+P dean and professor of architecture Adèle Naudé Santos chairs a session comprising reflections on incubating urban innovation spaces such as Kendall Square (Cambridge, Massachusetts) and Roosevelt Island (New York, New York).</p>
<p>Speakers include:</p>
<ul>
<li>Israel Ruiz, executive vice president and treasurer of MIT;</li>
<li>Katie Stebbins, assistant secretary of technology, innovation and entrepreneurship for the Commonwealth of Massachusetts;</li>
<li>Roger Duffy, design partner at Skidmore, Owings and Merrill LLP, New York;</li>
<li>Marion Weiss, Graham Chair Professor of Architecture at the University of Pennsylvania and cofounder of WEISS/MANFREDI Architecture/Landscape/Urbanism, New York; and</li>
<li>Carlo Ratti, professor of the practice and director of the SENSEable City Lab at MIT.</li>
</ul>
<p>Following a welcome to day 2 of the symposium at 9:30 a.m. on March 31 by Provost Martin Schmidt, participants will hear reports from speakers at the front lines of experiments in education at the university, secondary, and childhood levels. Dean for Graduate Education Christine Ortiz moderates a panel discussion with:</p>
<ul>
<li>Thomas Magnanti, president of the Singapore University of Technology and Design and Institute Professor at MIT;</li>
<li>Mitchel Resnick, LEGO Papert Professor of Learning Research at the MIT Media Lab; and</li>
<li>Saeed Arida, founder and chief excitement officer of NuVu Studio.</li>
</ul>
<p>The final session looks to the future of online learning, a future to be determined by our comprehension of its challenges and opportunities. From lessons learned to ongoing data-driven educational experiments, how are we thinking differently today — now that we know what we know? Panelists include:</p>
<ul>
<li>Sanjay Sarma, vice president for open learning at MIT;</li>
<li>Susan Singer, division director for the Division of Undergraduate Education at the National Science Foundation and the Laurence McKinley Gould Professor in biology and cognitive science at Carleton College;</li>
<li>Paul LeBlanc, president of Southern New Hampshire University; and</li>
<li>Anant Agarwal, CEO of edX and professor of electrical engineering and computer science at MIT.</li>
</ul>
<p>The Designing Places for Inventing the Future: The Campus Then, Now, Next symposium is free (with <a href="http://www.regonline.com/builder/site/Default.aspx?EventID=1804141" target="_blank">registration</a>) to all MIT faculty, staff, and students, and <a href="http://www.regonline.com/builder/site/Default.aspx?EventID=1804141" target="_blank">open to the public</a>, with one-day ($20) and two-day ($40) passes available.</p>
<p>The symposium is part of <a href="http://mit2016.mit.edu/" target="_blank">MIT2016: Celebrating a Century in Cambridge</a>, a program running Feb. 29 to June 4 as MIT commemorates 100 years at its&nbsp;“new” campus. Registration is also open for the April 12 symposium, <a href="http://mit2016.mit.edu/symposia/frontiers" target="_blank">Beyond 2016: MIT’s Frontiers of the Future</a>.</p>
Photo collage: Tim BlackburnCentury in Cambridge, Architecture, Urban studies and planning, Innovation and Entrepreneurship (I&E), online learning, Department of Electrical Engineering and Computer Science (EECS), School of Engineering, Media Lab, School of Architecture and Planning, Data, Design, Arts, Community, Technology and societyInnovating for billionshttp://news.mit.edu/2016/innovating-billions-emerging-worlds-initiative-0309
Emerging Worlds, a Media Lab and Tata Center initiative, seeks to catalyze change on a massive scale.Wed, 09 Mar 2016 18:15:01 -0500Shriya Parekh | Ben Miller | Tata Center for Technology and Designhttp://news.mit.edu/2016/innovating-billions-emerging-worlds-initiative-0309<p>Every 12 years, Nashik, India, is the site of the Kumbh Mela, a religious festival that draws crowds in excess of 20 million to the sacred Godavari River. During this one-month period, Nashik — a mid-sized agricultural center without an airport — temporarily becomes one of the largest cities in the world.</p>
<p>How can a city’s infrastructure cope with this sudden influx of pilgrims, and the demands for food, water, shelter, and safety they bring with them? And how can communities worldwide become more resilient and livable? These kinds of questions drive Associate Professor Ramesh Raskar, a native of Nashik, and his Camera Culture group at the MIT Media Lab.</p>
<p>With support from the <a href="http://tatacenter.mit.edu/" target="_blank">MIT Tata Center for Technology and Design</a>, Raskar’s group is forging a new co-innovation model linking researchers at MIT with energetic students in India to work on problems across key fields such as health care, education, and the environment. They are using Nashik as a proving ground for solutions that can work at massive scale, under the umbrella of an initiative called <a href="http://www.redx.io/emerging-worlds-1/#emerging-worlds" target="_blank">Emerging Worlds</a>.</p>
<p>This January, the Camera Culture group traveled to Nashik to host a week-long innovation camp, where they mentored teams of students who had traveled from all around India to participate.</p>
<p>"The real world of innovation is not in Boston,” Raskar said at the camp’s opening session. “You have to get out in the world, collaborate, and apply research. For innovators, Nashik is a perfect starting point."</p>
<p>No one knows this better than Shantanu Sinha, who attended a camp in 2013, when he was an undergraduate at the Indian Institute of Technology Bombay. Now, he’s a master’s student at MIT and a fellow in the Tata Center. He says that during these camps, the group looks for two things: interesting problems and interesting people.</p>
<p>“We think of these camps as a way to find exciting problem statements and vet solutions in the field,” he says. “It’s also a way to recruit talented people to work with us long-term.”</p>
<p>That’s what makes the Emerging Worlds model different from hackathons, incubators, and accelerators, according to Raskar. It goes far beyond the one-week camps; there are now three permanent innovation centers in India (Hyderabad, Mumbai, and the new <a href="https://www.digitalimpactsquare.com/" target="_blank">DISQ Innovation Centre</a> in Nashik) where teams of young researchers collaborate with their colleagues 8,000 miles away at MIT.</p>
<p>Raskar says the new Nashik center will “provide a readymade pilot site for many MIT and non-MIT projects. This way, researchers don’t spend unnecessary time finding stakeholders and scheduling meetings. It all happens in an integrated ecosystem.”</p>
<p>“The motivation for building these centers, from our point of view, is that we need constant support in India,” says Anshuman Das, a postdoc in the Media Lab and Tata Center. “We can’t just come twice a year and hope to make a difference. Our efforts need to go on all year.”</p>
<p><strong>Collaborating across continents</strong></p>
<p>“Startups, incubators, and entrepreneurship may not be the only venture model for India,” Raskar says. He sees co-innovation between universities, governments, and the private sector as a promising avenue for tackling complex challenges.</p>
<p>“Ramesh says innovation is all about people,” says Tata Fellow Tristan Swedish, a master’s student in the Media Lab. “It’s so great to talk to diverse people and understand what their ideas are.”</p>
<p>One of Camera Culture’s focus areas is affordable, high-impact health technologies. These innovations aim to fill the gap for underserved communities in India, where primary care doctors and specialists are not readily available, by aiding frontline health care workers and allowing people to screen for diseases at early stages.</p>
<p>Sinha and Swedish’s work focuses on early diagnosis of the conditions that cause preventable blindness. Sinha is developing an easy-to-use ocular imaging device to enable out-of-clinic examination of the anterior segment of the eye on a large scale. Swedish is developing a new class of user-centric retinal imaging systems inspired by computational photography and machine learning.</p>
<p>The co-innovation model has made it easier for them to iterate through designs. Working with the LV Prasad Eye Institute in Hyderabad, they are able to confer with doctors on the needs of the patients and design devices based on their input. Pushyami Rachapudi, a master’s student at International Institute of Information Technology Hyderabad, has worked with the team since January 2015, and is instrumental in transitioning ideas into clinic-ready devices that have the potential to be deployed through LV Prasad’s network.</p>
<p>Raskar’s success with the <a href="http://news.mit.edu/2015/eyenetra-mobile-eye-test-prescription-virtual-reality-screens-1019" target="_self">EyeNetra</a>, a mobile eye-test device developed in collaboration with LV Prasad, helped spawn this model.</p>
<p>“Moving from the initial idea to a device is really difficult, so we need someone on the ground who can provide us the right context and design parameters,” says Swedish, noting that the Hyderabad and Cambridge labs are in daily communication.</p>
<p>Rachapudi even did a six-month internship at the Media Lab. “We thought it would be a great opportunity to learn from her,” Sinha says.</p>
<p>Meanwhile, Das and Tata Fellows Mrinal Mohit and Guy Satat are exploring a similar approach to ear, skin, and oral imaging, with help from their collaborators in Mumbai.</p>
<p>“It’s very efficient to do a lot of research at MIT, where we have great facilities,” Mohit says. “Once we nail that down, the collaborations we have in India help us validate the technology.”</p>
<p>This method allows Camera Culture to have a fast-moving, iterative prototyping process, with diverse teams of engineers, software developers, and designers working literally around the clock on opposite sides of the globe.</p>
<p>“I am a maker and I love to build things,” says Akshat Wahi, who works at the center in Mumbai. “MIT Emerging Worlds gave me an opportunity to apply my skills in new ways that I hadn’t imagined before.”</p>
<p>Wahi and others like him have forgone the chance to earn higher salaries at big corporations, opting to join Emerging Worlds instead. Das attributes it to their desire to “do something bigger than just a job.”</p>
<p>Sai Sri Sathya, a software engineer, left Microsoft to join the effort in Mumbai. “The impact I could create at Microsoft was much less than what I can do for Emerging Worlds.”</p>
<p>Raskar is hoping that impact will eventually reach billions of people — starting with his hometown.</p>
The Emerging Worlds innovation lab in Mumbai is a home away from home for MIT graduate students (l-r) Alicia Chong, Mrinal Mohit, Tristan Swedish, and Shantanu Sinha.Photo: Shriya ParekhTata Center, Media Lab, India, International development, International initiatives, Global, Technology and society, Startups, Innovation and Entrepreneurship (I&E), Health careWristband detects and alerts for seizures, monitors stress http://news.mit.edu/2016/empatica-wristband-detects-alerts-seizures-monitors-stress-0309
Wearable tracks increased skin conductance that signals stress, helps identify dangerous seizures.Wed, 09 Mar 2016 12:30:00 -0500Rob Matheson | MIT News Officehttp://news.mit.edu/2016/empatica-wristband-detects-alerts-seizures-monitors-stress-0309<p>People with epilepsy suffer from recurrent, unprovoked seizures that can cause injury and even death from “sudden unexpected death in epilepsy” (SUDEP), a condition that occurs minutes after a seizure ends.</p>
<p>Now Empatica, co-founded by MIT professor and wearables pioneer Rosalind Picard, has developed a medical-quality consumer wristband, called Embrace, that monitors stress signals to detect potentially deadly seizures and alert wearers and caregivers, so they can intervene.</p>
<p>Researchers worldwide are using a scientific version of the wristband, called the E4, which also measures other signals, to study epilepsy and other neurological and psychiatric conditions. Numerous academic papers have now been published showing that combining electrodermal activity (EDA), also known as skin conductance, with motion data collected from the wrist improves the accuracy of seizure detection over using motion data alone.</p>
<p>Empatica is now preparing to release Embrace, “a consumer-looking, but medical-quality device” for monitoring stress and seizures, says Picard, a professor of media arts and sciences in the MIT Media Lab and Empatica’s chief scientist. After a successful Indiegogo campaign last year, the beta version of Embrace shipped to backers last Friday.</p>
<p>Apart from detecting seizures, anyone can also use the wristbands to monitor stress levels — which is important for good health, Picard says. Chronic stress has been linked to numerous health issues such as heart disease, obesity, and diabetes. “Stress signals reach every organ of your body, so these stress signals are potentially influencing everything,” Picard says. “Sometimes you don’t realize [you’re stressed] until you get that just-in-time notice.”</p>
<p><strong>Better stress detection for all</strong></p>
<p>According to the World Health Organization, roughly 50 million people worldwide suffer from epilepsy. The Centers for Disease Control and Prevention estimates that about one in every 1,000 people with epilepsy dies annually from SUDEP, a possible result of suffocation from impaired breathing, fluid in the lungs, or seizing while sleeping face down. Rates are significantly higher for people who have had at least one grand mal seizure — one of the most dangerous types of seizures — in the past year, Picard says.</p>
<p>With Embrace, Empatica aims to aid people suffering from epilepsy by helping them better alert loved ones, Picard says. An app that comes with Embrace lets wearers and others monitor when the person might be having a grand mal seizure.</p>
<p>The wristbands resemble watches but have a solid silver or black face. Sensors underneath the face track pulse, body motion, temperature, and EDA, which involves subtle electrical changes across the skin. Boosts in EDA, without accompanying changes in motion, can signal stress. In people with epilepsy, a sharp rise in both signals could indicate a severe, potentially life-threatening seizure.</p>
<p>When the wristband detects a seizure, it vibrates, and the wearer can respond. If the wearer becomes unconscious, which happens with the most dangerous seizures, and doesn’t respond quickly, the app sends an alert to a designated individual.</p>
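The alerting logic described above (a rise in EDA without motion suggests stress; a sharp rise in both signals suggests a possible grand mal seizure; no timely response triggers a caregiver alert) can be caricatured as a toy threshold rule. This is a purely illustrative sketch: the function name, thresholds, and z-scored inputs are invented here, and Empatica's actual, clinically validated detection algorithms are not public.

```python
def classify(eda_rise, motion_rise, eda_thresh=2.0, motion_thresh=1.5):
    """Toy classifier for the pattern the article describes.

    Inputs are assumed to be z-scored rises over a recent baseline;
    the thresholds are invented illustrative values, not Empatica's.
    """
    if eda_rise > eda_thresh and motion_rise > motion_thresh:
        # Sharp rise in both EDA and motion: the wristband would vibrate,
        # and an unresponsive wearer would trigger a caregiver alert.
        return "possible seizure"
    if eda_rise > eda_thresh:
        # EDA jump without accompanying motion: a plain stress signal.
        return "elevated stress"
    return "normal"
```

Note that motion alone (say, exercise) is deliberately not flagged here, matching the article's point that it is the combination of EDA and motion, not motion data by itself, that improves seizure detection.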
<p>“If somebody goes to check on a person during or after they have had a grand mal seizure, then they are less likely to die,” Picard says. “In some cases, simply saying the person’s name or turning them over (gentle stimulation) might save their life. Anybody could do this potentially life-saving action, they just need to know to go check on the person — don’t leave them alone right after a seizure.”</p>
<p>Additionally, teachers and parents may want to monitor the stress levels of a child with emotion regulation issues or autism. The device may determine, for instance, if a child is experiencing a “fight-or-flight response,” and can be set to vibrate to alert parents or teachers. “You can see if the child lying on the floor or on the ground in the playground might be about to have a meltdown … even though they may look calm outwardly,” she says. “Several teens with autism told us they often can’t tell they’re about to explode until it’s too late. Maybe this could help some of them get an alert while they’re still in control.”</p>
<p>For epilepsy researchers, Picard says, the E4 wristband has made it possible to gather real-time data from patients going about their daily lives. <a href="https://www.empatica.com/">Empatica’s website</a> now lists around 20 academic papers that use E4 in studies on subjects ranging from autism to resuscitation after a heart attack.</p>
<p>In 2012, Picard and researchers published a paper in <em>Neurology</em> that correlated greater responses on the wrist with longer suppression of brain waves on the scalp. This meant certain regions of the brain were experiencing hyperactivity while the cortex, which is near the scalp, was shutting down — a phenomenon observed in all SUDEP cases. (This has now become an important biomarker for SUDEP.) Other results have identified a critical window when someone may stop breathing after having a seizure.</p>
<p>Importantly for research, Picard says, the device specifically detects stress signals from the sympathetic response of the autonomic nervous system, which is commonly associated with the fight-or-flight responses indicative of stress and seizures. “When we measure the skin response, we are seeing signals that originate deep in the brain, from regions so far under the scalp that a traditional EEG cannot pick them up,” Picard says.</p>
<p>In that way, the E4 is also valuable in studying other neurological conditions such as autism, anxiety, depression, phobias, and post-traumatic stress disorder (PTSD), Picard adds. “A PTSD researcher, for instance, may use the device to more accurately study why and how a patient may experience heightened flight-or-fight responses,” she says.</p>
<p><strong>Staying in the medical space</strong></p>
<p>Empatica’s core technology traces back to 2007, when Picard and researchers developed iCalm, a similar EDA-measuring wristband. In 2009, Picard and former postdoc Rana el Kaliouby co-founded Affectiva to commercialize the wearable device, then called the Q Sensor, to be used for measuring stress associated with autism.</p>
<p>Then one day, a student borrowed two wristbands to monitor the stress levels of his little brother, who has autism. He put one wristband on each wrist. When Picard checked the data remotely from her computer, she noticed “a whopper of a response on one side and nothing on the other,” she says. “It was such a big response, I didn’t believe it was real.”</p>
<p>Nothing she did in her lab could reproduce such a response. However, the student had kept a diary and, sure enough, on the exact date and time of the “whopper” response, the brother had had a seizure. As it turns out, minutes before someone has a seizure, the hair on one arm may stand on end.</p>
<p>But Affectiva soon changed course and began developing software that monitored and quantified people’s emotions for market research. So Picard founded another startup, Physiio, “to help the technology grow in the medical space,” she says. This caught the attention of a small Italian stress-tracking wearable startup co-founded by Matteo Lai and Simone Tognetti. In 2014, the two companies merged to form Empatica, Inc., with Lai as CEO and Tognetti as chief technology officer.</p>
<p>Since then, Empatica has produced several iterations of the E4 for clinical use, with the most recent version released last year. But now the startup is “laser-focused” on bringing the Embrace to consumers, Picard says.</p>
<p>So does Picard — the consumer — use her own device? Yes, and she says the wristband has revealed interesting things about her own life. “The first time I wore this, I’m driving home and it’s going off, and I think, ‘I guess I’m letting myself get a little bent out of shape here,’” Picard says. “I’ve found it fabulous to learn about what’s going on with myself.”</p>
Empatica's wristband, called Embrace (pictured here), is “a consumer-looking, but medical-quality device” for monitoring stress and seizures, Rosalind Picard says. Courtesy of EmpaticaInnovation and Entrepreneurship (I&E), Startups, Media Lab, School of Architecture and Planning, Wearable sensors, Health care, Big data, Apps, Medical devicesMIT Media Lab and MIT Press launch Journal of Design and Sciencehttp://news.mit.edu/2016/mit-launch-journal-of-design-and-science-0224
Journal will offer a new, open-access alternative for academic publishing.Wed, 24 Feb 2016 17:22:56 -0500MIT Media Labhttp://news.mit.edu/2016/mit-launch-journal-of-design-and-science-0224<p>The MIT Media Lab and the MIT Press have jointly launched the <a href="http://jods.mitpress.mit.edu"><em>Journal of Design and Science</em></a> <em>(JoDS),</em> an online, open-access journal whose aim is to capture the antidisciplinary ethos of the MIT Media Lab while opening new connections between science and design.</p>
<p>“We are joining forces with the MIT Press to create a new model for academic publishing,” says Media Lab Director Joi Ito, who will lead the journal’s curatorial team.</p>
<p>Unlike journals that operate within a formal peer-review system, <em>JoDS</em> invites lively discussions across all fields of design and science,&nbsp;unconventional formats, and widespread participation, encouraging authors to engage in ongoing discussion with members of many different communities. The goal is to provide for a much broader array of perspectives, new pathways forward, and emergent topics for further research.</p>
<p>The first volume includes articles by Joi Ito, Media Lab professors Neri Oxman and Kevin Slavin, and inventor and scientist Danny Hillis. It is dedicated to Marvin Minsky, who <a href="http://news.mit.edu/2016/marvin-minsky-obituary-0125" target="_self">passed away last month</a>. “It is only appropriate that this publication be dedicated to Marvin,” says Ito. “He was a brilliant thinker and founding member of the Media Lab, who pushed disciplinary boundaries and constantly challenged the status quo.”</p>
<p>JoDS’s format is made possible by the new <a href="http://pubpub.media.mit.edu/pub/hello">PubPub</a> publishing platform, created by researchers Travis Rich and Thariq Shihipar in the Media Lab’s Viral Communications group, working with Andrew Lippman, who heads the group. PubPub provides rich commenting features and powerful, intuitive authoring tools. <em>JoDS</em> articles are authored directly within PubPub, which also enables easy integration of multimedia, images, and large data sets.</p>
<p>“This collaboration is a natural extension of the alignment between the Media Lab and the press,” says MIT Press Director Amy Brand. “It represents a new vision for the future of scholarly communication, as well as the importance of the conversation between scientists and designers."</p>
<p>In addition to online journal publication, future applications for the PubPub platform could include a new model for thesis submission, or a rich platform for ebook publishing.</p>
The MIT Media Lab and MIT Press have launched the Journal of Design and ScienceMedia Lab, MIT Press, Open access, Design, School of Architecture and Planning, Technology and societyA living, breathing textile aims to enhance athletic performancehttp://news.mit.edu/2016/living-breathing-textile-aims-to-enchance-athletic-performance-0216
Researchers in the Media Lab&#039;s bioLogic group have created a new form of performance fabric that combines biomaterials research with textile design.Tue, 16 Feb 2016 17:07:01 -0500Sharon Lacey | Arts at MIThttp://news.mit.edu/2016/living-breathing-textile-aims-to-enchance-athletic-performance-0216<p>Textile production has historically been a bellwether for innovations in manufacturing — from technological improvements such as the spinning jenny and the flying shuttle at the dawn of the Industrial Revolution to recent developments in electronic and reactive textiles by designers such as <a href="http://www.berzowska.com/" target="_blank">Joanna Berzowska</a> MS '99, who are transforming fabrics into wearable computers. Now,&nbsp;<a href="http://tangible.media.mit.edu/project/biologic/" target="_blank">bioLogic</a>, a research team in the <a href="http://tangible.media.mit.edu/" target="_blank">Tangible Media Group</a> within the MIT Media Lab, has created a completely new form of performance fabric that combines biomaterials research with textile design. BioLogic is growing living actuators and synthesizing responsive bio-skin in an era where, they declare, “bio is the new interface.” They say, “we are imagining a world where actuators and sensors can be grown rather than manufactured, being derived from nature as opposed to engineered in factories.”</p>
<p>Under the direction of&nbsp;<a href="http://web.media.mit.edu/~ishii/" target="_blank">Professor Hiroshi Ishii</a>, the bioLogic team has unearthed a new behavior of the ancient bacterium&nbsp;<em>Bacillus subtilis natto</em>: the expansion and contraction of the natto cells relative to atmospheric moisture. The team is capitalizing on this natural phenomenon by embedding the bacteria into fabric to ventilate garments. They harvest the animate natto cells in a bio lab and assemble them with a micron-resolution bio-printing system, transforming them into responsive fashion, a “second skin.” The synthetic bio-skin reacts to body heat and sweat, causing flaps around heat zones to open, enabling sweat to evaporate and cool down the body through an organic material flux.</p>
<p>Together with New Balance, bioLogic is applying this technology to creating sportswear that regulates athletes’ body temperatures, thereby enhancing performance. Lining Yao, who is responsible for concept creation, interaction design, and fabrication for bioLogic, explains, “We are trying to explore how the physical materials and physical environment can be smarter, more adaptive, and become part of us. This garment will understand when you sweat, and it will sense and open up to release your sweat, and close up to keep you warm again. A garment can become an interface that can communicate with your body. The reason we started to explore this bacteria is that we knew that in the natural world there are a lot of smart materials that are naturally responsive. It’s very sensitive to even tiny changes in the skin condition, so we thought an on-skin transformable textile would be a really interesting application.”</p>
<p>Beyond the industrial collaboration, a grant from the&nbsp;<a href="http://arts.mit.edu/opportunities/grants/" target="_blank">MIT Council for the Arts</a>&nbsp;enabled bioLogic to invite fashion and product designers from the Royal College of Art,&nbsp;<a href="http://showtime.arts.ac.uk/OksanaAnilionyte" target="_blank">Oksana Anilionyte</a>&nbsp;and&nbsp;<a href="http://www.helenesteiner.com/" target="_blank">Helene Steiner</a>, to bring the project to a new artistic level. Yao explains that bioLogic chose to focus their efforts on the more cutting-edge technological, artistic, or conceptual ideas, and hopes some of the pragmatic concerns — like washing and caring for garments made from the “bio-skin” — will be addressed by the wider design community who produce and use the fabric. The project has already piqued the interest of several fashion designers from Central Saint Martins and Parsons, who see a number of potential uses, including creating a garment for Korean women who fish and using this natural nanoactuator to explore other forms of clothing.</p>
<p>While this project appeals to fashion designers and those creating athletic attire, Ishii points out that the Tangible Media Group focuses on diverse actuated materials: “This is one specific instance. We are not really dedicated to fashion design or dance performance wear, but for this project we did specific experiments applying to those areas. We are devoted to the much more fundamental concept of&nbsp;‘<a href="http://tangible.media.mit.edu/project/radical-atoms/" target="_blank">radical atoms</a>.’&nbsp;Basically, we are interested in materials that artists and designers would use to express their ideas. For example, a product designer may use metal or glass or plastic. Computer designers may use a pixel in the computer screen, but that’s intangible. Physical materials are nice, but frozen; they’re dead. So we are interested in making materials that transform dynamically. That’s what we call ‘radical atoms.’”</p>
<p>Yao says this project aligned perfectly with the group’s vision of “human interaction with future dynamic materials.” She adds that “the general idea is not only how you can be inspired by nature, but how you can collaborate with nature.”</p>
<p>The Tangible Media Group is leading the project in collaboration with the MIT Department of Chemical Engineering, the Royal College of Art, and New Balance. Team members come from diverse backgrounds including design, art, science, and engineering. They are:</p>
<p><a href="http://web.media.mit.edu/~liningy/" target="_blank">Lining Yao</a>, concept creation, interaction design and fabrication, MIT Media Lab;</p>
<p><a href="http://www.wenwang-biofabrication.com/" target="_blank">Wen Wang</a>, biotechnology and material science, MIT Department of Chemical Engineering;</p>
<p><a href="http://www.guanyundesign.com/" target="_blank">Guanyun Wang</a>, industrial design and fabrication, MIT Media Lab and Zhejiang University;</p>
<p><a href="http://www.helenesteiner.com/" target="_blank">Helene Steiner</a>, interaction design, MIT Media Lab and Royal College of Art;</p>
<p>Chin-Yi Cheng, computational design and simulation, MIT Department of Architecture;</p>
<p><a href="http://ou-jifei.com/" target="_blank">Jifei Ou</a>, concept design and fabrication, MIT Media Lab;</p>
<p><a href="http://showtime.arts.ac.uk/OksanaAnilionyte" target="_blank">Oksana Anilionyte</a>, fashion design, MIT Media Lab and Royal College of Art; and</p>
<p><a href="http://tangible.media.mit.edu/person/hiroshi-ishii/" target="_blank">Hiroshi Ishii</a>, direction, Tangible Media Group, MIT Media Lab.</p>
bioLogic Second Skin from the MIT Media Lab's Tangible Media GroupPhoto: Tangible Media Group/MIT Media LabArts, Design, Bacteria, Media Lab, Athletics, Chemical engineering, BiomaterialsMIT Professional Education launches its first program in Taiwanhttp://news.mit.edu/2016/mit-professional-education-launches-first-taiwan-program-0216
Course led by two from the Media Lab focused on improving the livability of cities while dramatically reducing resource consumption.Tue, 16 Feb 2016 16:21:01 -0500MIT Professional Educationhttp://news.mit.edu/2016/mit-professional-education-launches-first-taiwan-program-0216<p>MIT Professional Education is expanding its global footprint through the launch of locally relevant professional short courses, most recently with “<a href="http://web.mit.edu/professional/short-programs/courses/beyond_smart_cities.html" target="_blank">Beyond Smart Cities</a>” in Taiwan. The course was held last month on the coattails of an urban mobility project announcement in Taiwan involving an MIT Media Lab research team headed by Principal Research Scientist Kent Larson. A day prior to the course, Taiwan Premier Mao Chi-kuo PhD '82, along with Larson, inaugurated a testing project in Taipei, the country’s capital, for the autonomous tricycle dubbed the Persuasive Electric Vehicle (PEV).</p>
<p>“We were lucky to have had the opportunity to launch our very first professional education program in Taiwan in conjunction with such a high profile MIT research project,” said Bhaskar Pant, executive director of MIT Professional Education. “Our course, already very popular at MIT and featuring the same faculty involved with the PEV project, addressed not only innovations in transportation but other aspects of living more intelligently in the cities of the future. A very important element in the course also was to include local guest lecturers who brought to light local challenges confronting Taiwan specifically.” &nbsp;&nbsp;</p>
<p>Led by Kent Larson, director of the MIT Media Lab’s Changing Places group, and Ryan C.C. Chin, managing director of the City Science Initiative at the MIT Media Lab, this course focused on improving the livability of cities while dramatically reducing resource consumption.</p>
<p>“The <a href="http://web.mit.edu/professional/short-programs/courses/beyond_smart_cities.html">Beyond Smart Cities</a> course explored innovative, new systems for architecture, energy, and food,” Larson explained. “We were honored to have several notable educators and business professionals from Taiwan join us. Professor Daphne Yuan from National Chengchi University spoke about creating a digital ecosystem for the elderly population; Chester Ho, chairman of Taifong Partners, discussed entrepreneurship opportunities and mindset in Taiwan; and Lee Bo Tin from Fab Dynamics described the state of urban manufacturing in the area.”&nbsp;</p>
<p>In the spirit of MIT’s motto, “mens et manus” or “mind and hand,” participants in the course not only had the chance to hear faculty speak about the latest trends on the topic, but also the opportunity to work actively in cross-generational groups to tackle real problems facing cities, an experience all but unheard of in Taiwan. “The hands-on group exercises were excellent,” a class participant noted. “Groups discussed issues they cared about passionately. MIT should definitely continue its tradition of the 'mind-and-hand' learning model here in Taiwan.”</p>
<p>Over the course of the next several years, MIT Professional Education plans to continue its work in Taiwan, as well as expand into new regions, including South Africa, Singapore, and South America.</p>
<p>“MIT has spawned thousands of new innovations and ideas over the years,” Pant says. “Through our International Programs, we hope to share that spirit of invention and knowledge with professionals around the world.”</p>
<p>Individuals interested in taking Beyond Smart Cities on MIT’s campus can <a href="http://web.mit.edu/professional/short-programs/courses/beyond_smart_cities.html" target="_blank">apply on the MIT Professional Education website</a>.</p>
Photo: Lily FuMIT Professional Education, Classes and programs, International initiatives, Asia, Media Lab, School of Architecture and PlanningImaging with an “optical brush”http://news.mit.edu/2016/imaging-optical-fibers-brush-0212
New imaging system uses an open-ended bundle of optical fibers — no lenses, protective housing needed.Fri, 12 Feb 2016 05:00:00 -0500Larry Hardesty | MIT News Officehttp://news.mit.edu/2016/imaging-optical-fibers-brush-0212<p>Researchers at the MIT Media Lab have developed a new imaging device that consists of a loose bundle of optical fibers, with no need for lenses or a protective housing.</p>
<p>The fibers are connected to an array of photosensors at one end; the other ends can be left to wave free, so they could pass individually through micrometer-scale gaps in a porous membrane, to image whatever is on the other side.</p>
<p>Bundles of the fibers could be fed through pipes and immersed in fluids, to image oil fields, aquifers, or plumbing, without risking damage to watertight housings. And tight bundles of the fibers could yield endoscopes with narrower diameters, since they would require no additional electronics.</p>
<p>The positions of the fibers’ free ends don’t need to correspond to the positions of the photodetectors in the array. By measuring the differing times at which short bursts of light reach the photodetectors — a technique known as “time of flight” — the device can determine the fibers’ relative locations.</p>
<p>In a commercial version of the device, the calibrating bursts of light would be delivered by the fibers themselves, but in experiments with their prototype system, the researchers used external lasers.</p>
<p>“Time of flight, which is a technique that is <a href="http://news.mit.edu/2015/algorithms-boost-3-d-imaging-resolution-1000-times-1201">broadly used</a> in our group, has never been used to do such things,” says Barmak Heshmat, a postdoc in the Camera Culture group at the Media Lab, who led the new work. “Previous works have used time of flight to extract depth information. But in this work, I was proposing to use time of flight to enable a new interface for imaging.”</p>
<p>The researchers reported their results today in <em>Scientific Reports</em>. Heshmat is first author on the paper, and he’s joined by associate professor of media arts and sciences Ramesh Raskar, who leads the Media Lab’s Camera Culture group, and by Ik Hyun Lee, a fellow postdoc.</p>
<p><strong>Travel time</strong></p>
<p>In their experiments, the researchers used a bundle of 1,100 fibers that were waving free at one end and positioned opposite a screen on which symbols were projected. The other end of the bundle was attached to a beam splitter, which was in turn connected to both an ordinary camera and a high-speed camera that can distinguish optical pulses’ times of arrival.</p>
<p>Perpendicular to the tips of the fibers at the bundle’s loose end, and to each other, were two ultrafast lasers. The lasers fired short bursts of light, and the high-speed camera recorded their time of arrival along each fiber.</p>
<p>Because the bursts of light came from two different directions, software could use the differences in arrival time to produce a two-dimensional map of the positions of the fibers’ tips. It then used that information to unscramble the jumbled image captured by the conventional camera.</p>
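<p>The calibration step can be sketched in a few lines: with pulses arriving from two perpendicular directions, each fiber tip’s x and y coordinates are proportional to that pulse’s extra travel time, and the recovered positions then dictate how to reorder the jumbled pixels. The following is a minimal illustration with made-up numbers, not the authors’ actual code:</p>

```python
# Sketch: locate fiber tips from the arrival times of two perpendicular
# calibration pulses, then unscramble the image they deliver.
C = 3e8  # speed of light in air, m/s

def tip_positions(arrival_x, arrival_y):
    """Per-fiber arrival times (s) of the x- and y-direction pulses ->
    (x, y) tip coordinates in meters, relative to the earliest tip."""
    x0, y0 = min(arrival_x), min(arrival_y)
    return [((tx - x0) * C, (ty - y0) * C)
            for tx, ty in zip(arrival_x, arrival_y)]

def unscramble(jumbled, positions):
    """Reorder pixel values captured at the detector array so they follow
    the spatial order of the fiber tips (sorted by y, then x)."""
    order = sorted(range(len(positions)),
                   key=lambda i: (positions[i][1], positions[i][0]))
    return [jumbled[i] for i in order]

# Three fibers, all at the same height; the x pulse reaches nearer tips
# first, so fiber 1's tip sits at x = 0 and fiber 0's tip 2 mm farther on.
arrival_x = [2e-3 / C, 0.0, 1e-3 / C]
arrival_y = [0.0, 0.0, 0.0]
pos = tip_positions(arrival_x, arrival_y)
image = unscramble(["right", "left", "middle"], pos)  # -> ["left", "middle", "right"]
```

The real system applies the same idea to 1,100 fibers at once.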
<p>The resolution of the system is limited by the number of fibers; the 1,100-fiber prototype produces an image that’s roughly 33 by 33 pixels. Because there’s also some ambiguity in the image reconstruction process, the images produced in the researchers’ experiments were fairly blurry.</p>
<p>But the prototype sensor also used off-the-shelf optical fibers that were 300 micrometers in diameter. Fibers just a few micrometers in diameter have been commercially manufactured, so for industrial applications, the resolution could increase markedly without increasing the bundle size.</p>
<p>In a commercial application, of course, the system wouldn’t have the luxury of two perpendicular lasers positioned at the fibers’ tips. Instead, bursts of light would be sent along individual fibers, and the system would gauge the time they took to reflect back. Many more pulses would be required to form an accurate picture of the fibers’ positions, but then, the pulses are so short that the calibration would still take just a fraction of a second.</p>
<p>“Two is the minimum number of pulses you could use,” Heshmat says. “That was just proof of concept.”</p>
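<p>The round-trip variant reduces to a one-line relation: a pulse that takes time t to travel down a fiber, reflect at the tip, and come back has covered the optical path twice, so the distance is d = vt/2, where v is the speed of light in the fiber. A toy calculation (the fiber speed of roughly 200,000 km/s is an assumed typical value for glass, not a figure from the paper):</p>

```python
def round_trip_distance(t, speed=2e8):
    """Distance (m) to a fiber tip given the round-trip time t (s) of a
    reflected pulse. speed is light's speed inside the fiber; 2e8 m/s is
    an assumed typical value for glass, about two-thirds of c."""
    return speed * t / 2.0

# A 1-nanosecond round trip corresponds to about 10 cm of fiber.
d = round_trip_distance(1e-9)
```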
<p><strong>Checking references</strong></p>
<p>For medical applications, where the diameter of the bundle — and thus the number of fibers — needs to be low, the quality of the image could be improved through the use of so-called interferometric methods.</p>
<p>With such methods, an outgoing light signal is split in two, and half of it — the reference beam — is kept locally, while the other half — the sample beam — bounces off objects in the scene and returns. The two signals are then recombined, and the way in which they interfere with each other yields very detailed information about the sample beam’s trajectory. The researchers didn’t use this technique in their experiments, but they did perform a theoretical analysis showing that it should enable more accurate scene reconstructions.</p>
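<p>The recombination step can be illustrated with equal-amplitude beams: the detected intensity |E_ref + E_sample|&#178; swings between fully constructive and fully destructive as the sample beam’s extra path length varies over a single wavelength, which is what makes interferometry so sensitive to the beam’s trajectory. A sketch, where the 1.55-micrometer wavelength is an assumed telecom value, not one from the paper:</p>

```python
import cmath

def combined_intensity(path_diff, wavelength=1.55e-6):
    """Detected intensity when equal-amplitude reference and sample beams
    recombine; path_diff is the sample beam's extra optical path (m).
    The 1.55-micrometer default wavelength is an assumed telecom value."""
    phase = 2 * cmath.pi * path_diff / wavelength
    e_ref = 1.0
    e_sample = cmath.exp(1j * phase)
    return abs(e_ref + e_sample) ** 2  # 0 (destructive) to 4 (constructive)

bright = combined_intensity(0.0)        # equal paths: fully constructive, 4.0
dark = combined_intensity(1.55e-6 / 2)  # half-wavelength offset: ~0.0
```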
<p>“It is definitely interesting and very innovative to combine the knowledge we now have of time-of-flight measurements and computational imaging,” says Mona Jarrahi, an associate professor of electrical engineering at the University of California at Los Angeles. “And as the authors mention, they’re targeting the right problem, in the sense that a lot of applications for imaging have constraints in terms of environmental conditions or space.”</p>
<p>Relying on laser light piped down the fibers themselves “is harder than what they have shown in this experiment,” she cautions. “But the physical information is there. With the right arrangement, one can get it.”</p>
<p>“The primary advantage of this technology is that the end of the optical brush can change its form dynamically and flexibly,” adds Keisuke Goda, a professor of chemistry at the University of Tokyo. “I believe it can be useful for endoscopy of the small intestine, which is highly complex in structure.”</p>
The fibers of a new “optical brush” are connected to an array of photosensors at one end and left to wave free at the other.Image: Barmak HeshmatResearch, Optics, Imaging, Media Lab, School of Architecture and PlanningFive with MIT ties tapped for Inventors Hall of Famehttp://news.mit.edu/2016/five-from-mit-inducted-inventors-hall-fame-0205
MIT professor and four alumni honored for inventing electronic ink, the spanning tree protocol, and Sketchpad, a human-machine graphical communication system.Fri, 05 Feb 2016 18:00:01 -0500Nancy DuVergne Smith | MIT Alumni Associationhttp://news.mit.edu/2016/five-from-mit-inducted-inventors-hall-fame-0205<p>Nearly one-third of the 2016 National Inventors Hall of Fame inductees hail from MIT. On May 5, the National Inventors Hall of Fame (NIHF), in partnership with the United States Patent and Trademark Office, will recognize 16 individuals, described as having a revolutionary impact on the nation, at a ceremony in Washington.</p>
<p>One MIT professor worked with two other MIT alumni to create electronic ink. Two additional alumni honorees worked on projects involving Internet advances, including the spanning tree protocol (STP) and the computer graphics breakthrough Sketchpad.</p>
<p><strong>Electronic Ink</strong></p>
<p><a href="http://invent.org/inductees/albert-jd/" target="_blank">Jonathan (JD) Albert</a> '97, a mechanical engineering major, and <a href="http://invent.org/inductees/comiskey-barrett/" target="_blank">Barrett Comiskey</a> '97, a mathematics major, worked with MIT Associate Professor <a href="http://invent.org/inductees/jacobson-joseph/" target="_blank">Joseph Jacobson</a> PhD '93 to develop a changeable display for as many books as could be stored in a device’s memory. Albert, Comiskey, and Jacobson combined their skills from different disciplines, and in January 1997 they completed a working prototype of electronic ink, the technology cornerstone of the e-reader and e-book industry.</p>
<p>Today, Jacobson is head of the MIT Media Lab’s Molecular Machines research group. Comiskey has relocated to Asia to work on bridging the digital divide for billions of people in developing countries. And Albert teaches product design, engineering, and entrepreneurship at the University of Pennsylvania and is the director of engineering for Bresslergroup, a product design and development firm.</p>
<p><strong>Internet advances: STP, reliable and scalable routing</strong></p>
<p><a href="http://invent.org/inductees/perlman-radia/" target="_blank">Radia Perlman</a> '73, SM '76, PhD '88 has played a key role in driving the growth and development of the Internet. Her best-known contribution came in 1985: the spanning tree protocol (STP), which transformed Ethernet from a technology limited to a few hundred nodes confined in a single building, into a technology that can create large networks. Perlman, who received her BS and MS degrees from the MIT Department of Mathematics and her PhD from the Department of Electrical Engineering and Computer Science (EECS), holds over 100 patents and has received many awards, including induction into the National Academy of Engineering and the Internet Hall of Fame, and lifetime achievement awards from the Association for Computing Machinery’s SIGCOMM and Usenix.</p>
<p>Today, Perlman — author of "Interconnections," a widely read text on network routing and bridging, and coauthor of "Network Security" — is a fellow at EMC Corporation.</p>
<p><strong>Sketchpad: A human-machine graphical communication system</strong></p>
<p><a href="http://invent.org/inductees/sutherland-ivan/" target="_blank">Ivan Sutherland</a> PhD '63, an EECS graduate who is considered the father of computer graphics, invented Sketchpad, a human-machine graphical communication system that broke new ground in 3-D computer modeling,&nbsp;visual simulation, and human-computer interaction. Sutherland’s invention enabled users to design and draw in real time directly on the computer display, using a light pen.</p>
<p>Today, Sutherland leads research in asynchronous systems — computers with no global clock — at Portland State University’s Asynchronous Research Center, which he founded in 2008 with Marly Roncken.</p>
Five of the 16 individuals in the 2016 class of National Inventors Hall of Fame inductees hail from MIT. They include Associate Professor Joseph Jacobson (top left), Jonathan (JD) Albert '97 (top right), Radia Perlman '73, SM '76, PhD '88 (center), Barrett Comiskey '97 (bottom left), and Ivan Sutherland PhD '63 (bottom right).Images courtesy of the National Inventors Hall of Fame.Invention, inventions and innovations, Awards, honors and fellowships, Faculty, Alumni/ae, Mathematics, Electrical Engineering & Computer Science (eecs), Mechanical engineering, Innovation and Entrepreneurship (I&E), Media Lab, School of Engineering, School of Science, School of Architecture and PlanningEdward Boyden wins BBVA Foundation Frontiers of Knowledge Awardhttp://news.mit.edu/2016/edward-boyden-wins-bbva-foundation-frontiers-knowledge-award-0127
Three neuroscientists share biomedicine prize for development of optogenetics.Wed, 27 Jan 2016 18:09:01 -0500Julie Pryor | McGovern Institute for Brain Researchhttp://news.mit.edu/2016/edward-boyden-wins-bbva-foundation-frontiers-knowledge-award-0127<p>Edward S. Boyden, a professor of media arts and sciences, biological engineering, and brain and cognitive sciences at MIT, has won the <a href="http://www.fbbva.es/TLFU/tlfu/ing/microsites/premios/fronteras/index.jsp" target="_blank">BBVA Foundation Frontiers of Knowledge Award in Biomedicine</a> for his role in the development of <a href="http://mcgovern.mit.edu/news/videos/optogenetics-a-light-switch-for-neurons/" target="_blank">optogenetics</a>, a technique for controlling brain activity with light. Gero Miesenböck of the University of Oxford and Karl Deisseroth of Stanford University were also honored with the prize for their role in developing and refining the technique.<br />
&nbsp;<br />
The BBVA Foundation Frontiers of Knowledge Awards are given annually for “outstanding contributions and radical advances in a broad range of scientific, technological, and artistic areas.” The €400,000 prize in the category of biomedicine will be shared among the three neuroscientists.<br />
&nbsp;<br />
“If we imagine the brain as a computer, optogenetics is a keyboard that allows us to send extremely precise commands,” says Boyden, a faculty member at the MIT Media Lab with a joint appointment at MIT's McGovern Institute for Brain Research. "It is a tool whereby we can control the brain with exquisite precision.”<br />
&nbsp;<br />
Boyden joins an illustrious list of prize laureates including physicist Stephen Hawking and artificial intelligence pioneer <a href="http://news.mit.edu/2016/marvin-minsky-obituary-0125" target="_self">Marvin Minsky</a> of MIT, who died on Jan. 24.<br />
&nbsp;<br />
The BBVA Foundation will host the winners at an awards ceremony on June 21 at the foundation’s headquarters in Madrid, Spain.<br />
&nbsp;<br />
The BBVA Foundation promotes, funds, and disseminates world-class scientific research and artistic creation, in the conviction that science, culture and knowledge hold the key to better opportunities for all world citizens. The Foundation designs and implements its programs in partnership with some of the leading scientific and cultural organizations in Spain and abroad, striving to identify and prioritize those projects with the power to significantly advance the frontiers of the known world.<br />
&nbsp;<br />
The juries in each of eight categories are made up of leading international experts in their respective fields, who arrive at their decisions in a wholly independent manner, applying internationally recognized metrics of excellence. The BBVA Foundation is aided in the organization of the awards by the Spanish National Research Council (CSIC).</p>
Edward BoydenPhoto: Bryce VickmarkFaculty, Awards, honors and fellowships, Optogenetics, McGovern Institute, Media Lab, Biological engineering, Brain and cognitive sciences, School of Architecture and Planning, School of Engineering, School of ScienceMarvin Minsky, “father of artificial intelligence,” dies at 88http://news.mit.edu/2016/marvin-minsky-obituary-0125
Professor emeritus was a co-founder of CSAIL and a founding member of the Media Lab.Mon, 25 Jan 2016 20:24:22 -0500MIT Media Labhttp://news.mit.edu/2016/marvin-minsky-obituary-0125<p>Marvin Minsky, a mathematician, computer scientist, and pioneer in the field of artificial intelligence, died at Boston’s Brigham and Women’s Hospital on Sunday, Jan. 24, of a cerebral hemorrhage. He was 88.</p>
<p>Minsky, a professor emeritus at the MIT Media Lab, was a pioneering thinker and the foremost expert on the theory of artificial intelligence. His 1985 book “The Society of Mind” is considered a seminal exploration of intellectual structure and function, advancing understanding of the diversity of mechanisms interacting in intelligence and thought. Minsky’s last book, “The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind,” was published in 2006.</p>
<p>Minsky viewed the brain as a machine whose functioning can be studied and replicated in a computer — which would teach us, in turn, to better understand the human brain and higher-level mental functions: How might we endow machines with common sense — the knowledge humans acquire every day through experience? How, for example, do we teach a sophisticated computer that to drag an object on a string, you need to pull, not push — a concept easily mastered by a two-year-old child?</p>
<p>"Very few people produce seminal work in more than one field; Marvin Minsky was that caliber of genius," MIT President L. Rafael Reif says. "Subtract his contributions from MIT alone and the intellectual landscape would be unrecognizable: without CSAIL, without the Media Lab, without the study of artificial intelligence and without generations of his extraordinarily creative students and protégés. His curiosity was ravenous. His creativity was beyond measuring. We can only be grateful that he made his intellectual home at MIT.”</p>
<p>A native New Yorker, Minsky was born on Aug. 9, 1927, and entered Harvard University after returning from service in the U.S. Navy during World War II. After graduating from Harvard with honors in 1950, he attended Princeton University, receiving his PhD in mathematics in 1954. In 1951, his first year at Princeton, he built the first neural network simulator.</p>
<p>Minsky joined the faculty of MIT’s Department of Electrical Engineering and Computer Science in 1958, and co-founded the Artificial Intelligence Laboratory (now the <a href="http://csail.mit.edu">Computer Science and Artificial Intelligence Laboratory</a>) the following year. At the AI Lab, he aimed to explore how to endow machines with human-like perception and intelligence. He created robotic hands that can manipulate objects, developed new programming frameworks, and wrote extensively about philosophical issues in artificial intelligence.</p>
<p>“Marvin Minsky helped create the vision of artificial intelligence as we know it today,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. “The challenges he defined are still driving our quest for intelligent machines and inspiring researchers to push the boundaries in computer science.”</p>
<p>Minsky was convinced that humans will one day develop machines that rival our own intelligence. But frustrated by a shortage of both researchers and funding in recent years, he cautioned, “How long this takes will depend on how many people we have working on the right problems.”</p>
<p>In 1985, Minsky became a founding member of the MIT Media Lab, where he was named the Toshiba Professor of Media Arts and Sciences, and where he continued to teach and mentor until recently.</p>
<p>Professor Nicholas Negroponte, co-founder and chairman emeritus of the Media Lab, says: “Marvin talked in riddles that made perfect sense, were always profound and often so funny that you would find yourself laughing days later. His genius was so self-evident that it defined ‘awesome.’ The Lab bathed in his reflected light.”</p>
<p>In addition to his renown in artificial intelligence, Minsky was a gifted pianist — one of only a handful of people in the world who could improvise fugues, the form of polyphonic counterpoint that distinguishes Western classical music. His influential 1981 paper “Music, Mind and Meaning” illuminated the connections between music, psychology, and the mind.</p>
<p>Other achievements include Minsky’s role as the inventor of the earliest confocal scanning microscope. He was also involved in the inventions of the first “turtle,” or cursor, for the LOGO programming language, with Seymour Papert, and the “Muse” synthesizer for musical variations, with Ed Fredkin.</p>
<p>Minsky received the world’s top honors for his pioneering work and mentoring role in the field of artificial intelligence, including the A.M. Turing Award — the highest honor in computer science — in 1969.</p>
<p>In addition to the Turing Award, Minsky received honors over the years including the Japan Prize; the Royal Society of Medicine’s Rank Prize (for Optoelectronics); the Optical Society of America’s R.W. Wood Prize; MIT’s James R. Killian Jr. Faculty Achievement Award; the Computer Pioneer Award from the IEEE Computer Society; the Benjamin Franklin Medal; and, in 2014, the Dan David Prize in the future time dimension for “Artificial Intelligence: The Digital Mind,” and the BBVA Foundation Frontiers of Knowledge Lifetime Achievement Award.</p>
<p>Minsky is survived by his wife, Gloria Rudisch&nbsp;Minsky, MD, and three children: Henry, Juliana, and Margaret Minsky. The family requests that memorial contributions be directed to the Marvin Minsky Foundation, which supports research in artificial intelligence, including support for graduate students.</p>
<p>A celebration of Minsky’s life will be held at the MIT Media Lab later this year.</p>
Marvin MinskyPhoto: Marie CosindasObituaries, Media Lab, Computer Science and Artificial Intelligence Laboratory (CSAIL), Faculty, Artificial intelligence, Electrical Engineering & Computer Science (eecs), School of Engineering, School of Architecture and PlanningWatch your tonehttp://news.mit.edu/2016/startup-cogito-voice-analytics-call-centers-ptsd-0120
Voice-analytics software helps customer-service reps build better rapport with customers. Wed, 20 Jan 2016 00:00:00 -0500Rob Matheson | MIT News Officehttp://news.mit.edu/2016/startup-cogito-voice-analytics-call-centers-ptsd-0120<p>Customer service calls can be frustrating for consumers and agents alike. But MIT spinout Cogito believes it can use behavioral analytics to make those experiences less onerous.</p>
<p>Cogito has developed voice-analytics software for call centers — refined through years of research that focused on human behavior —&nbsp;that tracks, in real-time, voice patterns of customers and agents, and offers feedback to make the conversations more productive.</p>
<p>By doing so, Cogito also aims to make millions of call center workers happier and more productive. According to the U.S. Bureau of Labor Statistics, about 5 million of the 146 million workers in the U.S. are employed in call centers, or roughly one out of every 29 American workers.</p>
<p>In November, Cogito secured funding to develop technology for customer-service applications. The company also continues its history of using the technology to monitor mental health.</p>
<p>In December, Cogito partnered with the U.S. Department of Veterans Affairs to detect signs of post-traumatic stress disorder (PTSD) in returning soldiers. For this and other mental-health applications, the company created a mobile app that passively monitors smartphone sensors to detect behavioral information from voice recordings and texting, while prompting participants to fill out surveys about their mental health. Analyzing this data can reveal behavioral patterns, such as withdrawal or lethargy, that indicate a user’s mental health.</p>
<p>If symptoms are detected, “we will develop feedback mechanisms so that organizations, individuals, and care teams that care for [these] populations can get ahead of risks,” says co-founder and CEO Joshua Feast MBA ’07.</p>
<p><strong>Bridging the communication gap</strong></p>
<p>Each day, companies field hundreds of millions of customer phone calls, which, according to Cogito, shape customers’ decisions about buying goods and services.</p>
<p>Cogito created software called Cogito Dialog that monitors speech subtleties, such as long pauses, interruptions, conversation flow, vocal strain, or speedy chatter. Analyzing voice signals, the software determines customer engagement by tracking, for instance, if callers sound annoyed, disinterested, or confused. Speaking fast or interrupting, for instance, may indicate annoyance; an unusual series of pregnant pauses could indicate disapproval or lack of comprehension.</p>
<p>The software will also notify the agent if they’re building rapport with a customer, accounting for various voice signals, such as proper pacing, speaking with confidence, and expressing empathy for the customers’ situation.</p>
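<p>Signals like these can be derived from little more than timestamped speaker turns. The sketch below counts interruptions (overlapping turns) and long pauses and computes each speaker’s share of talk time; the feature names and thresholds are illustrative, not Cogito’s actual model:</p>

```python
def conversation_features(turns, long_pause=2.0):
    """Extract simple vocal-interaction features from a list of turns,
    each (speaker, start_sec, end_sec), sorted by start time.
    Returns counts of interruptions and long pauses, plus talk-time share.
    The 2-second long-pause threshold is an illustrative assumption."""
    interruptions = 0
    long_pauses = 0
    talk = {}
    prev_end = None
    prev_speaker = None
    for speaker, start, end in turns:
        talk[speaker] = talk.get(speaker, 0.0) + (end - start)
        if prev_end is not None and speaker != prev_speaker:
            gap = start - prev_end
            if gap < 0:
                interruptions += 1   # new speaker started before the last finished
            elif gap > long_pause:
                long_pauses += 1     # a pregnant pause before the reply
        prev_end, prev_speaker = end, speaker
    total = sum(talk.values())
    return {"interruptions": interruptions,
            "long_pauses": long_pauses,
            "talk_share": {s: t / total for s, t in talk.items()}}

feats = conversation_features([
    ("agent", 0.0, 5.0),
    ("customer", 4.5, 6.0),   # starts before the agent finishes: interruption
    ("agent", 9.0, 12.0),     # 3 s of silence first: a long pause
])
```

A dominant agent would show up here as a lopsided talk share; a confused customer as a growing count of long pauses.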
<p>Feast says the software represents a “win-win” for customer service: It provides objective guidance for employees to be more productive and feel more valuable. In turn, customers are happier speaking with empathetic, engaged workers. “Through our voice analysis, we can help bridge the communication gap between customers and agents,” he says.</p>
<p>In a recent Cogito case study with a large health care insurance provider, agents using the software spoke with 300,000 members, and the software revealed issues it could address. Members perceived agents as too dominating in conversations and so avoided signing up for valuable services; meanwhile, agents were spending too much time on uninterested members. After the study, the firm saw a 4 percent increase in customer enrollment, which added up to a few million dollars in profits. Call times dropped 23 percent and customer engagement during the calls increased 25 percent.</p>
<p><strong>Quantifying “honest signals”</strong></p>
<p>Cogito’s aim is to provide a quantifiable aid for “natural intuition,” says Cogito co-founder and MIT Media Lab professor Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences and director of the Human Dynamics Laboratory. “It’s aiding that intuitive understanding we have when we listen to people —&nbsp;helping people do that better,” he says.</p>
<p>Pentland spent years developing the core technology for Cogito at MIT. In 2001, Pentland was in India launching Media Lab Asia, a program to bring advanced information technologies to the region. Initial meetings with his co-founders were rife with disagreements and misunderstandings.</p>
<p>“I noticed a lot of the meetings we had, particularly the board of directors, were awful,” Pentland says, laughing. “Although the words were all good, how people said things just didn’t work.”</p>
<p>Invested in measuring how people talk —&nbsp;not what they say — Pentland began a years-long study into quantifying “honest signals,” subtle cues in speech pattern, tone, and body language that determine conversation outcomes.</p>
<p>To do so, Pentland and other MIT researchers developed sensor-packed name badges, called sociometers, which tracked patterns in people’s movements and voices during conversations. “You could see if someone was listening, if someone was interested, who was dominant in a conversation — all by listening to how they said things,” Pentland says.</p>
<p>Analyzing this data, without knowing the content of the conversations, the researchers predicted with 70 to 80 percent accuracy the outcomes of interactions such as job interviews, investment pitches, and even scoring a second date.</p>
<p>Other studies using the technology involved monitoring depression symptoms in patients, and tracking conversations between patients and doctors to ensure both parties understood each other.</p>
<p>Cogito was launched in 2007, when Feast, then a student at the MIT Sloan School of Management, took a class with Pentland&nbsp;“and thought it would be wonderful to use the concepts to help large organizations manage their customers,” Feast says. (By this time, they had ditched the sociometers for equally sensor-packed smartphones, which had just hit the market.)</p>
<p><strong>Seeing the risks</strong></p>
<p>In the beginning, the company focused primarily on health care. Numerous grants, including from the U.S. Department of Veterans Affairs, the Defense Advanced Research Projects Agency, the National Institute of Mental Health, and Massachusetts General Hospital have helped Cogito develop its app to monitor depression symptoms in patients.</p>
<p>These applications helped the company prove out its technology for customer service, Feast says. “Behavioral, and specifically voice, analytics are very complex,” he says. “If you want to help improve the phone conversations between parties, you need to ensure you have an application that truly works and can interpret human behavior. That is why our background and continued innovation in behavioral health is so unique and so valuable.”</p>
<p>On April 15, 2013, Cogito found an unexpected use for the technology: monitoring how people with mental illnesses responded to the Boston Marathon bombing.</p>
<p>At the time, Cogito was conducting a DARPA-funded clinical trial on volunteers with PTSD, bipolar disorder, or other mental illnesses. They compared the behavior of the participants before and after the bombing, analyzing, for example, changes in voice and sleeping patterns, and increases in physical and social isolation — as indicated by lack of texting, calls, or movement.</p>
<p>In the two weeks following the incident, the company found survey participation dropped by roughly 50 percent, potentially indicating PTSD-related withdrawal. Participants who did respond to surveys reported a 14 percent increase in severity in attitudes and behaviors linked with depression and PTSD.</p>
<p>Currently, Cogito is seeking to publish a more comprehensive paper on the study, which the company hopes will provide insight into the psychological effects of people with mental illnesses following population-level trauma, and help identify the factors that make some people more resilient than others. “We think it’s important for the world that we put that information out there,” Feast says.</p>
<p>In the future, Feast says Cogito sees the technology as having more far-reaching implications for improving conversations. “[T]he platform is capable of analyzing conversations between any parties and can ultimately help people communicate more effectively,” he says.</p>
MIT spinout Cogito developed voice-analytics software for call centers that tracks and analyzes voice patterns of customers and agents — such as interruptions and rapid speech — and offers real-time feedback to make the conversations more productive.Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Faculty, Big data, Data, Behavior, Analytics, Software, Psychology, Behavioral economics, Media Lab, School of Architecture and Planning, Sloan School of Management, Mental health