MIT News - Media Lab
http://newsoffice.mit.edu/rss/topic/media-lab
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Thu, 23 Jul 2015 00:00:00 -0400
Understanding economic behavior through hygiene
http://newsoffice.mit.edu/2015/student-profile-reshmaan-hussam-0723
PhD student Reshmaan Hussam’s study of Bangladeshis’ economic behavior leads to research on hand-washing.
Thu, 23 Jul 2015 00:00:00 -0400
Julia Sklar | MIT News correspondent
http://newsoffice.mit.edu/2015/student-profile-reshmaan-hussam-0723
<p>Graduate student Reshmaan Hussam has always seen economics as more than a collection of numbers: For her, it also entails history, health, and human behavior. Now, as a fifth-year PhD student in economics at MIT, she applies this outlook to understanding sanitation and hygiene behavior in the developing world, with an eye toward effecting policy and behavioral change.</p>
<p>Economics first piqued Hussam’s curiosity in high school, when a summer course exposed her to the experimental and behavioral aspects of the field; since then, she’s kept empathy at the forefront of her work in modeling human economic interactions. And after picking up Max Weber’s “The Protestant Ethic and the Spirit of Capitalism” in college, she became acutely interested in how religion impacts economic choices.</p>
<p>Growing up with an uncle who managed a microfinance institution in Bangladesh — where Hussam still has family — gave her early exposure to this particular population’s financial needs and behaviors; as an undergraduate at MIT, she soon turned her research toward development economics. Working with Abhijit Banerjee, the Ford International Professor of Economics at MIT, and Nava Ashraf at Harvard Business School, Hussam embarked on a project to understand how religious sense informs microcredit savings decisions amongst the poor in Bangladesh.</p>
<p>But she quickly found the topic difficult to study: There were many other factors at play in this type of economic decision-making, not least of which were gender roles within a household. Hussam found that while microloans are often marketed to women, their husbands routinely handle the funds. Studying money-saving behavior was nearly impossible when it was unclear whose behavior was being studied — the borrower’s, or her husband’s.</p>
<p>“There is so much valuable work to be done in women’s financial and social empowerment, but how do we capture the right outcomes?” Hussam says. “Maybe there are more measurable, tangible, or direct ways to improve the well-being of these individuals.”</p>
<p><strong>Health takes center stage</strong></p>
<p>Among the many factors that affect economic decision-making is health. Hussam quickly realized that a key means to self-empowerment is empowerment in health and hygiene — where women, particularly mothers, often play a significant role.</p>
<p>“When you’re sick, that becomes your entire focus,” she says. “Repeated, preventable illnesses — with which the developing world is too familiar — have huge, long-term physical and cognitive consequences. Education, labor, and financial security suffer — all of which are channels to self-determination and empowerment.”</p>
<p>Hussam is no stranger to the health concerns of the rural poor in Bangladesh; her father, a chemist, designs inexpensive arsenic filters that are sent to the country to provide people with safe drinking water. But despite the availability of such filters and other sanitation technology, child and infant mortality from preventable bacterial and viral contamination remains high. So she turned her research focus to a very simple, but far-reaching, hygiene behavior: handwashing with soap. It is, Hussam says, a low-cost, high-return activity that can vastly improve quality of life and equip people with the health stability to focus on other activities, such as education or building up financial savings.</p>
<p><strong>Setting up the study</strong></p>
<p>She began working with MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) and a public health organization in the Indian state of West Bengal to design a way to study the barriers that discourage many Bengalis from taking up handwashing with soap.</p>
<p>“Every home has soap, and everyone knows that handwashing with soap is important, yet hardly anyone does it,” Hussam says. “Existing public health campaigns don’t ask why. If we want to see progress on these simple but valuable preventable health activities, we need to understand the behavioral reasons for why people aren’t taking up [healthy habits].”</p>
<p>Hussam designed a study to find out what would help transform handwashing into a habit in West Bengal. Working with the MIT Media Lab, she designed a soap dispenser with a simple sensor that can log when and how often the dispenser is used. Starting this August, Hussam will give soap to 3,000 households in West Bengal, 1,400 of which will receive the sensor-equipped dispensers.</p>
<p>Among the 1,400 households using the sensing dispenser, some will be told that someone will be monitoring their handwashing and reporting behavior to the household; others will be given incentives to wash their hands; and still others will be reminded to wash by an alarm clock set for dinnertime. After three months, all incentives, monitoring, and reminders will officially stop, but the dispensers will still log activity.</p>
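The randomized design described above can be sketched in code. The article specifies only the totals (3,000 households, 1,400 with sensor-equipped dispensers) and the three kinds of encouragement; the arm labels, the equal split across the encouragement arms, and all variable names below are illustrative assumptions, not details from the study.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible

N_TOTAL, N_SENSOR = 3000, 1400  # totals stated in the article
# Arm labels are illustrative, based on the three encouragements described.
ARMS = ["monitoring", "incentive", "alarm_reminder"]

households = list(range(N_TOTAL))
random.shuffle(households)  # randomize before assignment

assignment = {}
# The 1,400 sensor households are split across the encouragement arms;
# an equal split is an assumption — the article does not give proportions.
for i, hid in enumerate(households[:N_SENSOR]):
    assignment[hid] = ARMS[i % len(ARMS)]
# The remaining 1,600 households receive soap without a sensing dispenser.
for hid in households[N_SENSOR:]:
    assignment[hid] = "soap_only"

print(Counter(assignment.values()))
```

A design like this lets the dispenser logs be compared across arms after the three-month intervention ends, since the sensors keep recording either way.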
<p>Hussam will continue to track handwashing in the households for a year. She hopes to discover what type of encouragement works best to inspire habitual handwashing, ensuring that it continues once external interventions are gone. Her eventual findings, she believes, will inform evidence-based policy that is tailored to human behavior, and therefore more likely to effect change on a large scale.</p>
<p>This year, she wraps up her PhD at MIT; she then plans to join Yale University as a postdoc. There, she will continue her handwashing study through J-PAL’s sister lab, Innovations for Poverty Action.</p>
<p>Hussam intends to pursue development economics research in academia, working in a field where her research will be policy-relevant, never just theoretical. And she hasn’t lost sight of her initial interest in the social norms that impact women’s empowerment in developing nations.</p>
<p>“All of these aspects of welfare are completely intertwined,” she says. “I have a feeling I’ll be coming full circle eventually.”</p>
Reshmaan Hussam
Research, SHASS, Economics, Profile, Students, graduate/postdoctoral, Developing countries, Asia, Public health, Bangladesh, India
Tackling challenges at the intersection of engineering and sports
http://newsoffice.mit.edu/2015/engineering-challenges-sports-tech-day-0624
Sports companies connect with engineering students and faculty at the second annual STE@M Day.
Wed, 24 Jun 2015 11:48:01 -0400
Department of Mechanical Engineering
http://newsoffice.mit.edu/2015/engineering-challenges-sports-tech-day-0624
<p>The second annual STE@M Day, hosted by Sports Technology and Education @ MIT (STE@M), took place last month in the MIT Z-Center, following a day of lab tours for STE@M affiliate companies.</p>
<p>Representatives from sports companies such as Patagonia, Nike, Eastman, Sheico, and PGA, along with sports-related foundations and MIT startups — including Ministry of Supply, a company co-founded by Department of Mechanical Engineering (MechE) alumnus Kevin Rustagi ’11 and Department of Chemical Engineering alumnus Gihan Amarasiriwardena ’11 — toured a combination of 14 labs across six departments, including MechE, Civil and Environmental Engineering, Electrical Engineering and Computer Science, Materials Science and Engineering, Aeronautics and Astronautics (AeroAstro), and the Media Lab. They learned about everything from xylem filters in Professor Rohit Karnik’s lab and optically responsive fibers in Professor Mathias Kolle’s lab to self-assembly in Department of Architecture Professor Skylar Tibbits’ lab and 3-D body mapping in AeroAstro Professor Dava Newman’s lab.</p>
<p>“The heart of STE@M is a community of people who are passionate about tackling the challenges that lie at the intersection of engineering and sports,” says the group’s leader, Professor Peko Hosoi. "STE@M Day is an event to celebrate this community, to bring all of our partners to campus, and to catalyze interactions between students, faculty, industry affiliates, and athletes whose paths might not normally intersect.”</p>
<p>The daylong event connected affiliate companies with relevant researchers and enabled engineering students interested in sports technology to network with potential employers, who presented some of their state-of-the-art technologies during an engineering “petting zoo” that took place in the afternoon. Students, faculty, and staff had the chance to get up close and personal with the technology and converse with company representatives about the engineering challenges they faced in research and development. Students from the course 8.01S (Sports Physics) also showcased their experiments and final projects to attendees of the petting zoo.</p>
<p>“The spontaneous, personal conversations that arise at these events are absolutely essential for the genesis of fresh, relevant new projects and ideas,” Hosoi says.</p>
STE@M Day at MIT
Sports, Mechanical engineering, Civil and environmental engineering, Aeronautical and astronautical engineering, Electrical Engineering & Computer Science (eecs), Media Lab, Students, Special events and guest speakers, Industry
Education is focus of Lemann gift to MIT
http://newsoffice.mit.edu/2015/lemann-gift-public-education-brazil-0616
New partnership will support high-impact initiatives aimed at improving public education in Brazil.
Tue, 16 Jun 2015 14:53:01 -0400
Elizabeth Thomson | Resource Development
http://newsoffice.mit.edu/2015/lemann-gift-public-education-brazil-0616
<p>MIT has received a gift from the <a href="http://www.fundacaolemann.org.br/en/" target="_blank">Lemann Foundation of Brazil</a> to partner with the foundation on three high-impact initiatives aimed at improving public education for all in Brazil.</p>
<p>The gift will specifically support:</p>
<ul>
<li>a collaboration with the MIT Media Lab to cultivate creative learning in Brazil, providing young people with new opportunities for coding, making, and learning with new technologies;</li>
<li>three fellowships for the support of MIT graduate students from Brazil or working in Brazil to study education, educational technology, and related areas of focus; and</li>
<li>a seed fund for collaborative research between MIT faculty and their counterparts at Brazilian universities and research centers.</li>
</ul>
<p>“By pairing MIT’s focus on educational innovation and the Lemann Foundation’s expertise in Brazilian educational reform, this collaboration will create exciting research avenues for our faculty and students that will help make education more effective and engaging for learners across Brazil,” MIT President L. Rafael Reif says.</p>
<p>"We truly believe in this partnership as an excellent way to encourage talented Brazilian researchers who are committed to helping foster change in education by focusing their studies on educational technologies," says Denis Mizne, CEO of the Lemann Foundation.</p>
<p><strong>Creative learning</strong></p>
<p>The Media Lab is a world leader in the development of new technologies and strategies for cultivating creative learning. In its collaboration with the Lemann Foundation, the lab will help implement these ideas in Brazil.</p>
<p>For example, the Media Lab views coding as a new form of literacy. Just as learning to write is valuable for everyone (not just those aspiring to write professionally), the lab believes that learning to code is similarly fundamental. To that end, it has developed a variety of tools, such as Scratch, a programming language that enables young people to create their own interactive stories, games, and animations — and to share their projects with one another in an online community.</p>
<p>In the MIT-Lemann collaboration, researchers aim to promote a culture of coding in Brazil by, for example, developing new Scratch features, resources, and activities to support the scaling of coding in Brazil, especially for children.</p>
<p>In addition, the Media Lab has been at the forefront of developing the intellectual foundations of the maker movement and some of its leading technologies, such as Lego Mindstorms robotics sets. The lab will collaborate with the Lemann Foundation to support maker initiatives in Brazil — such as a project to adapt Circuit Stickers, peel-and-stick electronics for crafting circuits, for Brazilian use.</p>
<p>In conjunction with these efforts, the Media Lab will also host visits from Brazilian educators and researchers. Through the Lemann Creative Learning Fellows Program, these professionals will experience the innovative culture of the Media Lab so that they can establish similar environments in Brazil.</p>
<p><strong>Finding solutions to Brazilian educational challenges</strong></p>
<p>The Lemann Education Fellowships support research related to education, educational reform, or technologies for education by Brazilian and non-Brazilian students (with the latter doing research in Brazil). The goal is to create a rich, multidisciplinary environment for students to find solutions to Brazilian educational challenges.</p>
<p>The inaugural Lemann Fellows are Bruno Santos, an MBA candidate at the MIT Sloan School of Management; Susana Cordeiro Guerra, a PhD student in political science; and Juliana Cavalcante, who is also pursuing her MBA through MIT Sloan.</p>
<p>Santos, the founder of a startup that works to improve democracy in Brazil by educating and engaging citizens, will focus his MIT research on the intersection between entrepreneurship, economics, and politics. Guerra was previously a Lemann Fellow at Harvard University; her experience there helped inspire her PhD work at MIT on the role that political and administrative factors play in the process of implementing educational reforms. Most recently, Cavalcante worked for the Lemann Foundation; her MIT research will focus on improving education management in Brazil with the goal of ensuring that every student in Brazil has access to quality education.</p>
<p>The MIT International Science and Technology Initiatives (MISTI) is MIT's flagship international education program. There are 18 MISTI country programs, including MIT-Brazil. Key to MISTI is its Global Seed Funds program, which supports research collaborations between MIT faculty and international colleagues.</p>
<p>The Lemann Seed Fund for Collaborative Projects will support research pursued jointly by MIT and Brazilian researchers. The goal is to expand on the MIT-Brazil program to develop a community of entrepreneurs, scientists, and managers who are equipped to advance education in Brazil.&nbsp;</p>
Giving, Education, teaching, academics, Media Lab, Brazil, Awards, honors and fellowships, MISTI, International initiatives, STEM education
National Geographic identifies six from MIT community as &quot;Emerging Explorers&quot;
http://newsoffice.mit.edu/2015/national-geographic-six-mit-emerging-explorers-0612
Students, researchers, and alumni among 14 honored as &quot;young trailblazers.&quot;
Fri, 12 Jun 2015 14:05:01 -0400
School of Architecture and Planning
http://newsoffice.mit.edu/2015/national-geographic-six-mit-emerging-explorers-0612
<p>The <a href="http://mitsha.re/1JOiUkz">2015 National Geographic Emerging Explorers program</a> has honored six explorers with ties to the MIT community. The program, which defines explorers broadly to include scientific and intellectual exploration, seeks to identify and support “young trailblazers whose ideas are helping change the world.”</p>
<p>“Our emerging explorers are inspiring young visionaries who are looking at ways to remedy global problems and are undertaking innovative research and exploration,” says Terry Garcia, National Geographic’s chief science and exploration officer, <a href="http://mitsha.re/1FYiJh4" target="_blank">in an announcement</a> on the organization’s website.&nbsp;“They will help lead a new age of discovery.”</p>
<p>The class of 2015 includes 14 individuals in fields ranging from paleoanthropology and urban agriculture to biomedical engineering and marine conservation. The six Emerging Explorers with ties to MIT are:</p>
<p><strong>Leslie Dewan SB ’07, PhD ’13</strong><br />
An MIT-trained nuclear engineer and self-described environmentalist, Dewan believes <a href="http://www.nationalgeographic.com/explorers/bios/2015/leslie-dewan/" target="_blank">nuclear power is the best way</a> to produce large amounts of carbon-free energy and combat climate change. Her company, <a href="http://www.transatomicpower.com/" target="_blank">Transatomic Power</a>, has designed a&nbsp;molten salt reactor that&nbsp;could&nbsp;safely and efficiently convert existing stockpiles of&nbsp;nuclear waste into enough electricity to power the world for 72 years.</p>
<p><strong>Caleb Harper MA ’14</strong><br />
Harper seeks to <a href="http://www.nationalgeographic.com/explorers/bios/2015/caleb-harper/" target="_blank">reimagine agriculture</a> by adapting it to produce high-quality food in the heart of future cities. He is the founder and lead researcher of <a href="http://mitcityfarm.media.mit.edu/" target="_blank">CityFARM</a>, part of the Changing Places group in the MIT Media Lab.</p>
<p><strong>Manu Prakash SM ’05, PhD ’08</strong><br />
Cited for his vision for “<a href="http://www.nationalgeographic.com/explorers/bios/2015/manu-prakash/" target="_blank">frugal science</a>,” Prakash designs inexpensive laboratory instruments, such as the Foldscope, a microscope made of paper, for use in the developing world. Now an assistant professor at Stanford University, Prakash did his graduate work at the MIT Media Lab.</p>
<p><strong>Steve Ramirez</strong><br />
Ramirez is a PhD candidate in MIT's <a href="http://bcs.mit.edu/" target="_blank">Department of Brain and Cognitive Sciences</a>. After a cousin suffered a brain injury during childbirth and lapsed into a coma, Ramirez became fascinated by <a href="http://www.nationalgeographic.com/explorers/bios/2015/steve-ramirez/" target="_blank">how the brain works</a>. Today, he explores how memory functions — research that one day may be used to treat depression or posttraumatic stress disorder (PTSD) — or that may even lead to a cure for Alzheimer’s.</p>
<p><strong>David Sengeh SM ’12</strong><br />
National Geographic labeled Sengeh a “Renaissance man” for activities that range from rapping to heading a global charity to <a href="http://www.nationalgeographic.com/explorers/bios/2015/david-moinina-sengeh/" target="_blank">developing the next generation of prosthetic sockets</a> and wearable mechanical interfaces. He is currently a PhD student in the <a href="http://www.media.mit.edu/research/groups/biomechatronics" target="_blank">Biomechatronics Group</a> in the MIT Media Lab.</p>
<p><strong>Skylar Tibbits SM ’10</strong><br />
Tibbits explores <a href="http://www.nationalgeographic.com/explorers/bios/2015/skylar-tibbits/" target="_blank">programmable materials and “4-D printing”</a> — materials that transform over time by reconfiguring, changing shape, or adapting to their environment. Trained in both architecture and design computation, he directs the <a href="http://www.selfassemblylab.net" target="_blank">Self-Assembly Lab</a> in the MIT Department of Architecture.</p>
<p>Each of the honorees receives $10,000 to support his or her work. National Geographic is promoting the program and the work of its 2015 class with a series of activities from June 8-12 called Explorers Week.</p>
<p>To read interviews with the honorees, visit <a href="http://mitsha.re/1JOiUkz" target="_blank">National Geographic’s Emerging Explorers Web page</a>.</p>
MIT National Geographic Emerging Explorers
Awards, honors and fellowships, Students, Graduate, postdoctoral, Alumni/ae, School of Architecture and Planning, School of Science, School of Engineering, Architecture, Brain and cognitive sciences, Nuclear science and engineering, Media Lab
3 Questions: Economies as computers, products as information
http://newsoffice.mit.edu/2015/3-questions-economies-computers-products-information-0609
New book argues that economic development is a special case of the growth of information.
Tue, 09 Jun 2015 00:00:00 -0400
Larry Hardesty | MIT News Office
http://newsoffice.mit.edu/2015/3-questions-economies-computers-products-information-0609
<p><em>Cesar Hidalgo, the Asahi Broadcasting Corporation Associate Professor of Media Arts and Sciences at the MIT Media Lab, has a PhD in statistical physics, but he’s applied the tools of that discipline to topics ranging from the </em><a href="http://newsoffice.mit.edu/2014/network-maps-global-fame-different-language-speakers-1216"><em>dissemination of cultural information</em></a><em> to </em><a href="http://newsoffice.mit.edu/2011/profile-hidalgo-0207"><em>economic development</em></a><em>. In 2012, he signed a contract with Basic Books to write a book about his views on economic development.</em></p>
<p><em>But once he started writing, he began to think of economic development as an aspect of a more general phenomenon: the growth of physical order, or information. In the end, a description of his research on economic development constitutes the final fourth of a book titled “Why Information Grows: The Evolution of Order, from Atoms to Economies,” published this month. Hidalgo discussed the book with </em>MIT News.</p>
<p><strong>Q. </strong>How are you using the term “information”?</p>
<p><strong>A.</strong> I use information to refer to raw physical order. At the beginning of chapter two, I give the example of the world’s most expensive car, a Bugatti Veyron, which a Chilean bought for $2.5 million. Imagine that you just won that car in the lottery, and you crash into a wall and total it. Now how much is that Bugatti Veyron worth? You don’t need a PhD in economics to know that the value dropped considerably. But what changed? Well, the atoms of the car did not change. What changed was the way in which those atoms were connected. That order is information.</p>
<p>So ultimately, everything in our economy involves concoctions of physical order, and economies are nothing other than distributed computers that compute that physical order.</p>
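Hidalgo's Bugatti intuition — identical atoms, different arrangement, different information — can be loosely illustrated in code. This analogy is mine, not the book's: compressed size is a crude proxy for how much order a byte sequence contains, so shuffling a sequence (the "crash") leaves its composition intact but destroys the order that made it compressible.

```python
import random
import zlib

random.seed(42)  # fixed seed so the illustration is reproducible

# The same "atoms" (characters), arranged two ways.
ordered = b"abcd" * 250            # a highly ordered arrangement, 1000 bytes
atoms = list(ordered)
random.shuffle(atoms)              # the "crash": same atoms, order destroyed
shuffled = bytes(atoms)

assert sorted(ordered) == sorted(shuffled)  # identical composition

len_ordered = len(zlib.compress(ordered))
len_shuffled = len(zlib.compress(shuffled))
print(len_ordered, len_shuffled)   # the ordered version compresses far better
```

The compressor can exploit the regular structure of `ordered` but not of `shuffled`, even though both contain exactly the same bytes — a rough stand-in for the value that vanished when the car hit the wall.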
<p><strong>Q. </strong>So why does information grow?</p>
<p><strong>A.</strong> The universe has this tendency of averaging itself out: Heat flows from hot to cold, and music vanishes as sound waves travel through the air. So how the heck does the universe get to have a planet like Earth, that is so rich in information, and that at the same time is governed by the second law of thermodynamics?</p>
<p>One of the main clues to answering that question came from Ilya Prigogine, who won the 1977 Nobel Prize in chemistry. He was the first one to solve a statistical-physics system that was out of equilibrium. And he found that, in out-of-equilibrium systems, order emerges naturally.</p>
<p>The basic example of that is a bathtub full of water. When you pull out the plug, you have a little whirlpool. That whirlpool is the steady state of a system that is out of equilibrium. It has a lot of correlations, because the velocity of the molecules that are neighboring each other in the whirlpool tends to be the same.</p>
<p>But that mechanism can’t explain more complex forms of information, so there have to be a few more mechanisms at play. The second ingredient is that for information to endure, it has to be deposited in solids. Think about your body: Obviously, if you cut a person, it’s very juicy inside. But in reality, a person is not that juicy, because at a much finer scale there are aperiodic crystals — like DNA, RNA, and proteins — that are technically solids. Because they’re solids, they’re able to shield the order in them against entropy for a very long time. So the second ingredient for the growth of information is to have solids that allow information to last so that you can recombine it and create more information.</p>
<p>The final ingredient is that matter needs to develop the capacity to compute. All life forms are computers that are ingesting physical order and generating physical order, whether it’s at the molecular level, in the case of the cells, or in the nervous system, like we do.</p>
<p>So together, out-of-equilibrium systems, solids, and the computational capacities of matter explain the physical origins of information.</p>
<p><strong>Q. </strong>How does all of this tie into your research on economic growth?</p>
<p><strong>A.</strong> All systems have a finite capacity to accumulate information and a finite capacity to compute. At some point, even multicellular organisms like us run out of capacity. So to make information grow, you need to have a team or a group or a society with a higher capacity to generate information. That makes the problem of economic growth a particular case of the growth of information, because economic growth involves the capacity to generate physical order. In the case of economies, this is the capacity to generate order that has economic value.</p>
<p>So the question then is, “How do people form the networks they need to increase their computational capacity?” In traditional economic approaches, people see these networks as an epiphenomenon of economic activity. You have something that I want to buy, so we connect and make a transaction; then we go our separate ways. But this is an oversimplification. What sociologists, like Mark Granovetter, showed is that there is a lot of pre-existing social structure, like families, that embed economic activity.</p>
<p>This pre-existing social structure, and social institutions such as trust, are important for the accumulation of computation capacity. As Francis Fukuyama argued in his book “Trust,” societies that differ in their levels of trust gravitate to different types of industries. On the one hand, you have familial societies, in which people don’t trust each other. Here, companies tend to be managed by a relatively small group of people who are all related by family. Their computational capacity is modest. They’re going to gravitate toward industries like agriculture or extractive industries or finance or retail, where basically you can have a few brothers manage the business.</p>
<p>In trust-based societies, where people trust each other more, the cost of links is lower, and people tend to form large networks with non-kin. In those non-kin societies, you have, let’s say, a Steve Jobs and a Steve Wozniak and a Jonny Ive. You have a diverse pool of talent that creates large networks that gravitate toward complex industries where humans crystallize a lot of imagination. In trust societies, people gravitate towards the aerospace industry, car manufacturing, electronics, pharmaceuticals — the most complex industries, because they’re able to form larger networks that can accumulate more computation.</p>
<p>So countries with a lot of trust and good institutions can create very complex computers that are able to process large volumes of information and create complex products that are rare and have a big premium on the market. So by thinking of economies in terms of information and computation, you can also connect institutions with the mix of products that countries make and with wealth. A social network is nothing other than a distributed computer.</p>
Research, Faculty, School of Architecture and Planning, Media Lab, Books and authors
Faculty highlight: W. Craig Carter
http://newsoffice.mit.edu/2015/faculty-highlight-craig-carter-0601
Materials science professor develops algorithms to solve problems across disciplines, strengthens online teaching techniques, and contributes to scientific art.
Mon, 01 Jun 2015 18:07:01 -0400
Denis Paiste | Materials Processing Center
http://newsoffice.mit.edu/2015/faculty-highlight-craig-carter-0601
<p>Whether he's tackling thermodynamics and kinetics of batteries, modeling solid-state dewetting, or undertaking an artistic collaboration, MIT Professor&nbsp;<a dir="ltr" href="http://dmse.mit.edu/faculty/profile/carter" target="_blank">W. Craig Carter</a>&nbsp;brings a mathematical approach to solving problems and creating new work, developing fresh algorithms for each venture. He is also developing new paradigms for online materials science education, melding factual instruction with critical thinking and programming skills.</p>
<p>"I've gone from topic to topic pretty rapidly, and it kind of stems from an applied mathematical bent that I've always had in my career," says Carter, 54, the POSCO Professor of Materials Science and Engineering. "That gives you the ability to jump into a topic, find what problems are useful to be solved, and either do kind of a theoretical development or do simulations which shed insight onto materials phenomena."</p>
<p>On the scientific side, Carter collaborates closely with fellow Department of Materials Science and Engineering (DMSE) Professor&nbsp;<a dir="ltr" href="http://dmse.mit.edu/faculty/profile/Chiang" target="_blank">Yet-Ming Chiang</a>, whose experimental prowess complements Carter's computational skills, while on the artistic side, Carter partners frequently with associate professor of media arts and sciences&nbsp;<a dir="ltr" href="http://matter.media.mit.edu/people/bio/neri-oxman" target="_blank">Neri Oxman</a>&nbsp;in creating nature-inspired sculptural objects.</p>
<p>Carter's recent projects include:</p>
<p>• developing the Materials Science Curriculum 2.0, using <a dir="ltr" href="http://www.wolfram.com/mathematica/" target="_blank">Wolfram Mathematica</a> notebooks and the Wolfram Language as an integral part of proctored tutorials that engage students and build skills interactively;</p>
<p>• analyzing decision-making to integrate intermittent green-power sources such as wind and solar into the always-on power grid;</p>
<p>• understanding battery fatigue, assisted by postdoc Giovanna Bucci's work simulating the failure mechanisms in battery microstructures and solid-oxide fuel cells;</p>
<p>• modeling dewetting phenomena in thin films, supervising PhD student <a dir="ltr" href="https://mpc-www.mit.edu/component/k2/item/536-modeling-solid-state-thin-film-dewetting" target="_blank">Rachel V. Zucker</a>'s creation of Wulffmaker and other software solutions; and</p>
<p>• a personal project to restore a 14th-century structure in the Burgundy region of France.</p>
<p><strong>Retooling education</strong></p>
<p>Distance education can cover a range of needs, from the MIT student continuing classes toward a degree while out of the country to the online learner in a massive open online course (MOOC) seeking enrichment. "MOOCs are very good for some things, but I don't think they're very good for higher education or developing critical thinking skills," Carter says. In contrast to MOOCs, Carter takes a different approach for materials science by building courses around a "master class" model of students interacting with an instructor. That means enrollments are necessarily smaller and instructor time has to be dedicated to proctoring the online class. His proctored scaffolding framework for online coursework uses modules that incorporate&nbsp;Wolfram Mathematica&nbsp;programming.</p>
<p>Carter has developed online tutorials for several materials science courses, and this spring semester offered his <a dir="ltr" href="http://student.mit.edu/catalog/search.cgi?search=3.017&amp;style=verbatim&amp;when=*&amp;termleng=4&amp;days_offered=*&amp;start_time=*&amp;duration=*&amp;total_units=*" target="_blank">3.017</a> (Modeling, Problem Solving, Computing, and Visualization) course exclusively online to students at MIT.</p>
<p>Using a&nbsp;<a dir="ltr" href="http://www.webex.com" target="_blank">WebEx</a>&nbsp;interface, Carter says, builds student skills in video conferencing, while the Mathematica component builds programming skills. Both are necessary professional skills for today's engineering students, he says. "This class combines programming, mathematics, visualization, and prototype problems in materials science, and puts them all together and gives the students very challenging problems to solve. I think that solving these problems is a very effective way not only to learn the subject but to develop skills which will transfer into many different aspects of their professional lives," Carter says.</p>
<p>"My goal is to develop a very holistic curriculum for materials science and do a job that is the best I could possibly do," he says. "In developing the methods for materials science, hopefully it'll create a prototype for other STEM disciplines to follow."</p>
<p>Carter previously received a&nbsp;<a dir="ltr" href="http://blog.wolfram.com/2012/11/19/wolfram-innovator-awards-2012/" target="_blank">Wolfram Innovator Award</a>&nbsp;for his use of Mathematica in the classroom. "What he's trying to do is really create a platform for learning that engages the students, makes them think in that distance learning environment," says Wolfram business development associate Adriana O'Brien. "We're always delighted to see him, especially at our technology conferences. He always brings something new to the table for other higher education users and professors that are creating digital content with Mathematica."</p>
<p>Carter has been assisted by Rachel Zucker, who earned her PhD this spring, and recently recruited Kyle Keane, a former Wolfram employee, to work on the&nbsp;<a href="http://mpc-www.mit.edu/component/k2/item/537?Itemid=537" target="_blank">MSC 2.0</a> project.</p>
<p><strong>Power grid management</strong></p>
<p>With Yet-Ming Chiang and Throop Wilder, Carter co-founded&nbsp;<a dir="ltr" href="http://www.24-m.com/company.html" target="_blank">24M</a>, a company with a mission of making low-cost, high-energy-density batteries for power grid and transportation markets. While 24M's flow battery technology is based on innovations in semi-solid electrodes, Carter has also been developing economic models of how grid operators should make efficient decisions.</p>
<p>"Most governments right now have a system where if a solar plant or a wind plant puts power onto the grid, the utility, or the grid operator, has to consume it. It creates havoc for the grid," Carter explains. The drive toward renewable sources has paradoxically increased greenhouse emissions from gas generators, because cycling them on and off frequently raises their carbon footprint, Carter says.</p>
<p>Because of these issues, adding battery backup to the grid isn't a plug-and-play operation. Operators need help in choosing the right-size batteries for their needs and guidelines for operating them. Once he recognized the problem, Carter says, he started constructing models for how to purchase and operate a battery. "It's not really materials science, but it is an application of algorithms and modeling, which is my craft," he explains.</p>
<p>While 24M has developed some proprietary battery management code, a number of academic problems remain that Carter is addressing through the MIT-Skoltech Center for Electrochemical Energy Storage (<a dir="ltr" href="http://mpc-www.mit.edu/component/k2/item/523-fostering-u-s-russia-energy-innovation" target="_blank">CEES</a>). "What are the best algorithms to optimize battery usage? What can we say about the mathematics, or how can we improve the algorithms, to make a much more efficient controller? Those are things that 24M is not interested in, and so it became a project that is part of this Skoltech initiative to do some academic work on that," Carter says. "At MIT, what we're trying to do is develop optimization techniques and then create software based on the mathematics of optimization," Carter adds. Lutao Xie, a graduate student in the interdisciplinary&nbsp;<a dir="ltr" href="http://computationalengineering.mit.edu/" target="_blank">Center for Computational Engineering</a>, is currently working on those optimization techniques for optimal dispatch.</p>
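<p>The dispatch question Carter raises can be made concrete with a toy model (illustrative only; this is neither 24M's proprietary code nor the CEES algorithms): given an hourly price forecast, choose when to charge and discharge so that arbitrage revenue is maximized subject to capacity, power, and efficiency limits. A minimal dynamic-programming sketch, with all parameter values hypothetical:</p>

```python
# Toy battery-dispatch optimizer: maximize arbitrage revenue over an
# hourly price forecast by dynamic programming on a discretized state
# of charge. All parameters are illustrative, not any vendor's.

def optimal_dispatch(prices, capacity_kwh=10.0, power_kw=5.0,
                     efficiency=0.9, levels=21):
    """Return (revenue, schedule), where schedule[t] is grid-side
    energy bought (+) or sold (-) in hour t, in kWh. A one-way
    efficiency is applied on each of charge and discharge."""
    step = capacity_kwh / (levels - 1)   # kWh per state-of-charge level
    max_move = int(power_kw // step)     # levels reachable in one hour
    value = [0.0] * levels               # value-to-go after the last hour
    choice = []
    for price in reversed(prices):       # backward induction over hours
        new_value, moves = [], []
        for s in range(levels):
            best, best_d = float("-inf"), 0
            lo = -min(max_move, s)
            hi = min(max_move, levels - 1 - s)
            for d in range(lo, hi + 1):
                energy = d * step        # change in stored energy
                if energy > 0:           # charging: buy more than stored
                    cash = -price * energy / efficiency
                else:                    # discharging: sell less than drawn
                    cash = -price * energy * efficiency
                v = cash + value[s + d]
                if v > best:
                    best, best_d = v, d
            new_value.append(best)
            moves.append(best_d)
        value = new_value
        choice.append(moves)
    choice.reverse()
    # Walk forward from an empty battery to recover the schedule.
    soc, schedule = 0, []
    for t in range(len(prices)):
        d = choice[t][soc]
        energy = d * step
        schedule.append(energy / efficiency if energy > 0
                        else energy * efficiency)
        soc += d
    return value[0], schedule
```

<p>A real controller would also account for degradation costs, demand charges, and forecast uncertainty; the point here is only that dispatch reduces to an optimization over the battery's state of charge.</p>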
<p><strong>Faculty partnership</strong></p>
<p>Professors Carter, who specializes in modeling, and Chiang, who specializes in experimental work, hold joint research group meetings. "The students will have some part modeling, some part work in the laboratory. We have these joint meetings, where modeling and experiments and data are all discussed on the same footing," Carter explains. "The modeling may include doing data analysis for experiments that are being done or trying to construct models that help us understand that data and what's happening in the experiment."</p>
<p>A current research thrust is understanding the effect of replacing the liquid electrolyte in traditional batteries with solid electrolytes for all-solid-state batteries. "There is going to be a mechanical problem associated with how the active electrode particles are mechanically coupled to the electrolyte. One anticipates that the failure modes are going to be delamination of the active battery materials, which convey&nbsp;the ions from the anode to the cathode and vice versa," Carter says.&nbsp;"We're anticipating that we're going to need to quickly do models that are based on coupled physics, electrostatics, mechanics, and chemistry, put these all together, and begin to model how these batteries behave. The difficulty right now is the experimental groups are just developing methods to get the mechanical properties out, so the models right now are moving ahead. We want to have the infrastructure in place so that as soon as we have the mechanical properties, we'll be able to plug them directly into the model and start doing comparisons. In the meantime, we'll develop a lot of intuition for how the batteries behave given a set of parameters that describe the mechanical properties."</p>
<p>"The models will help give us a foundation on which we understand the experiments,"&nbsp;Carter predicts. "The experimental results, as they develop, will help direct how we should push the model to look at different kinds of effects. Eventually, there is a convergence where data is coming up ready to plug into a model, and then, I think, we have this nice, complete understanding of how these batteries behave, and thus perhaps some design principles."</p>
<p><strong>Lithium battery stresses</strong></p>
<p>MIT postdoctoral associate&nbsp;<a dir="ltr" href="https://mpc-www.mit.edu/component/k2/item/535-de-stressing-lithium-batteries" target="_blank">Giovanna Bucci</a> is working on understanding battery fatigue by simulating failure mechanisms in solid-state lithium battery microstructures and solid-oxide fuel cells. Bucci's expertise in continuum scale simulation enables her to model mechanical and chemical behavior of battery charging and discharging. Her work shows that mechanical stress is an important problem that cannot be ignored, Bucci says. With a strong background in solid mechanics, Bucci developed non-linear continuum mechanics-based simulations, using finite element analysis and writing computer code, primarily in C++, to model these interactions.</p>
<p>Bucci's background in rational mechanics, a very theoretical and very mathematical branch of mechanical engineering, makes her highly skilled in understanding non-linear mechanics, Carter says. "She's learning materials science very rapidly; I'm very impressed how rapidly she's learning this. So she is becoming this new synthetic scientist, who comes from a very strong mathematical, theoretical background, is picking up materials science, and she's loving the fact that she's working on these problems, which we know really matter."</p>
<p><strong>Dewetting models</strong></p>
<p>Rachel Zucker, who receives her PhD on June 5, has developed a range of mathematical solutions to explain various dewetting phenomena in thin films. Working under co-advisors Carter and&nbsp;<a dir="ltr" href="http://dmse.mit.edu/faculty/profile/thompson" target="_blank">Carl V. Thompson</a>, Zucker published a&nbsp;<a dir="ltr" href="http://dx.doi.org/10.1016/j.crhy.2013.06.005" target="_blank">paper</a>&nbsp;on two-dimensional edge retraction for highly anisotropic, fully-faceted thin films. She also created open-source code,&nbsp;<a dir="ltr" href="http://pruffle.mit.edu/wulffmaker/" target="_blank">Wulffmaker</a>, to calculate equilibrium shapes of faceted particles attached to deformable surfaces as well as particles attached to rigid surfaces. "Rachel is another good example of working with experimentalists, and doing theory at the same time, because she was co-advised by Carl and me," Carter says.</p>
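<p>Wulffmaker itself is a Mathematica tool, but the underlying Wulff construction is easy to sketch. For a smooth anisotropic surface energy gamma(theta), the boundary of the 2-D equilibrium shape can be traced by the standard parametrization x = gamma*cos(theta) - gamma'*sin(theta), y = gamma*sin(theta) + gamma'*cos(theta). A minimal Python sketch (not Wulffmaker's code; the four-fold anisotropy form and its strength are illustrative choices):</p>

```python
import math

def wulff_shape(gamma, dgamma, n=360):
    """Trace the 2-D Wulff (equilibrium) shape for a smooth surface
    energy gamma(theta) with derivative dgamma(theta), using
    x = g*cos(t) - g'*sin(t), y = g*sin(t) + g'*cos(t)."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        g, dg = gamma(t), dgamma(t)
        pts.append((g * math.cos(t) - dg * math.sin(t),
                    g * math.sin(t) + dg * math.cos(t)))
    return pts

# Illustrative four-fold anisotropy: gamma(t) = 1 + EPS*cos(4t).
EPS = 0.05
shape = wulff_shape(lambda t: 1 + EPS * math.cos(4 * t),
                    lambda t: -4 * EPS * math.sin(4 * t))
```

<p>Keeping EPS below 1/15 keeps gamma + gamma'' positive for this four-fold form, so the traced curve stays smooth and convex; stronger anisotropy produces self-intersecting "ears" that must be trimmed away, which is where the faceted-shape machinery in tools like Wulffmaker comes in.</p>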
<p>Modeling the anisotropic surface tension that underlies dewetting phenomena such as edge retraction in thin films is a mathematically hard problem, Carter says. "What she's done is figure out how to formulate many of these problems,&nbsp;solved many sub-problems associated with retraction of a solid-state film, and then is beginning to synthesize and put it all together so that we have enough understanding and enough results that we can compare it to the experimental results that Carl Thompson is getting in his group."</p>
<p>"There is a lot of work now on creating structures on a substrate that are patterned that you can use to do two things. One is you either use the pattern itself to do something interesting, or you can use the pattern to grow something, like a forest of nanowires," Carter says. Zucker's contributions could help provide better control for patterning wires, transistors and other components in thin films ranging from about 100 micrometers (microns) down to about 10 micrometers (equivalent to about 4 thousandths of an inch to 4 ten-thousandths of an inch).</p>
<p><strong>Innovative art works</strong></p>
<p>Carter collaborated with Neri Oxman on a series of&nbsp;<a dir="ltr" href="http://materialecology.com/projects" target="_blank">projects</a> that integrate her artistic visions with Carter's algorithms, 3-D printing, and most recently computerized numerical control (CNC) machining. "The first one was in MoMA and I think that must have been six or seven years ago," Carter recalls. Their most recent project, the "<a href="http://www.materialecology.com/projects/details/gemini" target="_blank">Gemini</a>" chair, blended advanced manufacturing with mathematics, algorithms, and art. It was shown in Paris as well as locally at Le Laboratoire Cambridge this year.</p>
<p>"Craig's taste in thinking is incredibly unique: He combines scientific rigor with open-ended artistic expressions," Oxman says. "His codes are immaculate and, at the same time, he is a storyteller. He can tell a good story: through his code, through his thinking, in the way he teaches, in his gaze."</p>
<p>"I have learned many things through our collaborations; but mostly I learned from Craig that in creative practice — much like the physical world — strength of attitude is strength of character," Oxman explains. "That innocence triumphs over strategy and that bottom-up research is often more gratifying than top-down planning. Our initial conversations about the relationship between material properties and their microstructures have inspired me to think of design fabrication in the same way — prioritizing physical analysis of what is to generate predictive frameworks for material (and structural) behavior, rather than applying abstract — and often reductive — models on what can be. That's how I fell in love with&nbsp;<a dir="ltr" href="http://www.ctcms.nist.gov/oof/" target="_blank">OOF</a>, and how we came up with the new concept of Finite Element Synthesis as an approach for design."</p>
<p>"At its core is the very idea that we can design forms from the bottom up by controlling relationships between intrinsic material properties and extrinsic environmental stimuli. We started with butterfly wings and today we're printing pavilions. OOF and Craig's approach to materials science and engineering inspired in me a way of seeing the world that overlapped with my values, as designer and creator," Oxman says.</p>
<p>From Oxman's corset-like constructions through dresses and the recent chair, Carter provided algorithms that patterned 3-D barnacle-like surfaces lining, or covering, the objects of Oxman's creations. "The way it works is Neri will get a commission and she'll develop something that's conceptual, and usually maybe the basis of a form on which the object will eventually be attached," Carter explains. "The art stuff is something that is between an exercise of my scientific craft and a hobby. It's been a great joy to work with her," he says.</p>
<p>Carter and Oxman will discuss a concept, for example the natural textures of barnacles, and extend it to something specific, such as barnacles clinging to the structure of a corset. "Mostly what I do is think about the algorithms and the mathematics and then write code which combines the original form, this new texture, and it's all done with a very interesting dialogue between Neri, who is the creative force behind the whole idea, and I hope to think that I add some creativity associated with the development of an interesting algorithm and eventually give the shape its final appearance," Carter says. For the barnacle shapes, the algorithm incorporates a certain amount of randomness but develops an ordered texture, so the result is readily identifiable as a continuous texture while also having places where it attaches. "So there is a lot of mathematics that goes into that," Carter says.</p>
<p>The Gemini chair combined CNC milling of a wood structure and 3-D printing by Stratasys with Oxman's concepts and Carter's algorithms.</p>
<p>"I can say that — without a doubt — Craig's presence so early in my career has been invaluable and meaningful to me in more than one way. Craig taught me that naiveté is as precious (and perhaps as gentle) as nature and encouraged me to operate across scales and disciplines without any expectation for definitions. He taught me what I needed to know, that questions are far more important than answers, and that questioning is a way of being in the world, whether in art or in science, whether in engineering or design. Throughout our collaborations, we sought complete intellectual openness, one that transcends boundaries and even language. In this sense Craig is a true artist: one that constantly questions, that is not willing to take anything for granted, that enjoys ambiguity, that considers any creative pursuit — whether through science, education, or art — as a journey without the need to define a destination," Oxman says.</p>
<p><strong>Other personal interests</strong></p>
<p>Carter and his wife, Martin Carter, who recently retired as an associate dean at the Boston University School of Management, restored a 14th-century structure in the&nbsp;<a dir="ltr" href="http://en.wikipedia.org/wiki/C%C3%B4te_Chalonnaise" target="_blank">Côte Chalonnaise</a>&nbsp;region of Burgundy,&nbsp;<a dir="ltr" href="http://savignysurgrosne.e-monsite.com/pages/le-chateau/description-du-chateau-en-1750.html" target="_blank">Notre Dame de Savigny-sur-Grosne</a>, near Cluny and Taizé. "It had no doors, no plumbing, no electricity, no windows — nothing; it was pretty much a roughly organized pile of rocks," Carter says.</p>
<p>The Carters' have two cats, Pip and Squeak, and a dog, Berri.</p>
<p>Carter also has a famous brother, Chris Carter, creator and executive producer of the&nbsp;<a dir="ltr" href="http://www.fox.com/the-x-files" target="_blank">X-Files</a>. "Sometimes, he would ask me to construct some equations that would appear on a blackboard on the show," Carter says.</p>
W. Craig CarterFaculty, Profile, Materials Science and Engineering, Batteries, online learning, Massive open online courses (MOOCs), Energy, Arts, Algorithms, Media Lab, School of Architecture and Planning, Materials Processing CenterSeeking deeper understanding of how the brain workshttp://newsoffice.mit.edu/2015/faculty-profile-edward-boyden-0522
Edward Boyden develops techniques to study the brain, and how it operates, in finer detail.Thu, 21 May 2015 23:59:59 -0400Helen Knight | MIT News correspondenthttp://newsoffice.mit.edu/2015/faculty-profile-edward-boyden-0522<p>How is the mind formed, and what does it mean to be human?</p>
<p>These are the questions that intrigue Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.</p>
<p>To answer them, we will need a much deeper understanding of how the brain works, according to Boyden, who leads the synthetic neurobiology research group at the MIT Media Laboratory and the McGovern Institute for Brain Research at MIT.</p>
<p>“If we understand how the circuitry of the brain computes things like thoughts and emotions, it could help us to know more about what it means to be human,” Boyden says.</p>
<p>To this end, he and his group are developing techniques to allow neuroscience researchers to study the brain, and how it operates, at a more fundamental level of detail. “Perhaps someday in the future, if we understand how the brain works, we would understand the nature of irrationality and strife, and other aspects of the human condition,” Boyden says. “It could help humanity gain for the better.”</p>
<p><strong>Optogenetics pioneer</strong></p>
<p>Boyden has pioneered the development of technologies such as <a href="https://www.youtube.com/watch?v=Nb07TLkJ3Ww&amp;feature=youtu.be">optogenetic</a> tools, in which light-sensitive proteins from algae and bacteria are added to neurons. This allows the neurons to be activated or silenced with pulses of visible light.</p>
<p>The tools are now widely used among researchers attempting to understand which specific kinds of neuron in the brain are responsible for different aspects of behavior and emotion, such as aggression.</p>
<p>Last year his team announced they had uncovered optogenetic molecules that allow neurons to be <a href="http://newsoffice.mit.edu/2014/optogenetic-toolkit-goes-multicolor-0209">controlled by red light</a>. “Red light is very useful because it can go very deep into the brain or body, so it can activate or silence neurons distributed throughout large regions,” Boyden says.</p>
<p>Working with researchers at the University of Vienna, his group has also recently developed a system that can generate <a href="http://newsoffice.mit.edu/2014/illuminating-neuron-activity-3-d-0518">three-dimensional movies</a> of the brains of small animals at very high speeds.</p>
<p>The system is based on a technology known as light-field imaging. This creates 3-D images by measuring the angles of light rays emitted by a sample. It can be used to image the activity of every neuron in the brain at once, giving researchers a much clearer picture of how different neurons work together to process and act on information.</p>
<p><strong>Novel approaches to imaging</strong></p>
<p>To help researchers take a closer look at the individual molecules, wirings, and connections in the brain, Boyden this year unveiled a new imaging technique based on the polymers used in disposable diapers. When a section of brain tissue is embedded in this dense polymer and water is added, the material swells, moving the biomolecules away from each other.</p>
<p>This technology, called “<a href="http://newsoffice.mit.edu/2015/enlarged-brain-samples-easier-to-image-0115">expansion microscopy</a>,” allows brain tissue to be studied in very fine detail using conventional microscopes, which are widely available and can operate at very high speeds, Boyden says.</p>
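<p>The resolution gain from expansion is simple geometry: if the tissue expands by a linear factor f, two features originally a distance d apart end up f times d apart, so a microscope with a fixed diffraction limit resolves detail a factor of f finer in the original specimen. A back-of-the-envelope sketch (the ~300 nm optical limit and ~4.5x expansion factor are typical figures used here as assumptions, not numbers from this article):</p>

```python
def effective_resolution_nm(optical_limit_nm, expansion_factor):
    """Finest pre-expansion feature spacing that a diffraction-limited
    microscope can resolve after linear expansion by the given factor."""
    return optical_limit_nm / expansion_factor

# Illustrative numbers: ~300 nm diffraction limit, ~4.5x expansion.
print(effective_resolution_nm(300, 4.5))  # ~67 nm
```
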
<p>Expanding the brain in this way also gives researchers the space to attach tags to different biomolecules. These tags could then be used to “read out” the chemical composition of the brain, he says — much as barcodes are added to packaging to mark and identify individual products.</p>
<p>But it is perhaps when all of these techniques are combined that they could offer the greatest insights into how the brain works. They may ultimately help us find new methods to treat the 1 billion people around the world who suffer from brain disorders such as epilepsy, Parkinson’s disease, and schizophrenia, Boyden says.</p>
<p>“If we can really map the brain with molecular precision, and then probe neurons in the map with optical readout and control tools,” he says, “we could hunt down the control knobs in the brain that will allow us to fix these brain disorders.”</p>
<p><strong>A “misfit” finds a home</strong></p>
<p>Boyden graduated from MIT in 1999 with dual bachelor’s degrees in physics and electrical engineering and computer science, as well as a master’s degree in electrical engineering and computer science. Eager to tackle one of research’s remaining “big unknowns,” he developed an interest in the brain, and moved to Stanford University to undertake a PhD in neuroscience. While at Stanford, he worked in the laboratories of Richard Tsien and Jennifer Raymond, studying motor learning, and also began working on his optogenetics research as a side project.</p>
<p>In 2006 he rejoined MIT, establishing the neuroengineering group at the Media Lab to combine his understanding of neuroscience with his background in engineering.</p>
<p>Unlike some university departments and institutions Boyden approached, which&nbsp;did not believe the field of neuroengineering was ready to become a discipline, the MIT Media Lab was willing to take a chance on a “misfit,” he says. “I wanted to be at MIT because I love the open, free-spirited, collaborative place that it is. Also, they were open-minded enough to hire me,” he adds wryly.</p>
<p>But the field has gradually gained acceptance, as the neuroscience community has begun to appreciate how useful the new tools could be to their research.</p>
<p>Then in 2013, President Barack Obama launched the BRAIN Initiative, a national research effort designed to improve our understanding of the mind and diseases such as Alzheimer’s.</p>
<p>“I was one of four people from MIT who were invited to the White House for the announcement that neurotechnology should be a national priority, and the field has really exploded since then,” Boyden says. “So it is really gratifying for the Media Lab to bet on such a crazy idea, and to see it succeed as it has.”</p>
Edward BoydenResearch, School of Engineering, School of Science, Biological engineering, Brain and cognitive sciences, Media Lab, Faculty, Profile, Neuroscience, Nanoscience and nanotechnology, OptogeneticsSeeing genderhttp://newsoffice.mit.edu/2015/face-detector-neurons-primates-gender-0508
MIT neuroscientists pinpoint neurons that help primates tell faces apart. Fri, 08 May 2015 09:30:00 -0400Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2015/face-detector-neurons-primates-gender-0508<p>How do primates, including humans, tell faces apart? Scientists have long attributed this ability to so-called “face-detector” (FD) neurons, thought to be responsible for distinguishing faces, among other objects. But no direct evidence has supported this claim.</p>
<p>Now, using optogenetics,&nbsp;a technique that controls neural activity with light, MIT researchers have provided the first evidence that directly links FD neurons to face-discrimination in primates — specifically, differentiating between males and females.</p>
<p>Working with macaque monkeys trained to correctly identify images of male or female faces, the researchers used a light-sensitive protein to suppress subregions of FD neurons in the inferior temporal (IT) cortex, a visual information-processing region. In suppressing the neurons, the researchers observed a small yet significant impairment in the animals’ ability to properly identify genders.</p>
<p>“If these face-detector neurons are participating in face-discriminating behavior — in telling gender of faces apart — then, if we knock them down, the behavior should take a hit,” says Arash Afraz, a research scientist at MIT’s McGovern Institute for Brain Research and lead author of a paper describing the study in the <em>Proceedings of the National Academy of Sciences</em>.</p>
<p>This experiment, Afraz says, marks a step forward in understanding the links between specific neurons and primate behavior. “You actually have to perturb the activation of that neuron and see if you can affect behavior,” he says. “If that happens, it means these neurons are part of the causal chain for that particular behavior.”</p>
<p>By providing a closer look at primate object-recognition, Afraz adds, the study could also aid in developing visual prostheses that may require direct wiring with the IT cortex. More broadly, understanding the light level needed for optogenetic neural silencing could also aid in developing implantable treatments for patients with temporal lobe epilepsy. “We could have devices implanted in the cortex that automatically turn on when the epilepsy attack starts, and silence the cortex with light,” Afraz says.</p>
<p>Co-authors of the study are James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences, and Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences whose group developed the optogenetics tools used in the study.</p>
<p><strong>Knocking down neurons</strong></p>
<p>In the 1980s, scientists first hypothesized FD neurons, with studies that recorded spikes in neural activity in response to images of faces. “But we [never had] a clear mechanistic connection between the activation of these neurons and face discrimination, as opposed to face detection,” Afraz says.</p>
<p>For the <em>PNAS</em> paper, the MIT researchers trained two monkeys to identify images of gendered faces with about 90 percent accuracy. To do so, they displayed images of male and female faces with varying features slightly to the left or right of a middle fixation point of a screen. Then, they displayed two dots on the top and bottom of the screen; the monkeys looked at the top dot if the face was female, and at the bottom dot if it was male.</p>
<p>The researchers then measured neural activity in the IT cortex of the monkeys, locating a number of subregions where FD neurons were most and least concentrated. Next, they injected high- and low-FD subregions with a virally delivered protein engineered by Boyden’s group, called ArchT, which subdues neural activity in the presence of light.</p>
<p>After a month, the monkeys viewed 1,600 grayscale images of male and female faces, during 40 separate sessions, while the researchers delivered random pulses of green light to the ArchT-treated areas. Suppressing only 1 millimeter of high-FD subregions —&nbsp;not low-FD subregions — impaired the animals’ ability to correctly identify gendered faces by, on average, about 2 percent, the researchers found.</p>
<p>Linking tiny clusters of neurons with the perceptual ability to identify genders suggests those neurons are responsible for processing gendered faces, Afraz says. “Wherever a signal is encoded more explicitly in the brain, that part seems to contribute more to the behavior directly,” Afraz explains. “If we know the information of a face’s gender is encoded more explicitly in a small bit of cortex, knocking down that bit of cortex takes a bigger toll on behavior.”</p>
<p><strong>New avenue of discovery</strong></p>
<p>While his lab has researched visual processing for 20 years, DiCarlo notes that “this collaboration with Boyden — who develops cutting-edge tools — is what opened the door to this significant advance, and to an entire new avenue of discovery.”</p>
<p>In particular, as one of the first documented uses of optogenetics to induce behavioral changes in primates, the study also demonstrates the potential for using it to study vision and behaviors in primates, Boyden says. In contrast to traditional neural-suppression methods, for instance, optogenetics tools can zero in on tiny clusters of neurons for brief moments,&nbsp;which can better pinpoint specific neurons as drivers of behavior.</p>
<p>“We’re getting at the actual circuitry of the brain and the exact neurons that are involved in discriminating [between faces],” Boyden says. “These tools offer higher temporal and spatial resolution than any other neural perturbation method.”</p>
<p>William Newsome, a professor of neurobiology at Stanford University, says the study “addresses a fascinating problem in systems neuroscience … in a set of very challenging experiments” that utilize both optogenetic and pharmaceutical suppression techniques.</p>
<p>“This,” he adds, “is a powerful demonstration that face-detecting neurons mediate the perceptual ability to discriminate among faces — a very cool result.”</p>
<p>The study was funded by the National Institutes of Health.</p>
Optogenetics, Gender, Research, School of Science, Neuroscience, Brain and cognitive sciences, Media Lab, School of Architecture and Planning, Nanoscience and nanotechnologyChief Executive Leung Chun-ying of Hong Kong visits MIThttp://newsoffice.mit.edu/2015/chief-executive-leung-chun-ying-hong-kong-0506
University-driven innovation a key focus of Institute meeting.Wed, 06 May 2015 10:30:00 -0400Peter Dizikes | MIT News Officehttp://newsoffice.mit.edu/2015/chief-executive-leung-chun-ying-hong-kong-0506<p>Chief Executive Leung Chun-ying of Hong Kong visited MIT Tuesday morning, engaging in a lively dialogue with faculty and Institute leaders about university-driven innovation.</p>
<p>Leung toured portions of the MIT Media Laboratory, participated in a roundtable discussion with MIT professors about developing innovation ecosystems, and held a private meeting with Provost Martin Schmidt.</p>
<p>“Hong Kong is very much a cosmopolitan and international city,” Leung said, adding, “Therefore we look for collaborative activities.”</p>
<p>On the subject of directing academic research toward spurring innovation in the larger economy, Leung said he thinks that “this model is working” in Hong Kong, but that he would “very much like to work with our friends at MIT” on finding further ways of applying research toward production and growth.</p>
<p>The faculty who took part in a group discussion with Leung praised MIT’s existing partnerships with Hong Kong, noted some areas of development for the region, and emphasized the importance of analyzing the conditions in which innovation grows.</p>
<p>Fiona Murray, the Bill Porter Professor of Entrepreneurship, associate dean for innovation at the MIT Sloan School of Management, and co-director of the MIT Innovation Initiative, pointed out to Leung that MIT engages in technological development, and also studies the enterprise.</p>
<p>“We practice innovation on a very regular basis,” Murray said. “But we also engage in the science of innovation.”</p>
<p><strong>Finding the right balance in research</strong></p>
<p>Indeed, as the MIT faculty engaging with Leung observed, finding the right combination of pure research and applied activities remains a key to technology-based growth.</p>
<p>Media Lab Director Joi Ito, who guided Leung at the outset of his visit and participated in the roundtable, observed that many of the lab’s corporate partners give its researchers a wide berth to pursue their work. On the other hand, Ito noted during the roundtable discussion, the goal of many researchers now is to have a rapid, real-world impact through their work.</p>
<p>As Ito stated, the Media Lab has updated its well-known motto about producing prototypes — “demo or die” — to “deploy or die,” representing the goal of placing finished products in the world.</p>
<p>Charles Sodini, the Clarence J. LeBel Professor in Electrical Engineering, said that “research results sow the seeds of innovation.” However, as he added, there is “a fine balance of where you want to be” in terms of balancing pure and applied research activities.</p>
<p>Vincent Chan, the Joan and Irwin M. Jacobs Professor in MIT’s Department of Electrical Engineering and Computer Science, and a native of Hong Kong, suggested that Hong Kong has significant growth opportunities in computer hardware and in financial technologies, from security to data management and data analysis. Compared with some neighbors in the region, Chan said, technologists may be “more creative in Hong Kong,” and could flourish in significant high-tech sectors.</p>
<p>Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, suggested that Hong Kong might have an interesting role to play in guiding the U.S. and China toward new forms of agreement on international data security and privacy, though he added, “I think that only works if everybody is a stakeholder.”</p>
<p>Hong Kong has been a Chinese administrative region since 1997, when it was handed over from British control. The area enjoys considerable autonomy from China, and is a leading center of global finance. It is also, as Leung noted, heavily oriented around the service economy, leaving room for technology-based growth.</p>
<p><strong>Modeling Hong Kong, block by block?</strong></p>
<p>Earlier in his visit, Leung was shown three labs in the Media Lab. In the Tangible Media group, director Hiroshi Ishii, the Jerome B. Wiesner Professor, showed Leung numerous new kinds of tangible interfaces for digital information. In the Changing Places lab, director Kent Larson helped show Leung a variety of innovations, from Lego-based urban models to prototypes of autonomous vehicles.</p>
<p>Leung showed flashes of humor while engaging with MIT researchers about the potential uses of these innovations.&nbsp;</p>
<p>While examining a Lego-based city model that provides real-time estimates of factors such as energy consumption and walkability, Leung joked that in Hong Kong, people would also want to know the “shopability” ratings of their locations.</p>
<p>Viewing a half-size prototype of a very compact urban automobile, Leung quipped, “Maybe you can make large, medium, and small” sizes of the vehicles for people.&nbsp;</p>
<p>MIT and Hong Kong have developed growing ties in recent years, a relationship that is set to expand following a $118 million gift this January from Samuel Tak Lee ’62 SM ’64, a real estate developer whose work has significantly focused on Hong Kong, among other global cities. The gift will promote social responsibility among entrepreneurs and academics in real estate, among other things.</p>
<p>Summing up the visit, Leung said it was a “great pleasure and privilege to be at MIT,” pledging again his interest in identifying “subjects for collaboration” in the future.</p>
MIT Provost Martin A. Schmidt (left) greets Hong Kong Chief Executive Leung Chun-ying
Media Lab, Sloan School of Management, Global, Asia, Provost, Special events and guest speakers, Innovation and Entrepreneurship (I&E), Innovation Initiative

Connecting the masses
http://newsoffice.mit.edu/2015/startup-jana-free-mobile-data-0505
Startup’s advertising platform provides free mobile data to prepaid cellphone users in developing countries.
Tue, 05 May 2015 00:00:01 -0400
Rob Matheson | MIT News Office
http://newsoffice.mit.edu/2015/startup-jana-free-mobile-data-0505
<p>In an unassuming office building in downtown Boston, a midsized tech startup founded by an MIT alumnus reaches more than half of the world’s population.</p>
<p>Partnering with major mobile operators, Jana Mobile, founded by Nathan Eagle PhD ’06, is positioned to make the Internet free for nearly 3.5 billion prepaid mobile users worldwide.</p>
<p>With Jana’s ubiquity comes significant social impact: In developing countries across Africa, Asia, and Latin America where Jana operates, people earn low wages, yet are increasingly buying prepaid mobile phones for communication. According to Jana, about 10 percent of these people’s daily wages pays for mobile connectivity, with roughly three hours of work equaling the cost of one hour of data.</p>
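The affordability figures above can be sanity-checked with simple arithmetic. The sketch below, in Python, assumes an eight-hour workday (an assumption the article does not state) and uses its approximate 3:1 work-to-data ratio; it is illustrative only, not Jana’s data.

```python
def daily_data_minutes(wage_share, workday_hours, work_hours_per_data_hour):
    """Minutes of mobile data a prepaid user can buy by spending
    `wage_share` of a day's wages, given that `work_hours_per_data_hour`
    hours of work pay for one hour of data (the article's ~3:1 ratio)."""
    work_hours_spent = wage_share * workday_hours
    return work_hours_spent / work_hours_per_data_hour * 60

# ~10 percent of wages in an assumed 8-hour workday, at 3 work-hours per data-hour:
print(round(daily_data_minutes(0.10, 8.0, 3.0)), "minutes of data per day")  # 16
```

On those assumptions, 10 percent of a day’s wages buys roughly a quarter-hour of connectivity, which is what makes ad-subsidized data credits meaningful at this scale.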
<p>Through Jana’s app, mCent, prepaid users can download apps, watch ads, or participate in other marketing services. In return, they receive credits of 10 cents or more that can be used to buy more data. This credit offsets the cost to download and run an app for up to several weeks. In the last year, mCent users generated nearly 1.5 billion megabytes of free data. &nbsp;</p>
<p>“Ultimately, this is the world’s largest compensation platform,” says Eagle, whose mobile-programming research at the MIT Media Laboratory set the stage for Jana.</p>
<p>The company is currently integrated with 237 mobile operators active in 102 countries. Jana’s clients are major app developers — such as Twitter, Amazon, and Google — that pay Jana to gain long-term users in emerging markets, which were previously difficult to penetrate because of users’ restricted data plans. According to Jana, mCent members use apps three to 14 times more than users acquired through other channels.</p>
<p>Although launched in 2009, Jana exploded in popularity with last June’s launch of mCent, which has more than 25 million registered users across Asia, Africa, and Latin America. It’s now one of the most popular apps of all time in India, positioning Jana as one of the largest mobile advertising platforms there, Eagle says. “We finally decided to put an Android wrapper, a consumer-facing front-end on this robust infrastructure,” he says. “From there, it took off.”</p>
<p><strong>Going mobile</strong></p>
<p>Eagle’s work in mobile programming began at MIT in the early 2000s, when he joined Alexander “Sandy” Pentland’s Wearable Computer Group at a time when students were mounting computers to their backs or on helmets. During this time, Nokia launched a programmable phone that attracted the worldwide attention of programmers, including Eagle.</p>
<p>In one of the earliest cases of reality mining, Eagle and Pentland used the phones as wearable sensors to passively collect data on 100 MIT students over the course of nine months. Based on that data, they were able to track and predict the students’ behavior. &nbsp;</p>
<p>Eagle’s novel data analytics caught the eye of mobile operators in emerging markets — specifically Africa — that were inundated with petabytes of data from a booming mobile-user base, but had limited analytics resources. Eagle then relocated to a rural Kenyan village, Kilifi, to work on call data analytics, which gave him access to the back-end billing systems of dozens of operators.&nbsp;</p>
<p>While working out of an office at a local hospital, however, Eagle became troubled by the constant shortages in the hospital’s blood supply: Over two months, he was approached four separate times to donate blood for emergency transfusions.</p>
<p>Researching blood supply across Kenya, Eagle found that the issue was a lack of data: Keeping blood supplies stocked involved one person traveling between hospitals, checking current levels, and replenishing when needed —&nbsp;resulting in a lag of about four weeks.</p>
<p>With students at the University of Nairobi, Eagle developed an app that allowed nurses to text their hospitals’ daily blood-supply levels, which were tracked in real time at the centralized blood bank.</p>
<p>The app quickly became a huge success across Kenya, registering hundreds of nurses. But after one week, half the nurses stopped texting. By the month’s end, no nurses were using the platform. The issue was the cost of data.</p>
<p>“The price of a single text message represents a substantial fraction of a nurse’s day’s wage. By asking them to participate every day, we were asking them to take a pay cut,” Eagle says.</p>
<p>Eagle then wrote code that compensated the nurses for each accurate text sent about the day’s blood-supply levels. “We’d credit that nurse about 10 Kenyan shillings to cover the cost of the text, and a penny or two to say thank you,” Eagle says. With that incentive, nurses once again jumped on board, resurrecting the app, which is still being used across Kenya.&nbsp; &nbsp;&nbsp;</p>
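The incentive logic Eagle describes reduces to two steps: validate the day’s report, then top up the sender’s airtime. The Python sketch below is a hypothetical reconstruction; the field names, the plausibility check standing in for accuracy validation, and the shilling amounts (drawn loosely from the article’s figures) are all illustrative, and the real system ran against operator billing back ends.

```python
AIRTIME_COST = 10.0   # Kenyan shillings, roughly the price of one text (article's figure)
THANK_YOU = 1.0       # small bonus, standing in for the article's "penny or two"

def credit_for_report(report, known_hospitals, balances):
    """Credit a nurse's airtime balance for an accurate daily blood-supply text.

    `report` is assumed to look like {"nurse": ..., "hospital": ..., "units": ...};
    the check here is only a plausibility test, a stand-in for whatever
    validation the real system performed.
    """
    accurate = report["hospital"] in known_hospitals and report["units"] >= 0
    if accurate:
        balances[report["nurse"]] = balances.get(report["nurse"], 0.0) + AIRTIME_COST + THANK_YOU
    return accurate

balances = {}
ok = credit_for_report({"nurse": "A01", "hospital": "Kilifi", "units": 12},
                       {"Kilifi"}, balances)
print(ok, balances["A01"])  # True 11.0
```

The key design point survives the simplification: participation must never cost the reporter money, so the credit covers the text plus a margin.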
<p><strong>Flipping the switch</strong></p>
<p>This compensation app, however, soon “flipped a switch” for mobile operators that saw its scalability, Eagle says: Instead of squeezing more money out of nurses, why not ask the Kenyan government, for instance, to buy the airtime for the project?</p>
<p>“Then, why not ask Procter and Gamble, or Unilever?” Eagle says. “These operators got me to start thinking about this as a commercial entity instead of an academic project.”</p>
<p>In 2009, Eagle co-founded Jana under the original moniker “txteagle.” By the following year, Eagle had partnered with 100 mobile operators worldwide, gained access to 2 billion people, and relocated to Boston, with several more employees. Since 2011, the startup has raised $40 million in funding.</p>
<p>Early challenges came from gaining funding and support for the startup’s business model, Eagle says. “Convincing people that we can be a commercially viable enterprise with our core mission of building technology that enables us to make the Internet free in the developing world has been tricky,” he says.</p>
<p>But credibility increased with profitability, Eagle says. Now Jana aims to have 1 billion mCent users, he adds, in hopes of cutting costs of connectivity for all users to around 5 percent of daily wages, from 10 percent.</p>
<p>“You don’t get many opportunities in life to give a billion people a 5 percent raise,” Eagle says. “I think we have a shot at realizing that goal in the next two to three years, which is extraordinary.”</p>
Africa, Asia, Latin America, Developing countries, Development, School of Architecture and Planning, Startups, Media Lab, Innovation and Entrepreneurship (I&E), iPhone, Android, smartphones, Alumni/ae

Japanese Prime Minister Shinzo Abe visits MIT
http://newsoffice.mit.edu/2015/japanese-prime-minister-shinzo-abe-0427
Discussion of innovation keynotes stop; Japan gives gift to Institute.
Mon, 27 Apr 2015 17:45:00 -0400
Peter Dizikes | MIT News Office
http://newsoffice.mit.edu/2015/japanese-prime-minister-shinzo-abe-0427
<p>Prime Minister Shinzo Abe of Japan visited MIT Monday morning as part of his weeklong trip to the U.S., participating in a roundtable discussion of innovation strategies during his stop at the Institute.</p>
<p>Abe called MIT a “center of innovation in the world” and said he was “very impressed and grateful” for the remarks on innovation at the meeting with MIT faculty in fields ranging from bioscience to management and political science.</p>
<p>Abe added that encouraging a “virtuous circle” of innovation, including academia, was “one of the pillars of my growth strategy,” and emphasized his commitment to seeing women have an equal role in innovation and entrepreneurship. Japan will “double our efforts so that female leaders have a better chance,” Abe said, asserting that he wants to “create a society where women can shine.”</p>
<p>Abe also toured three research labs in the MIT Media Lab, and met with MIT President L. Rafael Reif, who in welcoming remarks noted the extensive ties between MIT and Japan.</p>
<p>“Japan is a country MIT has studied and admired for many years,” Reif said, noting that 39 current courses at the Institute focus on Japan. Moreover, Reif observed, over 1,000 undergraduates have worked and studied in Japan as part of the MIT International Science and Technology Initiatives (MISTI) program, which places students in internships; today, more than 1,600 MIT alumni also live in Japan.</p>
<p>In conjunction with Abe’s visit, the government of Japan announced a new gift to MIT.</p>
<p>The gift takes the form of a fund that will initially support research in Japanese politics and diplomacy at MIT’s Center for International Studies (CIS) and, in its second phase, the creation of a new chaired professorship, to be titled the Professor of Modern and Contemporary Japanese Politics and Diplomacy, within MIT’s Department of Political Science.</p>
<p>As the title of the chair suggests, the new position will focus on current-day issues in Japanese politics and international relations, building on MIT’s existing strengths in those areas. The gift will take effect ahead of the start of the 2015-16 academic year. &nbsp;</p>
<p><strong>Innovation from many perspectives</strong></p>
<p>Abe began his visit by talking with researchers in the Media Lab, guided by Media Lab Director Joi Ito. Abe listened to research presentations by Neri Oxman, the Sony Corporation Career Development Associate Professor of Media Arts and Sciences and director of the Mediated Matter group, along with Chikara Inamura and John Klein, graduate students in the group; Hugh Herr, associate professor of media arts and sciences and head of the Biomechatronics group; and graduate student Philipp Schoessler, who works with Hiroshi Ishii, the Jerome B. Wiesner Professor and director of the Tangible Media group.</p>
<p>During the roundtable discussion on innovation, held on the sixth floor of the Media Lab, faculty members took turns making presentations before Abe responded to the group.</p>
<p>Political scientist Richard Samuels, the Ford International Professor and director of CIS, noted that MIT and Japan have historical ties dating to the 1870s, soon after Japan opened to the West; he added that Japan’s “spirit of innovation and improvement has never flagged.” Still, Samuels suggested, while Japan was viewed in the 1980s as the “model for how to do technology right,” today’s innovation landscape is more open-ended and depends on access to capital, a university-based research and spinoff culture, and more.</p>
<p>Other faculty discussed what they regard as especially crucial elements of an innovation ecosystem. Neuroscientist Susumu Tonegawa, the Picower Professor at MIT, recommended altering the rules of Japanese universities to allow professors more time to pursue off-campus research. He added that it is “important for the Japanese government to continue to support fundamental research.”</p>
<p>Robert Langer, the David H. Koch Institute Professor at MIT, also emphasized the significance of academic research in innovation. He cited Cambridge’s biotechnology sector as an example, saying growth “will follow [if we] fund universities to do the best research, and train the best students in the world.”</p>
<p>Some of the MIT faculty present study innovation, and offered remarks on that topic. Suzanne Berger, the Raphael Dorman and Helen Starbuck Professor of International Relations at MIT, recommended that Japan pursue growth opportunities in advanced manufacturing and connected fields. “There is such a tight connection between innovation and production,” Berger said.</p>
<p>Fiona Murray, the Bill Porter Professor of Entrepreneurship, associate dean for innovation in the MIT Sloan School of Management, and co-director of the MIT Innovation Initiative, added to Abe’s remarks about gender and entrepreneurship, noting that “male and female graduates are equally interested in taking this path.” She also observed that sound policies spark innovation, saying there are “important ways that governments have brought stakeholders together to effect change.”</p>
<p>Kenneth Oye, an MIT political scientist with expertise in both Japan and in technological regulation, suggested that forward-looking attempts to gauge the risks of new technologies, particularly in biomedical research, were in the “common interest” of the U.S. and Japan. In turn, he said, “realizing the consequences” of evolving technologies would make it easier for widely useful new tools to get adopted.&nbsp;</p>
<p>For his part, Ito suggested that new tools and techniques have made sophisticated, interdisciplinary innovation more plausible at smaller scales, and suggested that it is important for funders to give researchers room to make discoveries — since many of them are tangential to the original aims of a research project.</p>
<p>“Japan really can build the same kind of economy we see here in Massachusetts,” Ito suggested.</p>
<p><strong>Abe: Japan backs a “similar model” </strong></p>
<p>In response to these comments from the discussants, Abe said the interdisciplinary nature of the projects on display in the Media Lab created “quite an insightful moment” for him.</p>
<p>“We would like to see further fruition of a similar model,” Abe added, referring to Japanese efforts to build an innovation ecosystem in the city of Okinawa, drawing upon contributions from academia, industry, and government.</p>
<p>Abe also praised Samuels, noting that the political scientist’s scholarship had been “very effective” in “deepening relations between Japan and the United States.”</p>
<p>Abe’s trip to the U.S. coincides with the 70th anniversary of the end of World War II, in which Japan and the U.S. were adversaries. Abe will meet at the White House with President Barack Obama, and on Wednesday will deliver the first address by a Japanese leader to a joint session of Congress.</p>
<p>The Japanese prime minister will also discuss global economic and diplomatic issues while in Washington, including the Trans-Pacific Partnership, a potential trade agreement that has drawn some domestic opposition in both the U.S. and Japan. Abe will make stops in Los Angeles and San Francisco later in the week before returning to Japan.&nbsp;</p>
Japanese Prime Minister Shinzo Abe (left) with MIT President L. Rafael Reif
President L. Rafael Reif, SHASS, Sloan School of Management, Political science, Japan, Entrepreneurship, Global, Special events and guest speakers, International relations, Media Lab, Innovation and Entrepreneurship (I&E)

Visualizing the abstract: A life in computer science
http://newsoffice.mit.edu/2015/student-profile-walter-menendez-0424
Senior Walter Menendez applies dynamic visuals to conceptual datasets.
Thu, 23 Apr 2015 23:59:59 -0400
Julia Sklar | MIT News correspondent
http://newsoffice.mit.edu/2015/student-profile-walter-menendez-0424
<p>Programming has fascinated senior computer science and engineering major Walter Menendez since he was 10, when he first got his hands on the Mega Man franchise of video games. One game in particular featured sentient avatars that could traverse a physical representation of the Internet, a virtual world in which “life is the same as technology,” Menendez recalls. Since that early exposure to augmented reality, it’s been a field he’s chased — and one he finally got to work on as an undergraduate at MIT.</p>
<p>Before he could get to the good stuff, though, Menendez had to make the rounds through basic programming and Web development. During his freshman year, he worked alongside then-graduate student Anthony DeVincenzi on a project to make data more social and accessible. As part of the Tangible Media Group in the MIT Media Laboratory, Menendez helped build a Web app where people could upload geospatial information — such as the location and impact of an earthquake, drought, or wildfire — onto an interactive map.</p>
<p>Staying true to the group’s name, there is a tangible aspect to the app: Users can hold an iPad up to a globe and project the data directly onto it; spin the globe, and the data will replot itself accordingly. For Menendez, a self-described visual learner, the experience set in stone that he was headed in the right academic direction.</p>
<p>It also gave him his first experience solving a major technical problem on the job. The team’s biggest issue was aggregating data: If they loaded every available data point onto a map, the load would likely crash the browser. Menendez recalls standing in front of a whiteboard with DeVincenzi one day, scribbling out equations and graphs to figure out a way to compress the data into averaged groups, rather than single points, that would change size depending on whether a user zoomed in or out.</p>
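The whiteboard fix Menendez describes, replacing raw points with averaged groups whose granularity tracks the zoom level, resembles grid-based clustering. The Python sketch below is a guess at the idea, not the team’s actual code; the cell-size formula is an arbitrary illustrative choice.

```python
from collections import defaultdict

def aggregate_points(points, zoom):
    """Bucket (lat, lon) points into a grid whose cells shrink as the
    user zooms in, returning one averaged marker per occupied cell.

    Each marker carries a count, so the front end can scale marker size
    instead of plotting (and crashing on) every raw point.
    """
    cell = 10.0 / (2 ** zoom)          # coarser cells at low zoom (illustrative)
    buckets = defaultdict(list)
    for lat, lon in points:
        key = (int(lat // cell), int(lon // cell))
        buckets[key].append((lat, lon))
    markers = []
    for pts in buckets.values():
        n = len(pts)
        markers.append((sum(p[0] for p in pts) / n,   # mean latitude
                        sum(p[1] for p in pts) / n,   # mean longitude
                        n))                           # points represented
    return markers

pts = [(35.0, 139.0), (35.1, 139.1), (-6.2, 106.8)]
# Zoomed out, the two nearby points merge; zoomed in, they separate.
print(len(aggregate_points(pts, 0)), len(aggregate_points(pts, 8)))  # 2 3
```

The browser then renders at most one marker per visible cell regardless of how large the underlying dataset is.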
<p>In keeping with his visual roots, during his sophomore year Menendez moved on to a different group in the Media Lab, working on a project called Vision Blocks that involved making computer vision algorithms accessible to the general consumer. Vision Blocks hinges on using programming blocks of abstract code that can be dragged and dropped by a user to allow something like a webcam to monitor and react to changes in the external environment. The webcam could be programmed to send an email every time a person walked in front of it, for example.</p>
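Under the hood, a “react when someone walks by” block reduces to frame differencing. The toy Python version below treats frames as grids of grayscale values and leaves the email step as a stubbed-out callback; the article does not describe Vision Blocks’ internals, so this is only an illustration of the general technique.

```python
def scene_changed(prev_frame, frame, pixel_delta=30, fraction=0.05):
    """Return True if enough pixels changed between two grayscale frames
    (lists of equal-length rows of 0-255 values) to count as motion."""
    total = changed = 0
    for prev_row, row in zip(prev_frame, frame):
        for a, b in zip(prev_row, row):
            total += 1
            if abs(a - b) > pixel_delta:
                changed += 1
    return total > 0 and changed / total >= fraction

def on_frame(prev_frame, frame, action):
    # In Vision Blocks terms, `action` is whatever block the user wired to
    # the trigger, e.g. a (hypothetical) send-email step.
    if scene_changed(prev_frame, frame):
        action()

still = [[10] * 4 for _ in range(4)]
moved = [[10] * 4 for _ in range(3)] + [[200] * 4]
print(scene_changed(still, still), scene_changed(still, moved))  # False True
```

The drag-and-drop blocks hide exactly this kind of loop from the consumer, which is the point of the project.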
<p>But a trend in Menendez’s undergraduate experience has been what he calls “project-hopping”: Almost every time he joined a new lab, the project he was on would end, either because the graduate student he was working with would finish or the project would switch labs. If he were to do MIT over again, he says he would try harder to find a lab to really settle down in.</p>
<p>As a junior, he was finally able to work on one project for an entire year — and one that involved augmented realities, his first true love in computer science. LuminAR is what Menendez calls “a combination of a projector, a camera, and [an Xbox] Kinect sensor” that together form a touch-based interface responding to outside stimuli. LuminAR can convert any flat surface into a computer, so long as the surface can be projected onto, and Menendez was charged with bringing this technology into the realm of 3-D gamelike experiences.</p>
<p>Using browser graphics, he was able to render a virtual light source that can respond to real objects placed on its surface, generating a realistic shadow. He’s quick to whip out his phone and share a video of the process in action: It appears that a light hanging over a table is casting a shadow under Menendez’s hand, a shadow that moves when his hand moves. In reality, however, the light is coming from within the projected image, and adjusting in real time to his movements.</p>
<p>“I was literally fulfilling my childhood dream with this project,” he says.</p>
<p>For Menendez, a first-generation college student, his MIT experience has been somewhat surreal. He’s used photography, a hobby that’s in keeping with his visual inclinations, to balance the intensity of academics.</p>
<p>Data visualization, a departure from his original interests, has become his new niche; an experience the summer after his sophomore year is what first got him hooked. He landed a data science internship at Tumblr in New York, where he helped build an algorithm that the website could use to more precisely predict and calculate trending topics. As it stands, trends are calculated by counting how many times a certain tag is used within a 24-hour period. But by that metric, tags that go from appearing once or twice to several hundred times are considered trending, and on a website with a few million new posts a day, that didn’t seem right to Menendez.</p>
<p>He reconfigured a trend-calculating algorithm based on search traffic — counting what users are looking for before it has necessarily even been generated on the website.</p>
<p>“It’s kind of a chicken-and-egg situation,” Menendez says. “Content that people are looking for isn’t being generated as quickly as it’s being searched for.”</p>
<p>Calculating trending topics in this way would allow Tumblr to know what’s trending much more quickly than relying on tag counts.</p>
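One plausible reading of that approach, scoring a topic by how far its recent search volume outruns its historical baseline rather than by raw counts, can be sketched as follows. This is a hypothetical reconstruction, not Tumblr’s actual algorithm; the `floor` parameter and all the numbers are illustrative.

```python
def trend_score(recent_searches, baseline_daily_searches, floor=100.0):
    """Score a topic by how much recent search volume exceeds its baseline.

    The `floor` damps tiny baselines so a tag going from 2 searches to 300
    does not outrank a genuinely large spike: the failure mode of raw
    24-hour tag counts that the article points out.
    """
    return recent_searches / max(baseline_daily_searches, floor)

def trending(search_log, baselines, top_n=3):
    """Rank topics in `search_log` (topic -> recent search count) by score."""
    scores = {topic: trend_score(n, baselines.get(topic, 0.0))
              for topic, n in search_log.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

log = {"eclipse": 50_000, "obscure-tag": 300, "cats": 120_000}
base = {"eclipse": 2_000, "obscure-tag": 2, "cats": 100_000}
print(trending(log, base))  # ['eclipse', 'obscure-tag', 'cats']
```

Because searches precede posts, a ranking like this surfaces a trend before the tag counts catch up.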
<p>“Rarely do undergrads get exposed to the experience of working in industry, as opposed to lab research,” he says. “For me that experience was what really got me excited about data science and data engineering as a field."</p>
<p>Next year, Menendez will return to New York, working as a data engineer for Buzzfeed. His job will center around building Web-based infrastructure and algorithms to assess how people use the site — in particular, analyzing which visual aspects succeed, based on metrics like their color, content, and motion.</p>
<p>“It’s data, it’s photography, it’s visualization. It’s all there,” Menendez says of his forthcoming job. “Things are looking like they’re going in the right direction, and I didn’t think that would happen any time soon.”&nbsp;&nbsp;</p>
Walter Menendez
Electrical Engineering & Computer Science (eecs), Students, Undergraduate, Profile, Media Lab, School of Engineering, School of Architecture + Planning

MIT Center for Art, Science and Technology receives $1.5 million grant from the Andrew W. Mellon Foundation
http://newsoffice.mit.edu/2015/center-art-science-technology-receives-andrew-mellon-foundation-grant-0448
Combined grants provide eight years of funding, among the largest gifts received by the arts at MIT.
Wed, 22 Apr 2015 17:33:01 -0400
Center for Art, Science and Technology
http://newsoffice.mit.edu/2015/center-art-science-technology-receives-andrew-mellon-foundation-grant-0448
<p>The MIT Center for Art, Science and Technology (CAST) has received $1.5 million from the Andrew W. Mellon Foundation in support of the center’s role as a catalyst for multidisciplinary creative experimentation and integration of the arts across all areas of MIT. The recent grant brings the Mellon Foundation’s total support for CAST to $3 million, among the largest gifts received by the Arts at MIT.</p>
<p>Philip S. Khoury, associate provost, expressed gratitude for the Mellon Foundation’s ongoing support and recognition: “The Mellon Foundation has an unparalleled role in funding pioneering programs in the arts and humanities, and this gift is a wonderful affirmation of the role of the Arts at MIT, CAST’s mission, and of the Institute’s distinctive arts heritage. We are enormously grateful for the Mellon’s ongoing support, which will enable us to expand the vital work of the Center.”</p>
<p>CAST promotes research, teaching, and programming at the intersections of art, science, and engineering. The center brings outstanding artists to campus as co-creators with faculty and students and sponsors a biennial, international symposium focused upon interrelated, mutually informing modes of exploration, knowledge, and discovery in various domains of the arts and sciences. Since its inception in 2012, CAST has provided grants for more than 20 artist residencies and collaborative projects with MIT faculty and students, 12 cross-disciplinary courses and workshops, two concert series, and numerous multimedia projects, lectures, and symposia. &nbsp;</p>
<p>CAST builds upon MIT’s 50 years of imaginative, forward-thinking approaches to integrating the arts into a research institution renowned for science and engineering. György Kepes established the Center for Advanced Visual Studies (CAVS) in 1967, a highly influential program that first brought artists, scientists, and engineers together in a research environment. CAVS pioneered the practice of “art on a civic scale,” which is today a cornerstone of the Program in Art, Culture and Technology (ACT) in the School of Architecture and Planning. Jerome B. Wiesner, the 13th president of MIT, fostered a multimedia program in the arts at MIT, firmly grounded in teaching and research at the Institute. He established the Council for the Arts at MIT in 1974, one of the first organizations of its kind in a U.S. university, and his support led to the establishment of the List Visual Arts Center and the MIT Media Laboratory. &nbsp;</p>
<p>Evan Ziporyn, the Kenan Sahin Distinguished Professor of Music and faculty director of the center, says, “CAST has taken the creative culture of the Institute as a whole and provided it with a solid and supportive framework, bolstering the implicit connection between the rigors of artistic practice and those of the laboratory and design studio.” &nbsp;</p>
<p><strong>CAST programs</strong></p>
<p>MIT has long been home to a broad range of artistic activities and is particularly known for its strong music and performing arts programs. Its faculty include Pulitzer Prize winners in music and fiction, and world-class performers and directors. Nearly half of all science and engineering undergraduate students enroll in music and theater classes each academic year. The campus is a hub of contemporary musical experimentation and innovation, where new work is commissioned, composed, and presented throughout the year.</p>
<p>MIT Sounding, a CAST performance series launched this year, includes new work commissioned by MIT and has already reinvigorated Boston’s musical landscape. Described as “welcome and ambitious” by <em>The Boston Globe</em>’s Jeremy Eichler, MIT Sounding included performances this season by legendary composer Alvin Lucier, Grammy-winners Roomful of Teeth, sound artist Arnold Dreyblatt, as well as several new works commissioned by MIT. The season concluded with a celebratory concert in honor of composer Terry Riley on April 18, which featured performances by Riley himself and an international cast of supporters, including 20 saxophonists in the world premiere of a re-imagined version of Riley's classic "Poppy Nogood."</p>
<p>The 2015-16 season will preview a new opera from composer Keeril Makan and director Jay Scheib, based on Ingmar Bergman’s film "Persona," as well as performances by Maya Beiser, the Flux Quartet, and Pamela Z.</p>
<p>In addition to presenting new work, CAST’s Visiting Artists Program brings contemporary artists to campus to engage with groundbreaking research in the sciences and engineering, which frequently results in the development of new media and technologies for artistic expression. According to CAST executive director Leila Kinney, “The program is distinctive for its emphasis on the research and development phase of artistic work and the openness of scientists and engineers to artists’ speculative but hands-on way of thinking.”</p>
<p><strong>Collaborative research into materials and media</strong></p>
<p>Tomás Saraceno, visiting artist at CAST since 2012, has for some years investigated the complex morphology of spider webs and certain species’ ability to remain airborne to traverse great distances, which in turn has inspired large-scale installations, presented as speculative models for alternative ways of living. Fascinated by cosmologists’ reference to “black widow” pulsars in describing the behavior of certain stars in the wake of the big bang, Saraceno used a laser to scan a black widow spiderweb and has created three-dimensional installations from the data — for example, "14 Billions (Working Title)," 2010. In addition, he has used spiders as the “co-creators” of luminous silk sculptures, whose complex geometries are woven by multiple species in the controlled environment of his studio. Saraceno’s work has dovetailed with the research of MIT professor and Department of Civil and Environmental Engineering head Markus Buehler, who studies the molecular structure of proteins in spider silk and the forces in spider webs in search of synthetic, bio-inspired building materials (see “<a href="http://arts.mit.edu/events-visit/cast-symposium/#pre-conference-events" target="_blank">Reverberations: Spiders and Musical Webs</a>”). Saraceno will discuss the vision, arachnology, structural ingenuity, and collaborative research that inform the installations created with spider webs now <a href="http://www.tanyabonakdargallery.com/exhibitions/tomas-saraceno_4/selected" target="_blank">on exhibit in New York</a> at MIT’s “<a href="http://activemattersummit.com" target="_blank">Active Matter Summit</a>” April 24-25.</p>
<p>For a recent <a href="http://artforum.com/picks/id=50694" target="_blank">exhibition</a> at The Kitchen in New York City, artist Anicka Yi worked with Tal Danino, a postdoctoral fellow at the MIT Laboratory for Multiscale Regenerative Technologies, to develop strange and pungent works cultured into a “collective bacterium.” Yi and Danino, a recent TED Fellow, created a scent from biological samples of 100 women to explore ideas related to paranoia surrounding contagion and hygiene, fear of feminism, and the power of female networks. “[This residency] really opened the gates for me,” Yi said in an <a href="http://www.artspace.com/magazine/interviews_features/scent-of-100-women-anicka-yis-viral-feminism" target="_blank">interview in </a><em><a href="http://www.artspace.com/magazine/interviews_features/scent-of-100-women-anicka-yis-viral-feminism" target="_blank">ArtSpace</a>,</em> “to have the opportunity to develop relationships with some of the top researchers and scientists. And then the biology started to become more tangible.” Yi returns to MIT for <a href="http://listart.mit.edu/exhibitions/anicka-yi-6070430k-digital-spit" target="_blank">an exhibition</a> at the List Visual Arts Center opening May 22.</p>
<p>Known for her Emmy Award-winning multimedia project, "Highrise," about life in residential skyscrapers throughout the world, documentarian Katerina Cizek recently completed a two-year residency sponsored by CAST at the MIT Open Documentary Lab. The latest phase of the project, "Highrise: Digital Citizenship," examines how new technologies and modes of communication shape residents’ personal and political lives. She will premiere the final chapter of the acclaimed digital documentary project as a multimedia participatory event, "<a href="http://boxoffice.hotdocs.ca/WebSales/pages/info.aspx?evtinfo=39318~446634ba-e848-4237-9b3c-72aceddb5263&amp;epguid=b314c44a-eed5-4434-9c2c-cc86c0bf61ee&amp;" target="_blank">Highrise: Universe Within, Live</a>," at HotDocs, the Canadian International Documentary Festival, and will launch this pioneering project as a Web documentary later this year. “I am so excited for this opportunity and have so much gratitude to MIT that it exists,” <a href="http://www.youtube.com/watch?v=owL2hQGsS7E&amp;list=PLpj5xev3Zm5xyiXRRQ0MyDkggGQWesOdP&amp;index=7" target="_blank">Cizek said</a>.</p>
<p>CAST supports cross-disciplinary curricular initiatives that integrate the arts into the core curriculum and create new opportunities for students to work between and across various fields of inquiry, such as 4.110J / MAS.330J / MAS.650J (Design Across Scales and Disciplines), taught by J. Meejin Yoon, professor and head of the Department of Architecture, and Neri Oxman, the Sony Corporation Career Development Professor of Media Arts and Sciences.</p>
<p>This semester, Skylar Tibbits, research scientist and director of the Self-Assembly Lab, is teaching a new CAST-sponsored studio that fuses materials science, art, and design with the emergent field of self-assembling and programmable materials. Using the programming language Processing, students created “generative” drawings based on the structural properties of various material precedents, then explored those principles through physical embodiments in materials as divergent as glass, foam, and candy. In a final exercise, students will work in teams to create large-scale installations based on their discoveries, developing design strategies informed by these new technologies for fabrication.</p>
<p>A two-day symposium held in conjunction with this studio will bring together experts from art, design, materials science, engineering, and industry. The “Active Matter Summit” will showcase and help define the emerging field of programmable materials and build a strong community that can collectively explore challenges, applications, and future scenarios in this domain. As Tibbits <a href="http://www.wired.com/2014/11/skylar-tibbits/" target="_blank">has observed</a>, “We can listen to materials and use them as a programmable material. We can program biology.” &nbsp;</p>
<p>Neri Oxman, the MIT Media Lab’s representative at the Active Matter Summit, works at the intersection of computational design, additive manufacturing, and synthetic biology. “We live in a special time in history — a rare time,” says Oxman. She notes that the confluence of these fields is giving designers access to new tools and, along with them, new ways of seeing and making. At the recent TED Conference in Vancouver, Oxman revealed a first-of-its-kind photosynthetic wearable 3-D printed with fluidic channels infused with living matter. <a href="http://blog.ted.com/creative-ignition-a-recap-of-the-fiery-talks-in-session-10-of-ted2015/" target="_blank">She suggests</a>: “Think of it not as evolution by natural selection but evolution by design.”</p>
<p>CAST’s 2016 symposium, “Material Worlds,” will explore these developments in materials systems and design — including self-assembly, programmable matter, transformable structures, and design with biological materials — and what cultural historians call a “new materialism.” Each biennial symposium will build upon an area of expertise and significant research at MIT by introducing insights from leading artists and humanities scholars into the dialogue, as in the successful 2014 symposium “Seeing / Sounding / Sensing,” which convened artists, scientists, and humanists for a dialogue about the arts and cognitive sciences.</p>
<p>At MIT, artists have embraced the challenge of inventing new methods, media, and technologies for artistic production alongside the goal of creating the most expressive artifacts, performances, and buildings. Operating at the vanguard of research into new artistic materials and methods, CAST is poised to advance the Institute’s leadership in innovation. These explorations in turn will determine the artistic and performative languages of the 21st century.</p>
Arts, Grants, MIT Center for Art, Science & Technology (CAST), Education, teaching, academics, SHASS, School of Architecture + Planning, Media LabThumbnail track padhttp://newsoffice.mit.edu/2015/wearable-thumbnail-sensor-controls-digital-devices-0417
Unobtrusive wearable sensor could operate digital devices or augment other device interfaces.Fri, 17 Apr 2015 00:00:00 -0400Larry Hardesty | MIT News Officehttp://newsoffice.mit.edu/2015/wearable-thumbnail-sensor-controls-digital-devices-0417<p>Researchers at the MIT Media Laboratory are developing a new wearable device that turns the user’s thumbnail into a miniature wireless track pad.</p>
<p>They envision that the technology could let users control wireless devices when their hands are full — answering the phone while cooking, for instance. It could also augment other interfaces, allowing someone texting on a cellphone, say, to toggle between symbol sets without interrupting his or her typing. Finally, it could enable subtle communication in circumstances that require it, such as sending a quick text to a child while attending an important meeting.</p>
<p>The researchers describe a prototype of the device, called NailO, in a paper they’re presenting next week at the Association for Computing Machinery’s Computer-Human Interaction conference in Seoul, South Korea.</p>
<p>According to Cindy Hsin-Liu Kao, an MIT graduate student in media arts and sciences and one of the new paper’s lead authors, the device was inspired by the colorful stickers that some women apply to their nails. “It’s a cosmetic product, popular in Asian countries,” says Kao, who is Taiwanese. “When I came here, I was looking for them, but I couldn’t find them, so I’d have my family mail them to me.”</p>
<p>Indeed, the researchers envision that a commercial version of their device would have a detachable membrane on its surface, so that users could coordinate surface patterns with their outfits. To that end, they used capacitive sensing — the same kind of sensing the iPhone’s touch screen relies on — to register touch, since it can tolerate a thin, nonactive layer between the user’s finger and the underlying sensors.</p>
<div class="cms-placeholder-content-video"></div>
<p><strong>Instant access</strong></p>
<p>As the site for a wearable input device, however, the thumbnail has other advantages: It’s a hard surface with no nerve endings, so a device affixed to it wouldn’t impair movement or cause discomfort. And it’s easily accessed by the other fingers — even when the user is holding something in his or her hand.</p>
<p>“It’s very unobtrusive,” Kao explains. “When I put this on, it becomes part of my body. I have the power to take it off, so it still gives you control over it. But it allows this very close connection to your body.”</p>
<p>To build their prototype, the researchers needed to find a way to pack capacitive sensors, a battery, and three separate chips — a microcontroller, a Bluetooth radio chip, and a capacitive-sensing chip — into a space no larger than a thumbnail. “The hardest part was probably the antenna design,” says Artem Dementyev, a graduate student in media arts and sciences and the paper’s other lead author. “You have to put the antenna far enough away from the chips so that it doesn’t interfere with them.”</p>
<p>Kao and Dementyev are joined on the paper by their advisors, principal research scientist Chris Schmandt and Joe Paradiso, an associate professor of media arts and sciences. Dementyev and Paradiso focused on the circuit design, while Kao and Schmandt concentrated on the software that interprets the signal from the capacitive sensors, filters out the noise, and translates it into movements on screen.</p>
<p>For their initial prototype, the researchers built their sensors by printing copper electrodes on sheets of flexible polyester, which allowed them to experiment with a range of different electrode layouts. But in ongoing experiments, they’re using off-the-shelf sheets of electrodes like those found in some track pads.</p>
<p><strong>Slimming down</strong></p>
<p>They’ve also been in discussion with battery manufacturers — traveling to China to meet with several of them — and have identified a technology that they think could yield a battery that fits in the space of a thumbnail, but is only half a millimeter thick. A special-purpose chip that combines the functions of the microcontroller, radio, and capacitive sensor would further save space.</p>
<p>At such small scales, however, energy efficiency is at a premium, so the device would have to be deactivated when not actually in use. In the new paper, the researchers also report the results of a usability study that compared different techniques for turning it off and on. They found that requiring surface contact with the operator’s finger for just two or three seconds was enough to guard against inadvertent activation and deactivation.</p>
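The hold-to-toggle scheme the study describes can be pictured as a small state machine: only sustained contact crosses the threshold, so accidental brushes never switch the device. The Python sketch below is purely illustrative — it is not NailO's firmware, and the `DwellToggle` class name, the 2-second threshold constant, and the sampling interface are assumptions for the example.

```python
# Illustrative dwell-time activation sketch (NOT NailO's actual firmware).
# A continuous touch must last `hold_seconds` before the active state
# toggles; brief accidental contact never reaches the threshold.

HOLD_SECONDS = 2.0  # sustained contact required to toggle (assumed value)


class DwellToggle:
    """Toggle an 'active' flag only after sustained touch contact."""

    def __init__(self, hold_seconds=HOLD_SECONDS):
        self.hold_seconds = hold_seconds
        self.active = False
        self._contact_start = None  # timestamp when current contact began
        self._fired = False         # has this contact already toggled?

    def update(self, touching, now):
        """Feed one sensor sample: `touching` (bool) at time `now` (seconds).

        Returns the current active state after processing the sample.
        """
        if touching:
            if self._contact_start is None:
                self._contact_start = now  # contact just began
            held = now - self._contact_start
            if held >= self.hold_seconds and not self._fired:
                self.active = not self.active  # toggle once per contact
                self._fired = True
        else:
            # Contact ended: reset so the next touch is timed from scratch.
            self._contact_start = None
            self._fired = False
        return self.active
```

Under this sketch, a 0.3-second brush leaves the device untouched, while holding a finger on the surface for the full threshold flips it on (and a later sustained hold flips it back off).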
<p>“Keyboards and mice — still — are not going away anytime soon,” says Steve Hodges, who leads the Sensors and Devices group at Microsoft Research in Cambridge, England. “But more and more that’s being complemented by use of our devices and access to our data while we’re on the move. I’ve got desktop, I’ve got a mobile phone, but that’s still not enough. Different ways of displaying and controlling devices while we’re on the go are, I believe, going to be increasingly important.”</p>
<p>“Is it the case that we’ll all be walking around with digital fingernails in five years’ time?” Hodges asks. “Maybe it is. Most likely, we’ll have a little ecosystem of these input devices. Some will be audio based, which is completely hands free. But there are a lot of cases where that’s not going to be appropriate. NailO is interesting because it’s thinking about much more subtle interactions, where gestures or speech input are socially awkward.”</p>
A new wearable device, NailO, turns the user’s thumbnail into a miniature wireless track pad. Here, it works as an X-Y coordinate touch pad for a smartphone. Research, School of Architecture + Planning, Media Lab, inventions and innovationsBrian Forde joins the MIT Media Lab as director of digital currencyhttp://newsoffice.mit.edu/2015/brian-forde-media-lab-director-digital-currency-0415
Former White House senior advisor for mobile and data innovation to coordinate a broad research initiative focused on digital currencies.Wed, 15 Apr 2015 13:00:00 -0400http://newsoffice.mit.edu/2015/brian-forde-media-lab-director-digital-currency-0415<p>Brian Forde, former White House senior advisor for mobile and data innovation, has joined the MIT Media Lab as director of digital currency. In this newly created position, Forde will work with researchers across the Institute and leading experts at other universities around the world in a new initiative to address some of the most critical challenges to creating a safe, stable, and secure digital currency.</p>
<p>“We are fortunate to have Brian join the Media Lab to help organize an important research agenda to get cryptocurrencies right,” Media Lab Director Joi Ito says. “Brian’s experience mainstreaming emerging technologies from the rural mountains of Nicaragua to the White House will be invaluable as he tackles the challenges of digital currency — one of the most promising emerging technologies for the next 10 years.”</p>
<p>Forde has spent more than a decade at the nexus of technology and public policy. At the White House he was responsible for determining how the Obama administration would leverage open data and emerging technologies to address the president's national priorities. In this role, his work included launching initiatives in climate change, natural disasters, and open data, and leading the revitalization of Detroit. He was also key to formulating the White House Tech Inclusion and TechHire initiatives, bringing together leaders from the technology community, large corporations, and advocacy groups to support the hiring and training of more women and minorities in technology. In recognition of this work, Forde was named a Young Global Leader by the World Economic Forum.</p>
<p>"While at the White House, Brian led extraordinary initiatives to leverage the power of tech and innovation to make the future of America ever brighter," says Todd Park, White House advisor for technology. "From helping drive efforts to aid the revitalization of Detroit, to leveraging emerging technologies to support survivors during Hurricane Sandy, to breaking down barriers to a more diverse tech sector, the impact of Brian's work will continue to be felt by Americans for years to come."</p>
<p>Prior to joining the Obama administration, Forde brought advances in technology to Nicaragua, first as a Peace Corps volunteer and then as the co-founder and CEO of Llamadas, S.A., one of the largest Internet phone service providers in that country.</p>
<p>“As a technologist, there’s no more exciting place to work than the MIT Media Lab,” Forde says. “The innovations that come out of the Media Lab have made a truly global impact. I look forward to working with the faculty and students and collaborating with developers, academics, entrepreneurs, governments, and nonprofits to help us get closer to a more robust and viable digital currency that could have tremendous benefits around the world.”</p>
<p>In launching this digital currency initiative, the Media Lab will work closely with faculty members, researchers,&nbsp;and students across the campus. More information is available at:&nbsp;<a href="https://medium.com/@medialab/launching-a-digital-currency-initiative-238fc678aba2">https://medium.com/@medialab/launching-a-digital-currency-initiative-238fc678aba2</a></p>
Brian Forde, former White House senior advisor for mobile and data innovation, joins the MIT Media Lab as director of digital currency. Media Lab, School of Architecture + Planning, AdministrationAt TED 2015, MIT visionaries reframe the futurehttp://newsoffice.mit.edu/2015/ted-2015-mit-thinkers-reframe-future-0410
At annual conference, creators and thinkers with MIT ties offered bold visions of the future.Fri, 10 Apr 2015 16:05:01 -0400Stephanie Eich | MIT Spectrumhttp://newsoffice.mit.edu/2015/ted-2015-mit-thinkers-reframe-future-0410<p>Titled “Truth and Dare,” the <a href="http://conferences.ted.com/TED2015/" target="_blank">2015 TED Conference</a> — held last month in Vancouver — took a challenging look at familiar beliefs and assumptions. Among the 70-plus speakers offering their views of the future were a number of MIT faces.&nbsp;</p>
<p><strong>Abe Davis</strong><br />
A <a href="http://people.csail.mit.edu/abedavis/" target="_blank">PhD student</a>&nbsp;studying computer science, Davis has collaborated with colleagues to create a “visual microphone.” On stage,&nbsp;<a href="http://blog.ted.com/abe-davis-incredible-tech-demo-at-ted2015/" target="_blank">Davis demonstrated his technology</a>: a high-speed camera films a potato chip bag from behind a piece of soundproof glass while a person near the bag speaks. When the silent video is later processed, the voice of someone reciting “Mary Had a Little Lamb” can be recovered from the bag’s minute vibrations. Davis also shared a preview of his latest project, a video technique that analyzes how an object moves and produces a clickable image that moves like the real object.</p>
<p><strong>Neri Oxman PhD '10</strong><br />
As director of the MIT Mediated Matter group, <a href="http://www.media.mit.edu/people/neri" target="_blank">Oxman explores</a> “material ecology” — the practice of integrating design principles inspired by nature into digital fabrication. In her TED presentation, Oxman&nbsp;<a href="http://blog.ted.com/creative-ignition-a-recap-of-the-fiery-talks-in-session-10-of-ted2015/" target="_blank">debuted a wearable digestive system</a>&nbsp;that could be worn by future inhabitants of Jupiter’s moons. Powered by photosynthesis, the system is designed to digest matter, absorb nutrients, and expel waste. “Think of it not as evolution by natural selection but&nbsp;<a href="http://spectrum.mit.edu/articles/natural-design/" target="_blank">evolution by natural design</a>,” suggested Oxman.</p>
<p><strong>Laura Schulz</strong><br />
As the principal investigator at&nbsp;MIT’s <a href="http://eccl.mit.edu/" target="_blank">Early Childhood Cognition Lab</a>, Schulz studies how children learn.&nbsp;On the TED stage, <a href="http://blog.ted.com/how-children-learn-so-much-from-so-little-so-quickly-laura-schulz-at-ted2015/" target="_blank">Schulz shared</a> what she has learned about human cognition during a decade of research “trying to figure out how children learn so much from so little so quickly.” In video of her laboratory experiments, she demonstrated how babies can make correct inferences about how certain objects will behave.</p>
<p><strong>Sara Seager</strong><br />
The&nbsp;<a href="http://seagerexoplanets.mit.edu/" target="_blank">MIT astrophysicist</a>, who holds the Class of 1941 Professorship, asked — and answered — the question, “<a href="http://blog.ted.com/sara-seager-on-the-hunt-for-exoplanets-at-ted2015/" target="_blank">Is there life out there</a>?” Seager’s research led to the discovery of the first exoplanet, or planet outside of our solar system, with an atmosphere. She hopes to find one capable of sustaining human life. “I’m devoting the rest of my life to&nbsp;<a href="http://spectrum.mit.edu/articles/searching-for-life/" target="_blank">finding other Earths</a>.”</p>
<p><strong>James Simons ’58</strong><br />
In a session focused on&nbsp;<a href="http://blog.ted.com/machines-that-learn-a-recap-of-session-3-at-ted2015/" target="_blank">machines that learn</a>, Simons, a mathematician and philanthropist, shared stories from a career guided by his passion for discovery. Following a stint as a code breaker at the National Security Agency, Simons went on to develop a novel approach to understanding financial markets, based on complex formulas and quantitative analytics. Today,&nbsp;Simons is an <a href="http://spectrum.mit.edu/continuum/a-passion-for-discovery-a-desire-to-give-back/" target="_blank">enthusiastic advocate</a>&nbsp;for math and science education; along with his wife, Marilyn, he runs the&nbsp;<a href="http://www.simonsfoundation.org/" target="_blank">Simons Foundation</a>, whose mission is to advance research in the basic sciences.</p>
<p><strong>Baratunde Thurston</strong><br />
Closing the 2015 conference, comedian and author&nbsp;<a href="http://baratunde.com/#baratunde" target="_blank">Baratunde Thurston</a> offered his own irreverent take on the intense and informative sessions that took place during the week.</p>
<p>Thurston is currently a&nbsp;<a href="http://directorsfellows.media.mit.edu/fellow-profiles/baratunde-thurston/" target="_blank">director’s fellow at the MIT Media Laboratory</a>. He is also cofounder of Cultivated Wit and the author of <em>The New York Times</em> bestseller, “How To Be Black.”</p>
Neri OxmanSpecial events and guest speakers, Students, Faculty, Alumni/aeTRANSFORM wins A&#039;Design Platinumhttp://newsoffice.mit.edu/2015/transform-wins-adesign-platinum-0403
Fri, 03 Apr 2015 12:13:01 -0400MIT Media Labhttp://newsoffice.mit.edu/2015/transform-wins-adesign-platinum-0403<p>TRANSFORM, a shape-changing furniture display developed by the Media Laboratory's Tangible Media group, won a Platinum A’Design Award in the arts, crafts, and ready-made design category. Winners were selected by a panel of 70 international jurors, comprising designers, members of the press, and academics.</p>
<p>TRANSFORM fuses technology and design to transform a piece of still furniture into a dynamic machine driven by a stream of data and energy. It aims to inspire viewers with both unexpected transformations and the aesthetics of the complex machine in motion. It was first exhibited in the Lexus Design Amazing exhibit in Milan in April 2014.</p>
<p>The TRANSFORM team is led by Professor Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences and head of the Tangible Media research group. Other contributors include Tangible Media group members Daniel Leithinger, Sean Follmer, Philipp Schoessler, Basheer Tome, Felix Heibeck, and alumnus Amit Zoran.</p>
<div class="cms-placeholder-content-video"></div>
TRANSFORM from the MIT Media LabMedia Lab, Awards, honors and fellowships, School of Architecture + Planning, Arts, Arts, Culture and TechnologyCerebral curiosityhttp://newsoffice.mit.edu/2015/student-profile-steven-keating-0401
Graduate student Steven Keating takes a problem-solving approach to his brain cancer.Wed, 01 Apr 2015 00:00:00 -0400Kevin Leonardi | Koch Institutehttp://newsoffice.mit.edu/2015/student-profile-steven-keating-0401<p>In 2007, Steven Keating had his brain scanned out of sheer curiosity.</p>
<p>Keating had joined a research study that included an MRI scan, and he asked that the scan’s raw data be returned to him. The scan revealed only a slight abnormality, near his brain’s smell center, which he was advised to have re-evaluated in a few years. A second scan, in 2010, showed no change, suggesting that the abnormality was most likely benign.</p>
<p>While the second scan provided reassurance, Keating’s knowledge of the abnormality — as a result of having access to the raw data from these scans — ultimately led to the detection of a baseball-sized tumor that was removed this past August.</p>
<p>Now a graduate student in the Department of Mechanical Engineering and based at the MIT Media Lab, Keating says that his curiosity saved his life — and that his experience with cancer has fueled a strong interest in advocating for open health data.</p>
<p><strong>Discovering a baseball-sized brain tumor</strong></p>
<p>Keating arrived at MIT in fall 2010 as the first student to join the Media Lab’s <a href="http://www.media.mit.edu/research/groups/mediated-matter">Mediated Matter Group</a>. Under his advisors — Neri Oxman, the group’s director and the Sony Corporation Career Development Associate Professor of Media Arts and Sciences, and David Wallace, a professor of mechanical engineering and engineering systems — Keating studies digital construction and biologically inspired design. He is pursuing a PhD in mechanical engineering with a minor in synthetic biology.</p>
<p>Last July, Keating noticed that he was experiencing a phantom vinegar smell for about 30 seconds every day. Knowing that his 2007 and 2010 research scans showed an abnormality near his smell center, he requested an MRI scan through MIT Medical. The scan revealed that the abnormality had grown into a tumor that needed to be removed as soon as possible.</p>
<p>Keating went to Brigham and Women’s Hospital (BWH) in Boston on Aug. 19 for surgery, accompanied and supported by his family and his girlfriend; Oxman; and Yoel Fink, a professor of materials science and director of MIT’s Research Laboratory of Electronics. The surgery was performed by neurosurgeon E.&nbsp;Antonio Chiocca, and Keating, though sedated, was kept awake while the tumor was removed. This was so doctors could ask him questions while they were probing and cutting brain tissue to ensure they were not damaging the brain’s language center. The 10-hour surgery was captured on video, which, at Keating’s request, was shared with him.</p>
<p>His recovery was quick: Keating was out of the hospital after two days, and he was back on the MIT campus within a week.</p>
<p>A tissue biopsy confirmed that his tumor was an IDH1-mutant malignant astrocytoma. In this type of brain cancer, which was first identified by researchers in 2009, the mutated IDH enzyme leads to the production of 2HG, a novel, oncogenic metabolite. Through the <a href="http://ki.mit.edu/approach/bridge">Bridge Project</a> — a collaboration between MIT’s Koch Institute for Integrative Cancer Research and the Dana-Farber/Harvard Cancer Center — a cross-institutional research team is exploring how to use 2HG as a biomarker to detect and monitor IDH-mutant cancers.</p>
<p>Ovidiu Andronesi, a radiologist at Massachusetts General Hospital (MGH) and a collaborator on this research, applied this monitoring technology via MRI spectroscopy imaging to scan Keating’s brain before and after his surgery. These scans show the reduction of 2HG after doctors removed the tumor; the scans were also shared with Keating, at his request.</p>
<p>“As a cancer scientist, hearing Steven talk about 2HG spectroscopy screening as part of his clinical care is remarkable,” says Matthew Vander Heiden, the Eisen and Chang Career Development Associate Professor of Biology and a member of the Koch Institute, who is a leader on this research project. “IDH’s role in these cancers was only discovered six years ago, and it is incredible, as well as humbling, that Steven could benefit from some of the basic science done in this short time&nbsp;period since IDH mutations were&nbsp;recognized.”&nbsp;</p>
<p><strong>Diving deeper into the data</strong></p>
<p>Since the surgery, Keating’s curiosity has only become more acute. This has been fueled, in large part, by his close connection with his doctors and the data they were able to provide.</p>
<p>“Because of that connection, I had new options,” he says. “I asked for the surgery to be videotaped, for my genome to be sequenced, and for the raw data from my scans.”</p>
<p>With this abundance of data, Keating is able to apply his own research interests to develop an intimate understanding of his brain and his tumor. In Oxman’s Mediated Matter Group, Keating’s <a href="http://matter.media.mit.edu/people/bio/steven-keating">research</a> explores how to leverage 3-D printing and other fabrication methods to print everything from living organisms to entire buildings. With the resources available to him at the Media Lab, he and colleagues James Weaver and Ahmed Hosny at Harvard University’s Wyss Institute for Biologically Inspired Engineering have pored over his health data and created digital and 3-D-printed models of his tumor, brain, and surgically repaired skull.</p>
<p>To share his experiences as a patient-scientist, Keating gave a <a href="https://www.youtube.com/watch?v=-L-WFukOARU">talk</a> at the Koch Institute on Oct. 22 as part of a public event on IDH-mutant cancers. He returned on Nov. 21 to share his <a href="https://www.youtube.com/watch?v=-L-WFukOARU&amp;list=PLio40nSLOEvoTxmvGRgAEA4moLdVJV48C&amp;index=3">story</a> with the Koch Institute’s cancer researchers.</p>
<div class="cms-placeholder-content-video"></div>
<p>“Steven’s story is so inspiring in part because he is approaching his own cancer as a scientific problem, and he is actively seeking the data he needs to solve that problem,” says Tyler Jacks, director of the Koch Institute and the David H. Koch Professor in MIT’s Department of Biology. “After hearing his story, I think all of us were motivated to get back into the lab.”</p>
<p>“Steven’s insatiable curiosity is what science is all about,” adds Nancy Hopkins, a professor emerita of biology, and member of the Koch Institute, who attended both talks. “He addresses even his own cancer as if it were the latest fascinating experiment and as an opportunity to advance knowledge and help others.”</p>
<p><strong>Advocating for opening health data</strong></p>
<p>Given his up-close-and-personal experience with his health, Keating says he is now a strong believer in open sourcing and allowing patients to have easy access to their own health data. He says he was fortunate that his doctors were willing to share his data, but he did notice many small barriers along the way.</p>
<p>“My doctors are incredible for sharing my data and encouraging me to learn more from it,” Keating says. “However, the process raised some questions for me, as I received my data on 30 CDs, without easy tools to understand, learn, or share, and there was no genetic data included. Why CDs? Why limited access for patients to their own data? Can we have a simple, standardized share button at the hospital? Where is the Google Maps, Facebook, or Dropbox for health? It needs to be simple, understandable, and easy, as small barriers add up quickly.”</p>
<p>Keating says this cause has personal importance because having access to his health data not only led him to discover his tumor in the first place, but it also helped find the doctors and medical care he needed.</p>
<p>“Imagine having your whole medical record that you could not only share with doctors and scientists but also with friends and family, too,” he says. “Patients could get second opinions very easily, and doctors can follow what leaders in the field are doing.”</p>
<p>He says there are also huge mutual benefits when patients decide to share their health data with researchers, because it provides them with an actual case to study. The same is true when data is shared within patient communities, where people with similar conditions are able to connect with one another.</p>
<p>Critics of open-source health data largely point to privacy considerations. This is especially true with regard to patients’ genetic data, which inherently reveals information about their family members. Many also worry about patients making medical decisions based on their own interpretation, against the advice of doctors. Furthermore, people say doctors might second-guess every one of their decisions to the point where the standard of care would decrease.</p>
<p>While Keating recognizes and respects these concerns, he says that the landscape of health care is changing — mentioning the rise of wearable technologies that collect personal health data, such as smart watches, as an example.</p>
<p>“I’m a strong believer in privacy, but if a patient wants to share, they should be able to,” he says. “Your personal being is your personal property, and you should have the right to share that data if you want to.”</p>
<p>This is an area where Keating is leading by example. He has open-sourced his health data on his <a href="http://stevenkeating.info/">personal website</a>, where his MRI scans and tumor model are available for download, and he has been meeting with government and hospital officials and leaders in the open-source health data field. He also has been exploring how links can be made between hospitals and open patient data repositories, such as <a href="http://sagebase.org/">Sage Bionetworks</a>, the <a href="http://www.personalgenomes.org/">Personal Genome Project</a>, <a href="http://www.cancercommons.org/">Cancer Commons</a>, and <a href="http://www.patientslikeme.com/">Patients Like Me</a>.</p>
<p>As a result of his advocacy for open-health data, the White House invited Keating to President Barack Obama’s unveiling of the Precision Medicine Initiative in January. Obama's proposal calls for increased federal&nbsp;investment in patient-powered research that accounts for individual differences in&nbsp;genes, environments, and lifestyles.&nbsp;One of the initiative’s primary objectives is&nbsp;accelerating design and testing of tailored cancer treatments through the National Cancer Institute.</p>
<p>Having completed proton therapy at MGH with radiation oncologist Helen Shih, Keating is now undergoing chemotherapy at BWH. All the while, his spirits remain high.</p>
<p>In an email he sent his friends and family before his surgery, Keating described life as a “wild ride.” However, as wild as it can be, he says that being an MIT student armed with data and a sense of curiosity can make all the difference.</p>
<p>“The benefit of MIT is that we can know it’s a ride, but it’s a scary ride unless you have information to make it a curious problem,” he says. “And if it’s a curious problem, it becomes an exciting ride.”</p>
Graduate student Steven Keating holds a 3-D-printed copy of his cancerous astrocytoma brain tumor. It was printed by Keating with data from Brigham and Women’s Hospital.
Cancer, Graduate, postdoctoral, Koch Institute, Mechanical engineering, Media Lab, Students, Profile, Open access, Health care, 3-D printing, School of Engineering, School of Architecture + Planning

Wear your social network
http://newsoffice.mit.edu/2015/wear-your-social-network-0331
Media Lab students aim to take common interests offline to encourage real-life connections.
Tue, 31 Mar 2015 14:18:01 -0400
Nicole Morell | MIT Alumni Association
http://newsoffice.mit.edu/2015/wear-your-social-network-0331
<p>What if your likes and interests on social media were broadcast to the world offline? Would that make it easier for you to make real-world connections with people? That’s the idea behind <a href="http://fluid.media.mit.edu/social-textiles">Social Textiles, a wearable social network</a> created by Media Lab students Viirj Kan, Katsuya Fujii, Judith Amores, and Chang Long Zhu Jin — members of the Fluid Interfaces and Tangible Media groups.</p>
<p>This wearable network is made up of t-shirts that light up when wearers share a common interest. When people wearing Social Textiles are within 12 feet of one another, their shirts will give a quick buzz on the shoulder to alert them that someone with a common interest is near. When the wearers identify each other and make a connection — by physically touching their new connection’s shirt — the shirt will light up, revealing their shared interest.</p>
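The interaction the article describes — buzz when a match is within about 12 feet, light up on touch — can be sketched as a simple sensing loop. This is purely illustrative; all function and action names here are hypothetical, and the shirts’ actual firmware is not described in the article.

```python
# Illustrative sketch of the Social Textiles interaction flow.
# All names are hypothetical; the real implementation is not public.

PROXIMITY_FEET = 12  # shirts react when wearers are within ~12 feet

def step(distance_feet, touched, shared_interest):
    """Return the actions a shirt would take in one sensing cycle."""
    actions = []
    if shared_interest and distance_feet <= PROXIMITY_FEET:
        actions.append("buzz_shoulder")  # alert: someone with a common interest is near
        if touched:  # physically touching the other shirt confirms the connection
            actions.append(f"light_up:{shared_interest}")
    return actions
```

For example, two wearers 10 feet apart who share an interest would each get a shoulder buzz, and the shirts would light up only once one wearer touches the other's shirt.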
<p><img alt="Social Textiles wearable social network from students in the MIT Media Lab" src="/sites/mit.edu.newsoffice/files/Social-Textiles-high-five.gif" style="width: 550px; height: 281px;" /></p>
<p>The idea for Social Textiles came from a class assignment in MAS.834 (Tangible Interfaces). “We were told to make something intangible, tangible,” explains Viirj Kan, and this got the group thinking about social media. “Online is good at connecting us at a distance, but not connecting us when we’re close,” Kan says. “We wanted to change that.”</p>
<p>These shirts don’t store information from your profiles on established social networks; instead, they connect and light up around one or two common interests, such as a certain brand or a community you belong to, like a university. Kan explains, “If you were to buy your shirt through a certain blog, that blog would be your connection and interest. Or if you bought your shirt at the COOP, that’s your connection.”</p>
<div class="cms-placeholder-content-video"></div>
<p>For now, Social Textiles are still in the development stages and aren’t available for purchase, though Kan does believe the wearable network belongs on store shelves. “People are really excited about it. At some point it should go out into the world, but the next steps are to test it on users more,” she says.</p>
<p>Until then, the combined Media Lab group is getting plenty of attention. As media outlets learn of Social Textiles, the group has to balance interviews and class time — adding to the learning experience. “It’s kind of like another class,” laughs Kan.</p>
<p>Read more stories about MIT alumni and campus culture on <em><a href="http://slice.mit.edu/" target="_blank">Slice of MIT</a>.</em></p>
Social Textiles from students in the MIT Media Lab
Media Lab, Wearable sensors, Social media, Students, School of Architecture + Planning

Crowdsourced tool for depression
http://newsoffice.mit.edu/2015/crowdsourced-depression-tool-0330
Peer-to-peer application outperforms conventional self-help technique for easing depression, anxiety.
Mon, 30 Mar 2015 00:00:01 -0400
Larry Hardesty | MIT News Office
http://newsoffice.mit.edu/2015/crowdsourced-depression-tool-0330
<p>Researchers at MIT and Northwestern University have developed a new peer-to-peer networking tool that enables sufferers of anxiety and depression to build online support communities and practice therapeutic techniques.</p>
<p>In a study involving 166 subjects who had exhibited symptoms of depression, the researchers compared their tool with an established technique known as expressive writing. The new tool yielded better outcomes across the board, but it had particular advantages in two areas: One was in training subjects to use a therapeutic technique called cognitive reappraisal, and the other was in improving the mood of subjects with more severe symptoms.</p>
<p>“We really wanted to see two things,” says Rob Morris, who led the work as a PhD student in media arts and sciences at MIT. Since graduating in February, Morris has been commercializing the technology through Koko, a New York-based company he co-founded. “Could people get clinical benefits from it? That’s hypothesis one,” he says.</p>
<p>“Hypothesis two is, ‘Will people be engaged and use this regularly?’” Morris adds. “There’s a lot of great work in building web apps and mobile apps to provide psychotherapy without a therapist in the loop — it’s these self-guided programs. There’s almost a decade of research showing that these things can produce really profound improvements for people. The problem is that, once you release them out into the wild, people just don’t use them. The way we designed our platform was to really mimic some of the interaction paradigms that underlie very engaging social programs.”</p>
<p>On that score, too, the results of the study were encouraging. The average subject in the control group used the expressive-writing tool 10 times over the three weeks of the study, with each session lasting about three minutes. The average subject using the new tool logged in 21 times, with each session lasting about nine minutes.</p>
<p><strong>Buggy thinking</strong></p>
<p>Morris; his thesis advisor, Rosalind Picard, an MIT professor of media arts and sciences; and Stephen Schueller, a clinical psychologist at Northwestern, describe the study in a paper appearing this week in the <em>Journal of Medical Internet Research</em>.</p>
<p>Morris, who had majored in psychology as an undergrad at Princeton University, initially enrolled in a PhD program in psychology in California. But he concluded that a traditional psychology program wouldn’t grant him enough latitude in researching the therapeutic potential of information technology, a topic that quickly captured his interest. So he applied instead to do graduate work in Picard’s Affective Computing Group, which specifically investigates the intersection of computing technologies and human emotions.</p>
<p>“I was at MIT without an engineering degree and really trying to race to learn computer programming,” Morris recalls. He found himself spending a lot of time on a programmers’ question-and-answer site called Stack Overflow. “Whenever I had a bug or was stuck on something, I would go on there, and almost miraculously, this crowd of programmers would come and help me,” he says. “It was just this intuition that, just as we can get people on Stack Overflow to help us identify and fix bugs in code, perhaps we can harness a crowd to help us fix bugs in our thinking.”</p>
<p>People suffering from depression frequently exhibit what Morris describes as “maladaptive thought patterns”: You lose your job, and you conclude that you’ll never find another one; your roommate comes home and shuts herself up in her room, and you assume it’s because of something you’ve done.</p>
<p>Psychologists have sorted these thought patterns into categories. Predicting your future unemployability is an instance of “fortune-telling”; assuming you know your roommate’s motivations is “mind-reading.” Others include “overgeneralization,” “catastrophizing,” and “all-or-nothing thinking.”</p>
<p>Cognitive reappraisal involves, first, identifying maladaptive thought patterns and, second, trying to recast the events that precipitated them in a different light: The job you lost offered no room for promotion and wasn’t aligned with your interests, anyway; your roommate has been having trouble at work and may have just had a fight with a colleague.</p>
<p><strong>Strength in numbers</strong></p>
<p>A user of the new tool — which Morris calls Panoply — logs on and, in separate fields, records both a triggering event and his or her response to it. This much of the application was duplicated exactly for the expressive-writing tool used by the control group in the study.</p>
<p>With Panoply, however, members of the network then vote on the type of thought pattern represented by the poster’s reaction to the triggering event and suggest ways of reinterpreting it. As users demonstrate more and more familiarity with techniques of cognitive reappraisal, they graduate from describing their own experiences, to offering diagnoses of other people’s thought patterns, to suggesting reinterpretations.</p>
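The posting-and-feedback loop described above can be sketched as a small data model: a post pairs a triggering event with the poster’s reaction, peers vote on the thought pattern it represents, and suggest reappraisals. This is a minimal sketch under assumed names; Panoply’s actual implementation is not described in the article.

```python
# Hypothetical sketch of Panoply's post/feedback loop; names are illustrative.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Post:
    trigger: str                    # the triggering event
    reaction: str                   # the poster's interpretation of it
    pattern_votes: Counter = field(default_factory=Counter)
    reappraisals: list = field(default_factory=list)

    def vote(self, pattern):        # peers label the maladaptive thought pattern
        self.pattern_votes[pattern] += 1

    def likely_pattern(self):       # consensus label from the crowd
        return self.pattern_votes.most_common(1)[0][0]

post = Post("Lost my job", "I'll never find another one")
post.vote("fortune-telling")
post.vote("fortune-telling")
post.vote("catastrophizing")
post.reappraisals.append("That job had no room for promotion anyway")
```

Here the crowd's majority vote labels the reaction "fortune-telling", matching the category described earlier in the article.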
<p>“We really wanted to see that people are utilizing this skill over and over again, not only in response to their own stressors but also as teachers to other people,” Morris says. “We can surmise that it’s a little easier to practice some of these psychotherapeutic skills for other people before turning them toward themselves. But we don’t have data supporting that.”</p>
<p>For their study, Morris, Picard, and Schueller recruited subjects who described themselves as under stress, something that correlates highly with depression. Volunteers were asked to complete three questionnaires. One is a depression measure that’s standard in the field. Another assesses perseverative thinking, and the third assesses skill at cognitive reappraisal. After three weeks using either Panoply or the expressive-writing tool, the subjects again completed the same three questionnaires.</p>
<p><strong>Network effects</strong></p>
<p>To simulate a large network of users — and ensure that Panoply users would receive replies even if they were posting in the middle of the night — Morris hired online workers through Amazon’s Mechanical Turk crowdsourcing application to supplement the comments made by study subjects. Each Mechanical Turk worker received a brief training in cognitive reappraisal, and about 1,000 contributed to the study.</p>
<p>“It took a lot of time to figure out how to teach people these skills and give them examples of what to do in a way that is easily understood in a handful of minutes,” Morris says. “Some of them wanted to sign up afterwards. They were like, ‘Wow, I never knew I had these bugs in my thinking, too.’”</p>
<p>“What I like about the crowdsourcing idea is that it’s sort of tackling two things in a nice way,” says James Gross, a professor of psychology at Stanford University, who has studied cognitive reappraisal. “One is that reappraisal, although powerful, can break down when you most need it. And so this is saying, ‘Hey, instead of relying on intrinsic regulation, let’s try extrinsic regulation, where we’re going to get some help from other people.’</p>
<p>“But the second thing is that when you’re depressed, you can withdraw from other people. So now you’ve got this double whammy, where you’ve got a high level of negative emotion, making it more difficult to reappraise, and you’re isolating yourself from other people, which means that you’re not going to be as likely to get extrinsic regulation. What they’ve done is nicely address both of these issues by saying, ‘Hey, we can help with reappraisal, even if you’re feeling a bit depressed, by helping you leverage outside input that you wouldn’t otherwise get. I think this is a promising approach.”</p>
Research, Mental health, Depression, Anxiety, Brain and cognitive sciences, Health, Health care, Health sciences and technology, Crowdsourcing, Media Lab, School of Architecture + Planning

Temple Grandin: Look at what people can do, not what they can’t
http://newsoffice.mit.edu/2015/temple-grandin-talk-0318
In talks at MIT, noted behavioral expert suggests encouraging skills of people with autism.
Wed, 18 Mar 2015 12:45:00 -0400
David L. Chandler | MIT News Office
http://newsoffice.mit.edu/2015/temple-grandin-talk-0318
<p>When she was just two, doctors advised Temple Grandin’s mother that her child would probably need to be institutionalized for life due to her autism. But her mother would have none of that, and instead focused on teaching her daughter basic social and life skills, even though she didn’t begin to speak until age four.</p>
<p>That approach worked with dramatic success. Grandin went on to graduate from college, earn a PhD, design new systems for animal handling that revolutionized the meat processing industry, earn a faculty position at Colorado State University, and become a leading spokesperson and author on dealing with autism. In public talks on Monday and Tuesday, she shared her insights at MIT’s Media Laboratory.</p>
<p>“I see too many kids who aren’t learning the basics,” said Grandin, referring to skills such as interacting with people socially and making things with one’s hands. She strongly recommended that children on the autism spectrum be encouraged, coached, and instructed in basic skills and simple tasks that build useful self-discipline — “Walking dogs for neighbors, doing simple chores, shaking hands, having a paper route.”</p>
<p>As they grow into adulthood, she said, it’s useful for them to pursue internships and other opportunities to try out different careers. “I thought I was going to be an experimental psychologist, I didn’t think I would be designing slaughterhouses,” she recalled.</p>
<p>But designing slaughterhouses has indeed been the basis of her career. Early on, as someone who had suffered intensely from anxiety attacks until those were controlled by low-dose, anti-anxiety medication, she noticed that the way cattle were traditionally led into slaughterhouses let the approaching animals see, hear, and smell the carnage just ahead. This often caused them to panic and thus made them more difficult to handle — which caused the release of fear and stress hormones into their bloodstream. The new, more humane systems she designed to replace these traditional designs are now used in about half of all U.S. and Canadian slaughterhouses. “I think that’s pretty good for someone who people thought was retarded,” she said.</p>
<p>As she has also described in her books, Grandin explained that she is a visual thinker. The key moment in her revolutionizing of cattle handling was when she crawled through the cattle chute herself, taking in the sights and sounds firsthand from the animal’s perspective — and seeing things that seemed obvious from that point of view, but had been overlooked by generations of designers of such facilities.</p>
<p>Time and again as people asked questions during her two appearances at MIT, she urged people to “be specific — I can’t deal with abstractions.” Before answering a vague question about how to help a child with autism, she wanted to know the details: How old is the child? What does he like to do? Does he talk? What are his interests?</p>
<p>Many people on the autism spectrum are fascinated by things that move, she said: cars, planes, trains. Such interests can become fixations that lead nowhere, she said, but they can also be channeled. If a child loves trains, parents or teachers could use trains as examples to teach math or other skills, or look at where the trains are going and study those places. “Broaden it out,” she said, but in a way that’s “still linked to that favorite thing.”</p>
<p>When she herself was in the third grade, she said, she had a fixation with drawing horses’ heads. Instead of trying to stop her, teachers encouraged her to broaden that interest and start drawing other things. That approach worked, and in the end those drawing skills became very useful as she began to design buildings.</p>
<p>When asked about the role of medications in treating autism, she said that “way too many medications are given out to little kids. On the other hand, careful, sensible use of medications” can be very effective. In her own case, she said that brain scans later in life revealed that her amygdala — a part of the brain that is responsible for fear reactions, among other things — is three times larger than normal, perhaps accounting for anxiety attacks that struck without warning through much of her early life, until they were controlled through medication.</p>
<p>To visualize what her life was like before the medications, she asked people to imagine that the building they were in — the Media Lab — contained 100 extremely venomous snakes, which mostly remained hidden but might suddenly appear in front of you without warning, at any moment. “That’s the way I was until I got antidepressants,” she said.</p>
<p>In offering guidance, she suggested, the key is not to be angry or overprotective, but to offer concrete instructions. As an example, she said when as a young child she lapped her ice cream off the plate at school, a teacher just picked up the plate and said, “You’re not a dog” — and the lesson was learned. Today, she said, “I see overprotection [of autistic children] on even the smallest tasks.”</p>
<p>She added that “people are always looking for magic breakthroughs. But it’s more like slogging on, one step at a time,” to master necessary skills. In general today, she said, “kids are not doing enough hands-on things.”</p>
<p>Grandin referred with contempt to the latest version of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, known as DSM-5, in which Asperger’s syndrome — a very mild and widespread form of autism — is eliminated as a separate diagnosis and instead is included within the autism spectrum. “DSM-5 is a mess,” she said, because it lumps together so many different conditions, from “people who can’t dress themselves,” to those who may be “presidents of companies in Silicon Valley.”</p>
<p>She pointed out that many people at places such as MIT likely fall on the high-functioning end of that spectrum, and may have extraordinary abilities in some areas, but a lack of skills in other areas, such as social and interpersonal relations — which may cause problems in situations such as job interviews. To counter that, she suggested, “When you’re a really weird geek, the way to sell yourself is to show them your skills. … How do you sell yourself? I did it by showing off my work.”</p>
<p>The defining characteristic of autism, she explained, is a very uneven set of skills and abilities. That’s true of everyone, but the discrepancies are more extreme in those with autism, who tend to be “very good at some things, and not good at something else.” For those working with students or family members with autism, Grandin said, it’s important both to provide instruction on basic skills that may be lacking, and to support and reinforce the person’s skills and interests.</p>
<p>Such people may need a bit of extra help or accommodation along the way, she added, such as leaving a bit more time than usual to wait for a response to a question. “I want to see kids that are different get good outcomes,” she said. “I'm getting worried about that quirky kid who gets shunted aside.”</p>
<p>The key thing, she said, is to “Look at what they can do, not what they can’t do.”</p>
Temple Grandin
Special events and guest speakers, School of Architecture + Planning, Media Lab, Autism, Anxiety, Mental health, Health, Psychiatric disorders, Brain and cognitive sciences, Diversity

Teaching programming to preschoolers
http://newsoffice.mit.edu/2015/teaching-preschoolers-programming-0312
System that lets children program a robot using stickers embodies new theories about programming languages.
Wed, 11 Mar 2015 23:59:59 -0400
Larry Hardesty | MIT News Office
http://newsoffice.mit.edu/2015/teaching-preschoolers-programming-0312
<p>Researchers at the MIT Media Laboratory are developing a system that enables young children to program interactive robots by affixing stickers to laminated sheets of paper.</p>
<p>Not only could the system introduce children to programming principles, but it could also serve as a research tool, to help determine which computational concepts children can grasp at what ages, and how interactive robots can best be integrated into educational curricula.</p>
<p>Last week, at the Association for Computing Machinery and Institute of Electrical and Electronics Engineers’ International Conference on Human-Robot Interaction, the researchers presented the results of an initial study of the system, which investigated its use by children ages 4 to 8.</p>
<p>“We did not want to put this in the digital world but rather in the tangible world,” says Michal Gordon, a postdoc in media arts and sciences and lead author on the new paper. “It’s a sandbox for exploring computational concepts, but it’s a sandbox that comes to the children’s world.”</p>
<p>In their study, the MIT researchers used an interactive robot called Dragonbot, developed by the Personal Robots Group at the Media Lab, which is led by associate professor of media arts and sciences Cynthia Breazeal. Dragonbot has audio and visual sensors, a speech synthesizer, a range of expressive gestures, and a video screen for a face that can assume a variety of expressions. The programs that children created dictated how Dragonbot would react to stimuli.</p>
<p>“It’s programming in the context of relational interactions with the robot,” says Edith Ackermann, a developmental psychologist and visiting professor in the Personal Robots Group, who with Gordon and Breazeal is a co-author on the new paper. “This is what children do — they’re learning about social relations. So taking this expression of computational principles to the social world is very appropriate.”</p>
<p><strong>Lessons that stick</strong></p>
<p>The root components of the programming system are triangular and circular stickers — which represent stimuli and responses, respectively — and arrow stickers, which represent relationships between them. Children can first create computational “templates” by affixing triangles, circles, and arrows to sheets of laminated paper. They then fill in the details with stickers that represent particular stimuli — like thumbs up or down — and responses — like the narrowing or widening of Dragonbot’s eyes. There are also blank stickers on which older children can write their own verbal cues and responses.</p>
<p>Researchers in the Personal Robotics Group are developing a computer vision system that will enable children to convey new programs to Dragonbot simply by holding pages of stickers up to its camera. But for the purposes of the new study, the system’s performance had to be perfectly reliable, so one of the researchers would manually enter the stimulus-and-response sequences devised by the children, using a tablet computer with a touch-screen interface that featured icons depicting all the available options.</p>
<p>To introduce a new subject to the system, the researchers would ask him or her to issue an individual command, by attaching a single response sticker to a small laminated sheet. When presented with the sheet, Dragonbot would execute the command. But when it’s presented with a program, it instead nods its head and says, “I’ve got it.” Thereafter, it will execute the specified chain of responses whenever it receives the corresponding stimulus.</p>
<p>Even the youngest subjects were able to distinguish between individual commands and programs, and interviews after their sessions suggested that they understood that programs, unlike commands, modified the internal state of the robot. The researchers plan additional studies to determine the extent of their understanding.</p>
<p><strong>Paradigm shift</strong></p>
<p>The sticker system is, in fact, designed to encourage a new way of thinking about programming, one that may be more consistent with how computation is done in the 21st century.</p>
<p>“The systems we’re programming today are not sequential, as they were 20 or 30 years back,” Gordon says. “A system has many inputs coming in, complex state, and many outputs.” A cellphone, for instance, might be monitoring incoming transmissions over both Wi-Fi and the cellular network while playing back a video, transmitting the audio over Bluetooth, and running a timer that’s set to go off when the rice on the stove has finished cooking.</p>
<p>As a graduate student in computer science at the Weizmann Institute of Science in Israel, Gordon explains, she worked with her advisor, David Harel, on a new programming paradigm called scenario-based programming. “The idea is to describe your code in little scenarios, and the engine in the back connects them,” she explains. “You could think of it as rules, with triggers and actions.” Gordon and her colleagues’ new system could be used to introduce children to the principles of conventional, sequential programming. But it’s well adapted to scenario-based programming.</p>
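The trigger-and-action rules Gordon describes can be illustrated in a few lines: each scenario is an independent rule, and a small engine connects them by running every rule whose trigger matches an incoming stimulus. The triggers and responses below are examples drawn from the sticker system described earlier; the code itself is a toy sketch, not the actual implementation.

```python
# Toy illustration of scenario-based programming: independent trigger/action
# rules, connected by an engine that dispatches incoming stimuli.

scenarios = [
    ("thumbs_up", ["widen_eyes", "smile"]),   # sticker-style rule: stimulus -> responses
    ("thumbs_down", ["narrow_eyes"]),
]

def react(stimulus):
    """Run every scenario whose trigger matches the incoming stimulus."""
    actions = []
    for trigger, responses in scenarios:
        if trigger == stimulus:
            actions.extend(responses)
    return actions
```

Because each scenario stands alone, a child (or programmer) can add a new rule without touching the others, which is the point of the paradigm: the engine, not the author, weaves the scenarios together.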
<p>“It’s actually how we think about how programs are written before we try to integrate it into a whole programming artifact,” she says. “So I was thinking, ‘Why not try it earlier?’”</p>
The Personal Robots Group at the Media Lab has developed an interactive robot called Dragonbot to teach young children how to program. Dragonbot has audio and video sensors, a speech synthesizer, a range of expressive gestures, and a video screen for a face that assumes various expressions. Children created programs that dictated how Dragonbot would react to stimuli.
Research, School of Architecture + Planning, Media Lab, Education, teaching, academics, Robotics, STEM education

Finger-mounted reading device for the blind
http://newsoffice.mit.edu/2015/finger-mounted-reading-device-blind-0310
Audio feedback helps user scan finger along a line of text, which software converts to speech.
Tue, 10 Mar 2015 00:00:00 -0400
Larry Hardesty | MIT News Office
http://newsoffice.mit.edu/2015/finger-mounted-reading-device-blind-0310
<p>Researchers at the MIT Media Laboratory have built a prototype of a finger-mounted device with a built-in camera that converts written text into audio for visually impaired users. The device provides feedback — either tactile or audible — that guides the user’s finger along a line of text, and the system generates the corresponding audio in real time.</p>
<p>“You really need to have a tight coupling between what the person hears and where the fingertip is,” says Roy Shilkrot, an MIT graduate student in media arts and sciences and, together with Media Lab postdoc Jochen Huber, lead author on a new paper describing the device. “For visually impaired users, this is a translation. It’s something that translates whatever the finger is ‘seeing’ to audio. They really need a fast, real-time feedback to maintain this connection. If it’s broken, it breaks the illusion.”</p>
<p>Huber will present the paper at the Association for Computing Machinery’s Computer-Human Interface conference in April. His and Shilkrot’s co-authors are Pattie Maes, the Alexander W. Dreyfoos Professor in Media Arts and Sciences at MIT; Suranga Nanayakkara, an assistant professor of engineering product development at the Singapore University of Technology and Design, who was a postdoc and later a visiting professor in Maes’ lab; and Meng Ee Wong of Nanyang Technological University in Singapore.</p>
<p>The paper also reports the results of a usability study conducted with vision-impaired volunteers, in which the researchers tested several variations of their device. One included two haptic motors, one on top of the finger and the other beneath it. The vibration of the motors indicated whether the subject should raise or lower the tracking finger.</p>
<p>Another version, without the motors, instead used audio feedback: a musical tone that increased in volume if the user’s finger began to drift away from the line of text. The researchers also tested the motors and musical tone in conjunction. There was no consensus among the subjects, however, on which types of feedback were most useful. So in ongoing work, the researchers are concentrating on audio feedback, since it allows for a smaller, lighter-weight sensor.</p>
<p><strong>Bottom line</strong></p>
<p>The key to the system’s real-time performance is an algorithm for processing the camera’s video feed, which Shilkrot and his colleagues developed. Each time the user positions his or her finger at the start of a new line, the algorithm makes a host of guesses about the baseline of the letters. Since most lines of text include letters whose bottoms descend below the baseline, and because skewed orientations of the finger can cause the system to confuse nearby lines, those guesses will differ. But most of them tend to cluster together, and the algorithm selects the median value of the densest cluster.</p>
<p>That value, in turn, constrains the guesses that the system makes with each new frame of video, as the user’s finger moves to the right, which reduces the algorithm’s computational burden.</p>
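The baseline-picking step described above — many noisy guesses, most of which cluster around the true baseline — can be sketched as follows. The clustering tolerance is an assumption; the paper's actual algorithm and parameters are not given in the article.

```python
# Illustrative version of the baseline-estimation step: group per-letter
# baseline guesses into clusters of nearby values, then take the median
# of the densest cluster. The 5-pixel tolerance is an assumed parameter.
from statistics import median

def estimate_baseline(guesses, tolerance=5):
    clusters = []
    for g in sorted(guesses):
        if clusters and g - clusters[-1][-1] <= tolerance:
            clusters[-1].append(g)   # close to the previous guess: same cluster
        else:
            clusters.append([g])     # too far away: start a new cluster
    densest = max(clusters, key=len)  # most guesses agree here
    return median(densest)
```

Outliers from descenders or nearby lines end up in small clusters and are ignored, while the dense cluster of agreeing guesses determines the baseline — which is why the median of the densest cluster is robust even when many individual guesses are wrong.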
<p>Given its estimate of the baseline of the text, the algorithm also tracks each individual word as it slides past the camera. When it recognizes that a word is positioned near the center of the camera’s field of view — which reduces distortion — it crops just that word out of the image. The baseline estimate also allows the algorithm to realign the word, compensating for distortion caused by oddball camera angles, before passing it to open-source software that recognizes the characters and translates recognized words into synthesized speech.</p>
<p>In the work reported in the new paper, the algorithms were executed on a laptop connected to the finger-mounted devices. But in ongoing work, Marcel Polanco, a master’s student in computer science and engineering, and Michael Chang, an undergraduate computer science major participating in the project through MIT’s Undergraduate Research Opportunities Program, are developing a version of the software that runs on an Android phone, to make the system more portable.</p>
<p>The researchers have also discovered that their device may have broader applications than they’d initially realized. “Since we started working on that, it really became obvious to us that anyone who needs help with reading can benefit from this,” Shilkrot says. “We got many emails and requests from organizations, but also just parents of children with dyslexia, for instance.”</p>
<p>“It’s a good idea to use the finger in place of eye motion, because fingers are, like the eye, capable of quickly moving with intention in x and y and can scan things quickly,” says George Stetten, a physician and engineer with joint appointments at Carnegie Mellon’s Robotics Institute and the University of Pittsburgh’s Bioengineering Department, who is developing a <a href="http://www.fingersight.com/">finger-mounted device</a> that gives visually impaired users information about distant objects. “I am very impressed with what they do.”</p>
Researchers at the MIT Media Lab have created a finger-worn device with a built-in camera that can convert text to speech for the visually impaired.
Research, Assistive technology, School of Architecture + Planning, Media Lab, Algorithms

Temple Grandin to speak at MIT
http://newsoffice.mit.edu/2015/temple-grandin-speaking-at-mit-0306
Famed animal behavior expert and autism activist will give two public talks at the Media Lab.
Fri, 06 Mar 2015 15:23:01 -0500
Alexandra Kahn | MIT Media Lab
http://newsoffice.mit.edu/2015/temple-grandin-speaking-at-mit-0306
<p>Temple Grandin, autism activist, bestselling author, and expert on animal behavior, will be coming to MIT for a series of three events from Wednesday, March 11 through Tuesday, March 17. Sponsored by the MIT Media Laboratory's <a href="http://www.media.mit.edu/special/groups/advancing-wellbeing" target="_blank">Advancing Wellbeing</a> initiative, and funded by the <a href="http://www.rwjf.org/" target="_blank">Robert Wood Johnson Foundation</a>, all events are free and open to the public.</p>
<p>Grandin’s books about her interior life as an autistic person have increased the world’s understanding of the condition. She is also revered for her pioneering work to promote animal welfare. Grandin has lectured widely about how her first-hand experiences with anxiety provided the motivation for her breakthrough research, which has brought about more humane handling of livestock.</p>
<p>The week’s events will kick off with the screening of "Temple Grandin," a film that portrays a young woman's perseverance and determination while struggling with the isolating challenges of autism. It stars Claire Danes, Julia Ormond, Catherine O'Hara, and David Strathairn. It will be shown Wednesday, March 11, 7:30 p.m. in <a href="https://whereis.mit.edu/?go=E15" target="_blank">Room E15-070</a> (Bartos Theatre, lower level, MIT Media Lab). Note: There is limited seating available on a first-come, first-served basis.</p>
<p>On Monday, March 16, at 4:30 p.m., Grandin will give a special talk, "Helping Different Kinds of Minds Succeed,” in <a href="https://whereis.mit.edu/?go=E14" target="_blank">Room&nbsp;E14-674</a>, the Media Lab’s sixth-floor multipurpose room. This event is co-presented with the Simons Center for the Social Brain.</p>
<p>The final event will be a conversation between Grandin and Rosalind Picard, professor of Media Arts and Sciences, who heads the Media Lab’s Affective Computing research group and is a co-leader of the lab’s Advancing Wellbeing initiative. It will be held on Tuesday, March 17 at 11:00 a.m. in the Media Lab’s third-floor atrium (<a href="https://whereis.mit.edu/?go=E14" target="_blank">Building E14</a>).</p>
<p>Grandin was born in Boston, Massachusetts. By age two she still had no speech and showed all the signs of severe autism. Going against doctors’ recommendations, her mother did not institutionalize her, but instead spent many hours ensuring that she received intensive therapy. Grandin went on to earn a BA from Franklin Pierce College in 1970, an MS in animal science from Arizona State University in 1975, and a PhD in animal science from the University of Illinois in 1989. She is currently a professor at Colorado State University. Grandin is known internationally for her extensive work on the design of animal handling facilities: Half the cattle in the U.S. and Canada are handled in equipment she designed with humane treatment of animals in mind. In 2010, <em>TIME</em> named Grandin to its list of the 100 most influential people, in the “heroes” category. She is the subject of the documentary "The Woman Who Thinks Like a Cow" (2006), and was also a subject of Errol Morris’s series "First Person." She has been featured in <em>The New York Times</em>, <em>Forbes</em>, and <em>Discover</em> magazine, and on NPR.</p>
<p>With support from the Robert Wood Johnson Foundation, the Media Lab's Advancing Wellbeing initiative addresses the role of technology in shaping our health, and explores new approaches and solutions to wellbeing. The program is built around education and student mentoring; prototyping tools and technologies that support physical, mental, social, and emotional wellbeing; and community initiatives that will originate at the Media Lab, but be designed to scale.&nbsp;&nbsp;</p>
Temple Grandin
Topics: Media Lab, Special events and guest speakers, Autism, Animals, School of Architecture + Planning

Consumer-friendly makers
http://newsoffice.mit.edu/2015/sifteo-cubes-to-consumer-drones-0217
Media Lab alumni’s success with “smart” gaming blocks led to an acquisition deal to make consumer drones.
Tue, 17 Feb 2015 00:00:00 -0500 | Rob Matheson | MIT News Office
<p>Entrepreneurship can sometimes take people down unexpected paths.</p>
<p>Just ask the two co-founders of MIT Media Lab spinout Sifteo: Their success in rapidly commercializing their popular “smart” gaming blocks recently led to an acquisition by 3D Robotics (3DR) to help build the company’s newest consumer drones.</p>
<p>In 2011, alumni David Merrill SM ’04, PhD&nbsp;’09 and Jeevan Kalanithi SM ’07 had turned a Media Lab project into a popular gaming platform, Sifteo Cubes: plastic blocks, about an inch and a half on each side, equipped with color touch displays. Sensors detect the blocks’ movements and nearby blocks, and wireless technology allows them to communicate with each other to enable games and educational experiences.</p>
<p>People can pile, group, sort, tilt, and knock the cubes together to play various games, developed by Sifteo and a community of developers. Some games are simple, such as bouncing a ball across multiple displays, or educational, such as unscrambling words by sliding the tiles into the right order. There are also adventure games, where users uncover sections of a maze for characters by moving cubes up and down, and putting them adjacent to each other.</p>
<p>After Merrill’s 2009 <a href="http://www.ted.com/talks/david_merrill_demos_siftables_the_smart_blocks?language=en">TED Talk on the devices</a> went viral, Sifteo’s first run of 1,000 cubes sold out in just 13 hours. For a few years, Sifteo’s first- and second-generation cubes found a niche customer base, while catching the eye of electronics and hardware companies enamored with the novel devices.</p>
<p>Among those interested parties was hobbyist drone manufacturer 3DR, which acquired Sifteo last July, noting in a press release Sifteo’s expertise in bringing innovative consumer electronics to a wide customer base. (Sifteo Cubes have since been taken out of production.)</p>
<p>Now members of the Sifteo team — including&nbsp;Kalanithi and Merrill —&nbsp;will help 3DR build out the consumer drone market over the next few years. While he can’t provide details, Kalanithi says, “It’s really about a desire to build drones at scale that can get into the hands of everyone.”</p>
<p><strong>Connecting at the Media Lab</strong></p>
<p>The Sifteo story began one spring afternoon in 2006. Kalanithi and Merrill, who became friends as Stanford University undergraduates and had reconnected at MIT, were at a table in the Media Lab’s kitchen, brainstorming ways in which people could use their hands to physically interact with data.</p>
<p>At the time, the two were becoming deeply immersed in the Media Lab’s tangible-computation culture, taking classes like MAS 834 (Tangible Interfaces) and working with other students on sensor networks and “smart” devices. “Those ideas were constantly bouncing around in our heads,” Kalanithi says.</p>
<p>In that conversation, they landed on a novel idea: “smart blocks” for people to physically manipulate computer data. “You have all this information — emails, desktop files, photos —&nbsp;and one mouse pointer to interact with it, like having a single fingertip,” Merrill says. “If you had a pile of LEGOs on the table, you’d use both hands and your body – pushing the piles around. We wanted to build an interface for interacting with information on a computer that was more like a pile of LEGOs.”</p>
<p>Initially labeling the system “The Siftable Computer,” Kalanithi and Merrill built prototypes —&nbsp;tiles of wood and acrylic, with photos plastered on them — to experiment with in the lab, explaining their vision and gaining feedback.</p>
<p>At the time, for his MIT thesis, Kalanithi was developing electronic devices called “Connectibles,” which he described as “tangible social media.” These wirelessly connected tiles — equipped, for instance,&nbsp;with a dial that illuminated onboard LEDs — could be exchanged and plugged into outlets on a plywood board. If two people exchanged Connectibles, for instance, whenever they turned on the lights of their own tiles, the lights of the exchanged Connectibles would also illuminate.</p>
<p>For this project, Kalanithi was implanting working displays into the tiles. “We thought, ‘We can use these little displays for smart blocks,’” he says.</p>
<p>Nine months of prototyping — with help from other Media Lab students — led to Siftables, small computers that could display images and sense nearby blocks and movement. They could be stacked and shuffled to do math, and play music and basic games. “From the original core —&nbsp;physical embodiment of digital information —&nbsp;the details emerged through a series of prototypes and bouncing ideas off other smart people at the Media Lab,” Merrill says.</p>
<p><strong>Bull by the horns</strong></p>
<p>But things changed dramatically in 2009 when Merrill delivered a TED Talk on Siftables, as part of a larger focus on Media Lab projects. A video of the talk went viral, amassing 1 million views and garnering interest from the press, tech circles, and consumers.</p>
<p>“We thought we had to take the bull by the horns,” Kalanithi says. “We could either make them into cool research tools in low volumes, or we could fully commercialize them as far and wide as possible.”</p>
<p>Later that year, the two launched Sifteo in San Francisco, and started completely re-engineering the Siftables code and hardware with cheaper parts for mass production. They conducted intensive market research, including surveys and in-depth interviews with families with young kids — who they assumed would be the product’s primary customers.</p>
<p>In the early days of the startup, they also received sage advice from seasoned entrepreneurs in MIT’s Venture Mentoring Service (VMS) — on MIT’s campus and in San Francisco. A key lesson from VMS, Kalanithi says, was the necessity of developing a mission statement.</p>
<p>“Before we knew what we were doing, we had to find a vision, and core values,” he says. “I thought it was stupid at that time, but I was wrong. Because it’s not a couple of guys starting a company, it’s a group of people that need to cooperate in an intensely productive way. So you need an idea everyone can align on. If you can establish that, and let people rip, then good things will happen.”</p>
<p><strong>Once makers, always makers</strong></p>
<p>Sifteo would go on to sell thousands of Sifteo Cubes to a customer base of teachers, families, makers, and gamers, among others. A software kit created by Sifteo’s engineers allowed developers to create more than 50 games for two generations of the cubes.</p>
<p>Yet the toys never found a broader audience, Kalanithi says, due in part to the sharp rise of iPads and tablets — which allowed for touch-based gaming. To compete, Sifteo had tried developing a third generation of games that better used the physicality of the cubes. For instance, one game involved stacking cubes at one end of the table, and sliding another as a puck to bump the tower without toppling it.</p>
<p>But it wasn’t enough for the long run, especially given the electronic competition. “Turns out [iPads and tablets] were close enough to the experience we were providing,” Kalanithi says. “Even though we, and certain groups of people, saw the differences very clearly, most people didn’t instantly.”</p>
<p>As it turns out, however, a host of electronics and hardware companies had started courting Sifteo with acquisition deals — including 3DR. Open to taking its technology and expertise down new paths, Sifteo accepted 3DR’s offer.</p>
<p>Reflecting on the deal, Merrill, now vice president of enabling technology at 3DR, says transitioning MIT-trained engineers of consumer electronics to a drone company is, in fact, a logical move. “We both bring a strong culture of shipping products, being interested in building cool stuff, and being enthusiastic about the possibilities of technology,” he says. “It’s a near-perfect cultural fit.”</p>
Among the games designed for the Sifteo Cubes were adventure games, like the one shown here, where users uncover sections of a maze for characters by moving cubes up and down, and putting them adjacent to each other.
Topics: School of Architecture + Planning, Media Lab, Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, inventions and innovations, Computer science and technology, Venture Mentoring Service, Unmanned aerial vehicles (UAV)

Keeping health care clean
http://newsoffice.mit.edu/2015/smart-devices-track-hospital-hand-hygiene-0202
Startup’s smart devices track hand-washing in hospitals to help reduce the spread of infection.
Mon, 02 Feb 2015 00:00:02 -0500 | Rob Matheson | MIT News Office
<p>The World Health Organization (WHO) cites good hand hygiene as a major factor in stopping the spread of hospital-acquired infections (HAIs) caused by exposure to various bacteria.</p>
<p>In fact, in 2009 the WHO released its “My Five Moments for Hand Hygiene” guidelines, which pinpoint five key moments when hospital staff should wash their hands: before touching a patient, before aseptic procedures, after possible exposure to bodily fluids, after touching a patient, and after touching a patient’s surroundings.</p>
<p>But it’s been difficult to track workers’ compliance with these guidelines. Administrators usually just spend a few days a month monitoring health care workers, noting hand-hygiene habits on a WHO checklist.</p>
<p>Now General Sensing — co-founded by MIT Media Lab alumni Jonathan Gips SM ’06 and Philip Liang SM ’06 —&nbsp;is using smart devices to monitor hand hygiene among hospital staff and ensure compliance with WHO guidelines. The aim, Liang says, is to help reduce the spread of HAIs, which affected one in 25 U.S. hospital patients in 2010, according to the Centers for Disease Control and Prevention.</p>
<p>Called MedSense Clear, the system revolves around a badge worn by hospital staff. The badge can tell when a worker comes near or leaves a patient’s side, and whether that worker has used an alcohol-based sanitizer or soap dispenser during those times. It also vibrates to remind workers to wash up. The badge then sends data to a base station that pushes the data to a Web page where individuals can monitor their hand-washing, and administrators can see data about overall hand-hygiene compliance among staff.</p>
<p>A 2014 study in the <em>Journal of Infection and Public Health</em> concluded that compliance with WHO hand-washing rules jumped 25 percent in one month when staff used MedSense in a 16-bed hospital unit at Salmaniya Medical Complex in Bahrain. Currently, the Royal Brompton and Harefield hospital in London is studying the correlation between the MedSense system and reduction in HAIs.</p>
<p>The startup is also now developing a system to monitor hospital workflow, with aims of pinpointing areas where time and resources may be wasted by unnecessary wait times for patients. “We’re trying to drive safety with hand hygiene, and drive efficiency by reducing waste,” Gips says. “Really, we’re trying to be a support system for the hospital.”</p>
<p><strong>In the “patient zone” </strong></p>
<p>MedSense consists of four smart devices, including the badge, that communicate with each other. Beacons installed near patients are tuned to cover small or large areas, creating a “patient zone.”</p>
<p>The badge knows if the wearer has washed his or her hands, because the system’s soap dispensers are designed to sense pressure when their nozzles are pressed down. If the wearer uses the dispenser, the holder sends that information to the smart badge.</p>
<p>When a badge-wearer enters a patient zone and has not performed hand hygiene, the badge vibrates to remind the wearer to wash up, and does so again when the wearer leaves the zone.</p>
<p>“We think it’s important that the system provides feedback when it’s actionable without getting in the way of delivering care,” Gips says.</p>
<p>The system’s final component is a base station, set up near nursing stations. When workers are within 50 feet of the station, the station routes the badge’s data over the network to an online dashboard, called MedSense HQ. These stations also have 16 charging slots for the badge’s flat batteries.</p>
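In outline, the badge behavior described above (track whether the wearer's hands are clean; vibrate on entering and leaving a patient zone; log events for the base station) can be sketched as a small event-driven object. Everything below, from class and method names to the exact reminder rules, is an illustrative assumption rather than General Sensing's actual software:

```python
class HygieneBadge:
    """Illustrative sketch of the MedSense badge logic, not the real firmware."""

    def __init__(self):
        self.hands_clean = False
        self.events = []  # event log, later forwarded to the base station

    def on_dispenser_used(self):
        # The instrumented soap/sanitizer dispenser signals the badge.
        self.hands_clean = True
        self.events.append("washed")

    def on_enter_patient_zone(self):
        self.events.append("entered zone")
        if not self.hands_clean:
            self.vibrate("wash before patient contact")

    def on_exit_patient_zone(self):
        self.events.append("exited zone")
        # After patient contact, hands count as dirty again, and the badge
        # reminds the wearer to wash up (as the article describes).
        self.hands_clean = False
        self.vibrate("wash after patient contact")

    def vibrate(self, reason):
        self.events.append("vibrate: " + reason)
```

A badge that has seen a dispenser event stays silent on zone entry; one that has not is reminded immediately.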
<p>In MedSense HQ, individuals can track, for instance, what times they missed washing their hands, or what times of the day they’re better at hand hygiene. Administrators can see aggregated data indicating, for instance, which units are more or less compliant with hand-hygiene protocols.</p>
<p>What’s interesting, Liang says, is that when it’s used in tandem with visual observation, MedSense consistently shows that hand-hygiene compliance rises to about 90 percent when staff know they’re being watched by administrators, a phenomenon called the Hawthorne effect.</p>
<p>“We’ll look at the data and can pinpoint when the wearer is being watched. You’ll see the data spike and then go back down when [the observer] leaves,” he says.</p>
<p>MedSense, on the other hand, removes that observer bias, he says, and can collect data around the clock.</p>
<p><strong>“Clean” start</strong></p>
<p>General Sensing may tackle a serious health care issue, but its core technology started as a novelty item: smart dog collars.</p>
<p>In the Media Laboratory class MAS 834 (Tangible Interfaces), Liang, Gips, and Noah Paessel SM ’05 created dog collars equipped with RFID technology and accelerometers. These tracked a dog’s movement, communicated with smart collars worn by other dogs, and pushed that data online. Owners could log on to a social media site to check their pets’ exercise levels, interactions, and compare stats with other pets.</p>
<p>“It was a bit tongue-in-cheek,” Gips admits. But the students soon found themselves presenting a prototype to hundreds of attendees at a human-computer interaction conference in Portland, Oregon —&nbsp;where it garnered significant attention.</p>
<p>With help from Media Lab entrepreneurial advisors and MIT’s Venture Mentoring Service, the students launched SNIF Labs (an acronym for “Social Networking in Fur”) in 2008 and began selling the collars. But after that year’s financial collapse, “Luxury pet products weren’t exactly selling,” Gips says.</p>
<p>When a researcher requested the technology to monitor health care staff, however, the startup decided to get a clean start in the health care industry, “which they say is recession-proof,” Gips says.</p>
<p>And after learning about WHO’s hand-hygiene guidelines, the team developed MedSense as an automated way to help administrators monitor hand-washing among staff. In 2011, researchers at Queen Mary Hospital in Hong Kong published a paper in the journal <em>BMC Infectious Diseases</em> that found MedSense was 88 percent accurate in monitoring staff compliance with the WHO’s guidelines.</p>
<p>Only then did the startup decide to commercialize this system. “We’re from MIT: We like publishing,” Gips says. “We needed to know we had something accurate.”</p>
<p><strong>Cutting waste</strong></p>
<p>Since then, General Sensing has raised more than $15 million in capital, and MedSense has been trialed in 10 hospitals across the United States and Europe, and in Saudi Arabia, Bahrain, and Qatar.</p>
<p>But the data MedSense collects on time spent near and around patients has proven to have another use: monitoring workflow.</p>
<p>As part of MedSense Look, the startup is developing small RFID tags that patients and staff wear, and ceiling-mounted transponders to track the tags, in real-time, as the wearers move through the “patient journey” —&nbsp;the waiting room, pre-procedure, procedure, and recovery room. General Sensing creates digital floor maps of an area being studied; patients and staff show up on the floor map as color-coded dots.</p>
<p>This allows the startup to gather data on patient wait times, treatment patterns, and other things that may reveal wasted time and resources. “Changing even seemingly simple workflows can require buy-in from a lot of people. It helps to have quantifiable proof of the problem,” Gips says.</p>
<p>Another possible application is real-time location of surplus staff — particularly important when there’s a sudden influx of patients in one area of a hospital, Gips says. “Today, you have to call different units to see who has extra people on staff,” he explains. “With our system, we’re hoping you can log in and see where there are extra people that can come help. That waste can turn into a critical safety measure.”</p>
Topics: Innovation and Entrepreneurship (I&E), Startups, School of Architecture + Planning, Media Lab, Alumni/ae, Medical devices, Health care, Data, Computer science and technology, Research

Privacy challenges
http://newsoffice.mit.edu/2015/identify-from-credit-card-metadata-0129
Analysis: It’s surprisingly easy to identify individuals from credit-card metadata.
Thu, 29 Jan 2015 14:00:00 -0500 | Larry Hardesty | MIT News Office
<p>In this week’s issue of the journal <em>Science</em>, MIT researchers report that just four fairly vague pieces of information — the dates and locations of four purchases — are enough to identify 90 percent of the people in a data set recording three months of credit-card transactions by 1.1 million users.</p>
<p>When the researchers also considered coarse-grained information about the prices of purchases, just three data points were enough to identify an even larger percentage of people in the data set. That means that someone with copies of just three of your recent receipts — or one receipt, one Instagram photo of you having coffee with friends, and one tweet about the phone you just bought — would have a 94 percent chance of extracting your credit card records from those of a million other people. This is true, the researchers say, even in cases where no one in the data set is identified by name, address, credit card number, or anything else that we typically think of as personal information.</p>
<p>The paper comes roughly two years after an earlier analysis of mobile-phone records that yielded very <a href="http://newsoffice.mit.edu/2013/how-hard-it-de-anonymize-cellphone-data">similar results</a>.</p>
<p>“If we show it with a couple of data sets, then it’s more likely to be true in general,” says Yves-Alexandre de Montjoye, an MIT graduate student in media arts and sciences who is first author on both papers. “Honestly, I could imagine reasons why credit-card metadata would differ or would be equivalent to mobility data.”</p>
<p>De Montjoye is joined on the new paper by his advisor, Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences; Vivek Singh, a former postdoc in Pentland’s group who is now an assistant professor at Rutgers University; and Laura Radaelli, a postdoc at Tel Aviv University.</p>
<p>The data set the researchers analyzed included the names and locations of the shops at which purchases took place, the days on which they took place, and the purchase amounts. Purchases made with the same credit card were all tagged with the same random identification number.</p>
<p>For each identification number — each customer in the data set — the researchers selected purchases at random, then determined how many other customers’ purchase histories contained the same data points. In separate analyses, the researchers varied the number of data points per customer from two to five. Without price information, two data points were still sufficient to identify more than 40 percent of the people in the data set. At the other extreme, five points with price information was enough to identify almost everyone.</p>
<p>The researchers characterized price very coarsely, treating all prices that fell within a few fixed ranges as functionally equivalent. So, for instance, a purchase of $20 at some store on some day in one person’s history would count as a match with a purchase of $40 by someone else at the same store on the same day, since both purchases fell within the range $16 to $49. This was an attempt to represent the uncertainty of someone estimating purchase amounts from secondary information, such as an Instagram photo of the food on someone’s plate. The limits of each range were based on a fixed percentage of its median value: The range $16 to $49, for instance, is the median value of purchases ($32.50) plus or minus 50 percent, rounded to the nearest dollar.</p>
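As a rough sketch of the matching procedure described above, the following Python estimates "unicity": for each user, sample a few (shop, day) points at random and count how many purchase histories contain them all. The toy data and function names are invented for illustration; this is not the authors' analysis code, and real price coarsening would add a third, bucketed field to each point.

```python
import random

def unicity(histories, n_points=4, seed=0):
    """Fraction of users whose sampled points appear in exactly one history.

    `histories` maps an anonymous user ID to a set of (shop, day) tuples,
    mirroring the randomly tagged records in the data set described above.
    """
    rng = random.Random(seed)
    unique = 0
    for purchases in histories.values():
        sample = set(rng.sample(sorted(purchases), min(n_points, len(purchases))))
        # How many customers' histories contain all the sampled points?
        matches = sum(1 for other in histories.values() if sample <= other)
        if matches == 1:
            unique += 1
    return unique / len(histories)

histories = {
    "user-17": {("cafe", 3), ("bookshop", 5), ("grocer", 9), ("cafe", 12)},
    "user-42": {("cafe", 3), ("bookshop", 5), ("grocer", 9), ("cafe", 12)},
    "user-88": {("bakery", 1), ("cinema", 4), ("grocer", 6), ("bar", 11)},
}
# user-17 and user-42 share identical histories, so neither can be singled
# out; user-88 is uniquely identifiable from their points, giving 1/3 here.
```

In the real data set, of course, histories are long and idiosyncratic enough that four points single out 90 percent of users.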
<p>Preserving anonymity in large data sets is a pressing concern because public and private entities alike see aggregated digital data as a source of novel insights. Retailers studying anonymized credit-card histories could certainly learn something about the tastes of their customers, but economists might also learn something about the relationship of, say, inflation or consumer spending to other economic factors.</p>
<p>So the MIT researchers also examined the effects of coarsening the data — intentionally making it less precise, in the hope of preserving privacy while still enabling useful analysis. That makes identifying individuals more difficult, but not at a very encouraging rate. Even if the data set characterized each purchase as having taken place sometime in the span of a week at one of 150 stores in the same general areas, four purchases (with 50 percent uncertainty about price) would still be enough to identify more than 70 percent of users.</p>
<p>Nonetheless, de Montjoye and Pentland remain adamant that socially beneficial uses of big data should be pursued. “Sandy and I do really believe that this data has great potential and should be used,” de Montjoye says. “We, however, need to be aware and account for the risks of re-identification.”</p>
<p>In separate work, de Montjoye, Pentland, and other members of Pentland’s group have begun developing a <a href="http://newsoffice.mit.edu/2014/own-your-own-data-0709">system</a> that would enable people to store the data generated by their mobile devices on secure servers of their own choosing. Researchers looking for useful patterns in aggregate data would send queries through the system, which would return only the pertinent data — such as, for instance, the average amount spent on gasoline during different time periods.</p>
Yves-Alexandre de Montjoye
Topics: Media Lab, School of Architecture + Planning, Research, Data, Big data, Privacy

Telling stories using computer science
http://newsoffice.mit.edu/2015/profile-senior-shannon-kao-0122
Senior Shannon Kao’s knack for storytelling informs her research in computer graphics.
Thu, 22 Jan 2015 00:00:00 -0500 | Julia Sklar | MIT News correspondent
<p>For MIT senior Shannon Kao, expert storytelling is essential, even — if not especially — when it comes to coding. The computer science major relies on narrative everywhere from her science fiction writing to her research on educational computer games at the MIT Media Lab — and it all stems from a childhood replete with books.</p>
<p>Kao grew up in Michigan and then China, where her mother, who was her school’s librarian, exposed her and her two younger brothers, from an early age, to everything from picture books to hefty novels.&nbsp;</p>
<p>“Instead of hanging out, we would all just grab a book and sit in our living room and read,” Kao says with a laugh.</p>
<p>After two semesters of organic chemistry doused her interest in medical school, Kao stumbled on course 6.01 (Introduction to Electrical Engineering and Computer Science) and immediately saw computer science as a way to use her love of math to build something tangible and interactive. The summer after her freshman year, she took on a research position with the Affective Computing Group at the Media Lab, where she worked on a free app called StoryScape.</p>
<p>The program, geared toward families with developmentally challenged children, lets users drag and drop animated characters and illustrations, from a gallery Kao helped build, onto a page where users can write original stories, and can then share those stories with others. The animated characters can react to stimuli in users’ environments, such as loud noises, or the shaking of the phone or tablet on which the app is installed.</p>
<p>Working on StoryScape’s graphic gallery was Kao’s first experience in a computer science lab — her first research position at MIT was in a biology lab — but even though she had very little expertise in the field at that point, the new research setting immediately felt natural.</p>
<p>“It’s important that there’s a strong story underneath something, and the rest will follow,” she says.</p>
<p><strong>Modeling for beginners</strong></p>
<p>Kao has since expanded her research to MIT’s Scheller Teacher Education Program (STEP), which develops learning technology. For the past 2 1/2 years, she has worked with software developer Daniel Wendel, a research associate in MIT’s Department of Urban Studies and Planning, on a project called StarLogo. It teaches students with no computer science background how to build a program for modeling decentralized systems, like traffic jams.</p>
<p>StarLogo’s accessibility to inexperienced coders hinges on a system of blocks-based programming, which works like virtual LEGO bricks: To build a program, rather than writing out lines of code using individual symbols and numbers, users drag and drop ready-made blocks of text that code for 3-D graphics; only certain combinations of text blocks create a functioning program. The process of building a modeling program this way is a lot like building a story — it needs a coherent beginning, middle, and end, or else it won’t function. Kao’s role has been to develop the 3-D graphics that the blocks code for.</p>
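The snap-together constraint described above can be modeled as a tiny typed-block structure: each block declares what kind of value it produces and what its slots accept, so only compatible blocks combine into a working program. The names below are invented for illustration and are not StarLogo's actual API.

```python
class Block:
    """A toy blocks-programming element with typed slots."""

    def __init__(self, name, produces, accepts=()):
        self.name = name
        self.produces = produces  # kind of value or effect this block yields
        self.accepts = accepts    # kinds its child slots require, in order
        self.children = []

    def snap(self, child):
        """Attach a child block only if its output fits the next open slot."""
        slot = len(self.children)
        if slot >= len(self.accepts) or child.produces != self.accepts[slot]:
            raise ValueError(f"{child.name!r} does not fit into {self.name!r}")
        self.children.append(child)
        return self

# A "repeat" block needs a count, then an action -- other orderings refuse
# to snap, which is what keeps beginners' programs well-formed.
forward = Block("move forward", produces="action")
count = Block("steps: 10", produces="number")
repeat = Block("repeat", produces="action", accepts=("number", "action"))
repeat.snap(count).snap(forward)
```

Dropping an action block into a number slot raises an error instead of silently building a broken program, loosely mirroring how only certain block combinations form a functioning model.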
<p>“[Blocks] make programming more intuitive for people who don’t necessarily have the background,” she says.&nbsp;</p>
<p>Kao has helped run several workshops to make ongoing improvements to StarLogo. STEP invites in parents and children with no programming background to complete a series of challenges; the researchers then ask for feedback on usability. Some of the biggest issues the team has encountered are with interfaces that control zooming and scrolling. After each workshop, it’s back to the lab, where Kao and her colleagues whittle away at a list of tweaks in preparation for the next workshop and set of feedback.&nbsp;</p>
<p><strong>From mindless doodles to an aesthetic sensibility</strong></p>
<p>Kao’s interest in telling stories through design and graphics started as mindless doodles in class, but soon grew into full-on illustrations that she later learned to turn into animations, using her computer science skills to bring her art to life. Looking at the illustrations, it’s easy to see that some of her inspiration comes from Japanese animator Hayao Miyazaki, but Kao cites the 2007 Disney film “Ratatouille” as her favorite animated movie.&nbsp;&nbsp;</p>
<p>“I feel like part of my interest in art is that I was just in this constant stream of picture books and young adult books that I would read regardless of what age I was,” she says. “I still really enjoy some picture books.”</p>
<p>Kao also works with written narrative: As the literature editor of <a href="http://runemag.mit.edu/index.php" target="_blank"><em>Rune</em></a>, MIT’s literary magazine, during her sophomore year, she was responsible for vetting incoming submissions. For the past two years, she has focused more on her own writing, winning MIT’s Ilona Karmel Prize for Science Fiction in 2013 and 2014.&nbsp;</p>
<p>Despite her passions in art and literature, Kao’s occupational focus remains with computer science, but always with her hobbies and upbringing as her guideposts.</p>
<p>“Computer graphics is that in-between space,” she says. “You need to have some kind of aesthetic sensibility, since the whole point is still to tell a story, but you’re using computer science and math to do that.”&nbsp;</p>
MIT senior Shannon Kao
Topics: Students, Undergraduate, Profile, Electrical Engineering & Computer Science (eecs), School of Engineering, Media Lab, Urban studies and planning, School of Architecture + Planning, Computer science and technology, Graphics, Literature, languages and writing

MIT team enlarges brain samples, making them easier to image
http://newsoffice.mit.edu/2015/enlarged-brain-samples-easier-to-image-0115
New technique enables nanoscale-resolution microscopy of large biological specimens.Thu, 15 Jan 2015 14:00:00 -0500Anne Trafton | MIT News Officehttp://newsoffice.mit.edu/2015/enlarged-brain-samples-easier-to-image-0115<p>Beginning with the invention of the first microscope in the late 1500s, scientists have been trying to peer into preserved cells and tissues with ever-greater magnification. The latest generation of so-called “super-resolution” microscopes can see inside cells with resolution better than 250 nanometers.</p>
<p>A team of researchers from MIT has now taken a novel approach to gaining such high-resolution images: Instead of making their microscopes more powerful, they have discovered a method that enlarges tissue samples by embedding them in a polymer that swells when water is added. This allows specimens to be physically magnified, and then imaged at a much higher resolution.</p>
<p>This technique, which uses inexpensive, commercially available chemicals and microscopes commonly found in research labs, should give many more scientists access to super-resolution imaging, the researchers say.</p>
<p>“Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope. You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.</p>
<p>Boyden is the senior author of <a href="http://www.sciencemag.org/content/early/2015/01/14/science.1260088.abstract" target="_blank">a paper</a> describing the new method in the Jan. 15 online edition of <em>Science</em>. Lead authors of the paper are graduate students Fei Chen and Paul Tillberg.</p>
<p><strong>Physical magnification</strong></p>
<p>Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to visualize objects much smaller than the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.</p>
<p>“Unfortunately, in biology that’s right where things get interesting,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. Protein complexes, molecules that transport payloads in and out of cells, and other cellular activities are all organized at the nanoscale.</p>
<p>Scientists have come up with some “really clever tricks” to overcome this limitation, Boyden says. However, these super-resolution techniques work best with small, thin samples, and take a long time to image large samples. “If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” he says.</p>
<p>To achieve this, the MIT team focused its attention on the sample rather than the microscope. Their idea was to make specimens easier to image at high resolution by embedding them in an expandable polymer gel made of polyacrylate, a very absorbent material commonly found in diapers.</p>
<p>Before enlarging the tissue, the researchers first label the cell components or proteins that they want to examine, using an antibody that binds to the chosen targets. This antibody is linked to a fluorescent dye, as well as a chemical anchor that can attach the dye to the polyacrylate chain.</p>
<p>Once the tissue is labeled, the researchers add the precursor to the polyacrylate gel and heat it to form the gel. They then digest the proteins that hold the specimen together, allowing it to expand uniformly. The specimen is then washed in salt-free water to induce a 100-fold expansion in volume. Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel.</p>
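The 100-fold volume expansion described above implies a linear expansion of the cube root of 100 in each dimension. A back-of-the-envelope sketch, dividing the conventional ~250-nanometer limit quoted earlier by that factor, gives the rough scale of effective resolution (the 70 nm the team actually measured reflects practical losses):

```python
# Linear expansion from a 100x volume expansion, and the resulting
# effective resolution on a conventional diffraction-limited microscope.
volume_expansion = 100.0
linear_expansion = volume_expansion ** (1 / 3)       # ~4.64x per dimension
effective_resolution_nm = 250.0 / linear_expansion   # ~54 nm in principle

print(round(linear_expansion, 2), round(effective_resolution_nm, 1))
```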
<p>“What you’re left with is a three-dimensional, fluorescent cast of the original material. And the cast itself is swollen, unimpeded by the original biological structure,” Tillberg says.</p>
<p>The MIT team imaged this “cast” with commercially available confocal microscopes, commonly used for fluorescent imaging but usually limited to a resolution of hundreds of nanometers. With their enlarged samples, the researchers achieved resolution down to 70 nanometers. “The expansion microscopy process … should be compatible with many existing microscope designs and systems already in laboratories,” Chen adds.</p>
<p><strong>Large tissue samples</strong></p>
<p>Using this technique, the MIT team was able to image a section of brain tissue 500 by 200 by 100 microns with a standard confocal microscope. Imaging such large samples would not be feasible with other super-resolution techniques, which require minutes to image a tissue slice only 1 micron thick and are limited in their ability to image large samples by optical scattering and other aberrations.</p>
<p>“The exciting part is that&nbsp;this approach can acquire data at the same high speed per pixel as conventional microscopy, contrary to most other methods that beat the diffraction limit for microscopy, which can be 1,000 times slower per pixel,” says George Church, a professor of genetics at Harvard Medical School who was not part of the research team.</p>
<p>“The other methods currently have better resolution, but are harder to use, or slower,” Tillberg says. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”</p>
<p>The researchers envision that this technology could be very useful to scientists trying to image brain cells and map how they connect to each other across large regions.</p>
<p>“There are lots of biological questions where you have to understand a large structure,” Boyden says. “Especially for the brain, you have to be able to image a large volume of tissue, but also to see where all the nanoscale components are.”</p>
<p>While Boyden’s team is focused on the brain, other possible applications for this technique include studying tumor metastasis and angiogenesis (growth of blood vessels to nourish a tumor), or visualizing how immune cells attack specific organs during autoimmune disease.</p>
<p>The research was funded by the National Institutes of Health, the New York Stem Cell Foundation, Jeremy and Joyce Wertheimer, the National Science Foundation, and the Fannie and John Hertz Foundation.</p>
Using a new technique that allows them to enlarge brain tissue, MIT scientists created these images of neurons in the hippocampus.Brain and cognitive sciences, Biological engineering, Media Lab, Research, School of Engineering, School of Science, Neuroscience, Nanoscience and nanotechnology, National Institutes of Health (NIH)New horizons for self-assembling materialshttp://newsoffice.mit.edu/2014/3-d-printed-materials-curve-stretch-1218
3-D-printable materials deform to change surface area, enabling curvature rather than rigid folding.Thu, 18 Dec 2014 15:00:00 -0500Larry Hardesty | MIT News Officehttp://newsoffice.mit.edu/2014/3-d-printed-materials-curve-stretch-1218<p>Today’s 3-D printers, in which devices rather like inkjet-printer nozzles deposit materials in layers to build up physical objects, are a great tool for designers building prototypes or small companies with limited product runs.</p>
<p>But they take a long time to produce objects that are more than a couple of centimeters in height, and many researchers believe that they’ll realize their full potential only when they can generate sheets of patterned materials that will automatically warp themselves into larger, more complex shapes.</p>
<p>In the latest issue of the journal <em>Scientific Reports</em>, a team of researchers at MIT and the companies Autodesk and Stratasys describe a new process for designing and manufacturing such “programmable matter” that could make it more versatile. Whereas much prior work — <a href="http://newsoffice.mit.edu/2014/bake-your-own-robot-0530">at MIT</a> and elsewhere — concentrated on materials that self-fold, the new procedure yields materials that also self-stretch.</p>
<p>“If a structure is going to change, and curvature appears — like going from a flat domain to something that has effective curvature, such as a mountain — the area is going to change,” says Dan Raviv, a postdoc at the MIT Media Laboratory and lead author on the new paper. “There’s going to be stretching. Until now, people just considered bending, which leaves the area and lengths the same. Or they did some stretching, but without the ability to control or pre-program it. Now we need to develop new sets of tools to do both.”</p>
<p>Raviv is a member of the Media Lab’s Camera Culture group, which is led by associate professor of media arts and sciences Ramesh Raskar and specializes in computational photography. That may sound like a far cry from self-folding materials, but Raviv says that his work focused on geometric interpretations of visual data. The mathematical framework for mapping points of color in an image onto multiple hypothetical 3-D models of the underlying scene is very similar to the framework for mapping points on the surface of a self-deforming material onto their final locations.</p>
<p><strong>Getting physical</strong></p>
<p>Raviv, Raskar, graduate student Achuta Kadambi, and Boxin Shi, another postdoc in Raskar’s group, had been invited to collaborate on the problem of self-assembling printable materials at a presentation last year by Skylar Tibbits, a research scientist in MIT's Department of Architecture who heads the MIT Self-Assembly Lab. Tibbits favors the term “4-D printing” for his lab’s work, where the fourth dimension is the time it takes devices to self-assemble.</p>
<p>The Camera Culture researchers developed algorithms that could determine how much parts of an arbitrary 3-D object needed to stretch to accommodate its deformation into another shape. But figuring out how to physically realize that deformation fell to Tibbits’ group.</p>
<p>Much prior work on self-folding materials has involved laminates: A 3-D printer or laser cutter would produce patterned sheets of different materials, which researchers would then stick together by hand. One of the materials would bend when heated or immersed in water; the other would hold some parts of the sheet rigid.</p>
<p>But Tibbits was committed to the idea of a truly printable material — one that was ready to go when it came out of the printer. He had been collaborating with researchers at Stratasys, a company that manufactures 3-D printers, who had developed a polymer that expands when it absorbs water. Stratasys printers can deposit multiple polymers in each layer of a 3-D-printed object, and Tibbits’ group had come up with designs that used combinations of polymers to produce materials that self-folded upon immersion.</p>
<p><strong>Home stretch</strong></p>
<p>Together with Carrie McKnelly, a graduate student, and Athina Papadopoulou, a research specialist — both in the Department of Architecture — Tibbits came up with a simple but elegant design for a “linear actuator,” or a component that would enable segments of his printed materials to stretch. In the component, two polymer disks are connected to each other by two bowed strips of a composite polymer. In profile, the component looks like the insignia of the comic-book superhero the Green Lantern.</p>
<p>The inner edges of the bowed strips are made from the expanding polymer, and when they absorb water, the strips straighten out.</p>
<p>Tibbits’ group experimented with the relative thicknesses of the expanding and rigid layers until they had established a relationship between the diameter of the ellipse produced by the bowed strips and the degree of expansion. They also determined how to striate the disks at the ends of the strips with indentations and layers of expanding polymer so that they would bend in the right directions to accommodate the curvature of the material’s expanding regions.</p>
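One way to see how bow geometry sets the degree of stretch is to model each strip as a circular arc — an idealization of this sketch, not a model stated in the article. The arc length L is fixed, so when the expanding layer straightens the strip, its ends move apart from the chord length c to (nearly) L, a stretch ratio of L/c:

```python
import math

def stretch_ratio(half_angle_rad):
    """L / chord for a circular arc subtending a half-angle theta:
    chord = L * sin(theta) / theta, so L / chord = theta / sin(theta)."""
    return half_angle_rad / math.sin(half_angle_rad)

# A semicircular bow (theta = pi/2) straightens into a ~57% stretch;
# shallower bows give proportionally less.
print(round(stretch_ratio(math.pi / 2), 3))
```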
<p>Once they’d established the performance characteristics of their components, the MIT researchers collaborated with colleagues at Autodesk, a manufacturer of computer-aided-design software, to develop a simulation program that would determine whether devices made from their design specifications would self-assemble as expected. The Autodesk researchers, like those from Stratasys, are co-authors on the new paper.</p>
<p>“It is unclear whether the strategy of 3-D printing followed by immersion in water is a technologically viable pathway to creating responsive architectures,” says Jennifer Lewis, a professor in Harvard University’s School of Engineering and Applied Sciences. “Nevertheless, this is an elegant example of 4-D printing.”</p>
Hot off the 3-D printer, a polymer strand dropped in water folds itself into the MIT logo.Research, printing, 3-D, School of Architecture + Planning, Media Lab, ArchitectureHow information moves between cultureshttp://newsoffice.mit.edu/2014/network-maps-global-fame-different-language-speakers-1216
Networks that map strength of connections between languages predict global influence of their speakers.Tue, 16 Dec 2014 16:00:00 -0500Larry Hardesty | MIT News Officehttp://newsoffice.mit.edu/2014/network-maps-global-fame-different-language-speakers-1216<p>By analyzing data on multilingual Twitter users and Wikipedia editors and on 30 years’ worth of book translations in 150 countries, researchers at MIT, Harvard University, Northeastern University, and Aix Marseille University have developed network maps that they say represent the strength of the cultural connections between speakers of different languages.</p>
<p>This week, in the <em>Proceedings of the National Academy of Sciences</em>, they show that a language’s centrality in their network — as defined by both the number and the strength of its connections — better predicts the global fame of its speakers than either the population or the wealth of the countries in which it is spoken.</p>
<p>“The network of languages that are being translated is an aggregation of the social network of the planet,” says Cesar Hidalgo, the Asahi Broadcasting Corporation Career Development Assistant Professor of Media Arts and Sciences and senior author on the paper. “Not everybody shares a language with everyone else, and therefore the global social network is structured through these circuitous paths in which people in some language groups are by definition way more central than others. That gives them a disproportionate power and responsibility. On the one hand, they have a much easier time disseminating the content that they produce. On the other hand, as information flows through people, it gets colored by the ideas and the biases that those people have.”</p>
<p><strong>Plotting polyglots</strong></p>
<p>Hidalgo and his students Shahar Ronen — first author on the new paper — and Kevin Hu, together with Harvard’s Steven Pinker, Bruno Gonçalves of Aix Marseille University, and Alessandro Vespignani of Northeastern, included a given Twitter user in their data set if he or she had at least three sentence-long tweets in a language other than his or her primary language. That left them with 17 million of Twitter’s roughly 280 million users. They had similar thresholds for Wikipedia users who had edited entries in more than one language, which gave them a data set of 2.2 million Wikipedia editors.</p>
<p>In both cases, the strength of the connection between any two languages was determined by the number of users who had demonstrated facility with both of them.</p>
<p>The translation data came from UNESCO’s Index Translationum, which catalogues 2.2 million book translations, in more than 1,000 languages, published between 1979 and 2011. There, the strength of the connection between two languages was determined by the number of translations between them.</p>
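The strength-centrality idea described above can be sketched on a toy network: edge weights are counts of bilingual users (or book translations) linking two languages, and a language's strength is the sum of its edge weights. The pairs and counts below are invented for illustration:

```python
from collections import defaultdict

# Invented edge weights: number of bilingual users linking each pair.
edges = {
    ("english", "spanish"): 120,
    ("english", "french"): 90,
    ("english", "german"): 80,
    ("spanish", "french"): 20,
    ("german", "french"): 25,
}

# A language's "strength": the total weight of its connections.
strength = defaultdict(int)
for (a, b), weight in edges.items():
    strength[a] += weight
    strength[b] += weight

# English has both the most connections and the heaviest ones,
# so it ranks as most central in this toy network.
ranking = sorted(strength, key=strength.get, reverse=True)
print(ranking[0], strength["english"])
```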
<p>The researchers also used two different definitions of global fame. One was the measure that Hidalgo’s group had used in its earlier <a href="http://pantheon.media.mit.edu">Pantheon</a> project, which also looked at global cultural production. Pantheon had identified everyone with (at the time) Wikipedia entries in at least 26 languages — 11,340 people in all.</p>
<p>The other fame measure was inclusion among the 4,002 people profiled in the book “Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 BC to 1950,” by the American political scientist Charles Murray. Murray’s list was based on the frequency with which people’s names were mentioned in 167 reference texts — encyclopedias and historical surveys — published worldwide.</p>
<p><strong>Relative correlatives</strong></p>
<p>There were, naturally, differences between the networks produced from the separate data sets and their correlations with the two fame measures. For instance, in the network produced from Wikipedia data, German is much more central than Spanish; in the Twitter network, the opposite is true.</p>
<p>Similarly, the network produced from UNESCO’s translation data correlated better with Murray’s fame index, which, as the subtitle of his book indicates, concentrated on science and the arts. The Wikipedia and Twitter networks correlated better with the Pantheon index, which included many more pop-culture figures.</p>
<p>But with both fame measures, at least one of the networks, taken in isolation, provided better correlation than the number of speakers of a language and the GDPs of the countries in which it is spoken. And when the networks were combined with population and income data, the correlations were higher still.</p>
<p>“We have to be very clear about what we’re talking about,” Hidalgo says. “This paper is not about global languages. All three networks are representative of elites. But those elites are the ones that drive the transfer of information across cultures.”</p>
<p>"This thought-provoking paper expands the intersection between big-data network science and linguistics," writes Kenneth Wachter, a professor of demography and statistics at the University of California at Berkeley. "It offers reproducible criteria for a language to serve as a global hub and is likely to stimulate many alternative perspectives."</p>
A network diagram representing the strength of the cultural connections between language speakers, based on the number of book translations between languages.Research, Language, Twitter, Social media, Big data, School of Architecture + Planning, Media Lab“Moneyball for business”http://newsoffice.mit.edu/2014/behavioral-analytics-moneyball-for-business-1114
Startup’s behavioral analytics on employees uncover ways to increase workplace productivity, satisfaction. Fri, 14 Nov 2014 00:00:04 -0500Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2014/behavioral-analytics-moneyball-for-business-1114<p>Michael Lewis’ 2003 book “Moneyball” — and the 2011 film adaptation — detailed how the Oakland Athletics used analytics, primarily derived from players’ on-base percentages, to assemble a competitive team despite financial constraints.</p>
<p>What if you could bring that type of analytics to the workplace? Now MIT spinout Sociometric Solutions is developing a system it calls “moneyball for business,” which uses sensor identification badges and analytics tools to track behavioral data on employees, providing insights that can increase productivity.</p>
<p>“‘Moneyball’ is putting numbers on behavior and using that data to build a baseball team. But what if I could say, ‘Here’s how you need to talk to customers, here’s how people need to collaborate with each other, and here are the things that lead to outcomes such as turnover, sales, and job satisfaction,’” says Ben Waber PhD ’11, co-founder and president of Sociometric. “Individuals can use that data to boost performance, and a company can use that to help set up an environment where everybody’s going to succeed.”</p>
<p>Sociometric’s system — based on years of MIT research —&nbsp;consists of employee identification badges with built-in Bluetooth sensors that track location and which way someone’s facing. Other sensors show when employees lean in — signaling, for instance, engagement in a conversation — and accelerometers can track their speed (sensing bursts of lethargy and vigor). A built-in microphone records how often, fast, and loud individuals talk, as well as tone of voice (but not actual conversation). Increased speed and higher voice tones, for example, are strong indicators of high stress levels.</p>
<p>Readers placed around an office collect the data and push it to the cloud. (Individuals have access to their personal data via a Web dashboard or smartphone, but companies are only given anonymous, aggregated results of patterns and trends in behavior.) By combining this information with employee-performance data from surveys, interviews, and objective performance metrics, Sociometric can pinpoint areas where management can build more productive offices — in ways as surprising as providing larger lunch tables or moving coffee stations to increase interaction.</p>
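The privacy split described above — raw metrics for the individual, anonymous aggregates for the company — might look like this in outline. All names, fields, and numbers here are invented; this is a sketch of the reported model, not Sociometric's actual pipeline:

```python
from statistics import mean

# Invented per-employee badge metrics.
badge_data = {
    "alice": {"face_time_min": 95, "speech_rate_wpm": 150},
    "bob":   {"face_time_min": 60, "speech_rate_wpm": 170},
    "carol": {"face_time_min": 80, "speech_rate_wpm": 140},
}

def personal_dashboard(employee):
    """What the individual sees: their own raw metrics."""
    return badge_data[employee]

def company_report(data):
    """What the company sees: aggregates only, no identities."""
    return {
        "n_employees": len(data),
        "avg_face_time_min": mean(d["face_time_min"] for d in data.values()),
        "avg_speech_rate_wpm": mean(d["speech_rate_wpm"] for d in data.values()),
    }

print(company_report(badge_data))
```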
<p>In one of its earliest studies, with a Bank of America call center, for instance, Sociometric tracked co-workers for three months. The researchers predicted that allowing certain employees to take breaks together — to let off steam or share tips about customer service — would improve productivity. Sure enough, when the bank instituted the changes, Sociometric measured a 15 to 20 percent bump in productivity, a 19 percent drop in stress levels, and decreased turnover, from 40 to 12 percent.</p>
<p>All things considered, Waber says, these and other solutions produce a 20 percent rise, on average, in productivity and employee satisfaction, and a similar decrease in turnover.</p>
<p>More than 20 retail, sales, and consulting firms have used Sociometric’s system. Additionally, more than 60 research organizations across the globe are using the system in research on management, social psychology, medicine, computer science, and physical therapy, among other fields.</p>
<p>Sociometric’s MIT co-founders, and co-inventors of its technology, include Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Science, who serves as scientific advisor; Daniel Olguin Olguin SM ’07, PhD ’11, who is chief operating officer; and Taemie Kim PhD ’11, who is chief scientist.</p>
<p><strong>Natural transition for a company</strong></p>
<p>Sociometric traces its origins to 2007, when students in Pentland’s Human Dynamics Group, including Waber, were approached to use behavioral analytics for a management study.</p>
<p>Peter Gloor, a researcher in the Center for Collective Intelligence, was using surveys of employees at a German bank, where the marketing division was split into four teams located across 10 rooms on two floors. The bank wanted to know how this physical layout affected productivity and job satisfaction.</p>
<p>Waber, Pentland, and other researchers developed and deployed 22 prototypes of Sociometric badges at the bank for a month, registering when two wearers were talking to one another and for how long.</p>
<p>Accumulating more than 2,000 hours of data — and comparing that data with survey results — they predicted, with 60 percent accuracy, that close-knit groups of workers who spoke frequently with one another were more satisfied and worked more efficiently. They also found evidence of communication overload, where high volumes of email — due to lack of face-to-face interaction — were causing some employees difficulty in concentrating, and decreasing their job satisfaction.</p>
<p>Armed with these results, the bank rearranged its layout to increase the proximity of the close-knit employees, and dropped from four teams to three to encourage stronger interaction.</p>
<p>Seeing potential, Waber, Pentland, and others started trialing the system with the Media Lab’s various corporate sponsors. From these early experiences — and with some advice from MIT’s Venture Mentoring Service — the startup refined the system and learned how to pitch it to potential customers and deploy it efficiently. As word spread, companies started offering to pay.</p>
<p>“We thought, ‘If we could make an impact doing something so simple [as measuring face-to-face interactions], imagine what we could do with more sophisticated metrics,’” Waber says.</p>
<p>From there, Sociometric built out its analytics tools primarily through dozens of research collaborations. A study with Cornell University in 2013, for instance, allowed the startup to prove that it could accurately predict high levels of cortisol in someone’s saliva — an indicator of high stress — based on their tone of voice. “That suddenly became a metric we could use,” Waber says.</p>
<p><strong>Longer lunch tables, better outcomes</strong></p>
<p>Over the years, Sociometric has had some surprising findings. Waber points to his firm’s work with a major online travel company. While looking at the employees’ lunchtime interactions, they discovered one of the most predictive measures of good performance was the number of people an employee ate with — the more, the merrier.&nbsp;</p>
<p>But they saw that in the cafeteria, certain people only sat with three other people (at four-seat tables), while others sat with 11 people (at 12-seat tables). Those who sat at larger tables were 36 percent more productive during the week. When the company initiated layoffs during the study, the employees who sat at larger tables also had 30 percent lower stress levels than those who sat at smaller tables.</p>
<p>The idea is that these employees, Waber says, had been able to accumulate larger networks, knew what others were working on, and were more likely to reach out to specific people with questions and concerns. Surprisingly, after this finding went public, some technology firms began installing larger cafeteria tables, Waber says.</p>
<p>“It’s crazy that something as trivial as physical space, as the size of the lunch table, could affect productivity,” Waber says. “The CEO obviously wasn’t thinking about that, but those are the biggest drivers of how people communicate with one another.”</p>
<p>Waber says many of Sociometric’s results point to a need for more social interaction in the workplace. In sales, for instance, communication with colleagues has proven more predictive of sales outcomes than the ability to talk with customers, Waber says.</p>
<p>“If someone figures out a really good way to pitch to customers, you talk to them and learn how to make that pitch, which makes things more efficient,” he says. “Even if you’re competing on performance metrics, if you know each other well enough, you’ll share. That’s exactly what we see.”</p>
<p>Some major companies such as Google and Facebook, Waber says, are already promoting socializing by, for instance, building campuses, where all workers come to collaborate. “But the next step for Sociometric is to take everyone else to that same level of collaboration,” he says.</p>
Research, Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Media Lab, Analytics, Big data, Data, Wearable sensors, Venture Mentoring Service, Management, Behavioral economicsOutside-the-box thinkerhttp://newsoffice.mit.edu/2014/student-profile-nathan-spielberg-1103
Senior Nathan Spielberg uses 3-D printing to build everything from nanoscale chips to houses. Mon, 03 Nov 2014 00:00:03 -0500Julia Sklar | MIT News correspondenthttp://newsoffice.mit.edu/2014/student-profile-nathan-spielberg-1103<p>When an aspiring mechanical engineer on a budget wants a top-of-the-line guitar, what does he do? He makes it himself, of course.</p>
<p>At age 13, Nathan Spielberg — now an MIT senior — began building his first guitar, a process that consumed his attention for eight hours a day, every weekend, for 3 1/2 years. Reminiscing now, he calls it a full-time hobby, but it was also pure inspiration: Strumming away on what was once a mere block of wood, Spielberg grew to appreciate the potential to turn an abstract idea into a functional object.</p>
<p>As a mechanical engineering major at MIT, he has held onto this tenet, but turned his attention to a new means of production: <a href="http://newsoffice.mit.edu/2011/3d-printing-0914">3-D printing</a>. Until recently, Spielberg worked in the MIT Media Lab with Neri Oxman, the Sony Corporation Career Development Assistant Professor of Media Arts and Sciences, graduate students Steven Keating and John Klein, and other undergraduates. As part of the <a href="http://matter.media.mit.edu">Mediated Matter Group</a>, he focused on converting a robotic arm into a computer-controlled arm capable of printing large projects, such as houses.</p>
<p>As Spielberg sees it, 3-D printing has two extremes: At one end is rapid prototyping, which allows researchers to design, print, and experiment — and then design, print, and experiment again — many times faster than traditional manufacturing. On the other end is express, large-scale construction of single objects. That’s where his research with Oxman, Keating, and the others&nbsp;comes in.</p>
<p><strong>Outside the box</strong></p>
<p>Ordinarily, 3-D printing occurs inside a box — limiting the size of printable objects to that of the printer’s housing. But in the Media Lab, Spielberg and his colleagues thought outside the box: They researched and implemented methods for controlling a large robotic arm — like the boom on a truck used to fix power lines — that would be able to maneuver back and forth, printing objects as large as walls, layer by layer.</p>
<p>It’s analogous to how an office printer’s cartridge runs back and forth, but on a much grander scale: An aim of the group's research was not only to print walls, but to do so with considerable mobility, enabling immediate transport to a construction site, streamlining delivery and increasing construction efficiency.</p>
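The cartridge analogy suggests a simple serpentine toolpath, swept back and forth across the wall's cross-section and repeated at increasing heights. This sketch is illustrative only — the dimensions are invented and it reflects none of the group's actual control code:

```python
def wall_toolpath(length_m, width_m, height_m, bead_m=0.05, layer_m=0.02):
    """Return (x, y, z) waypoints for a serpentine, layer-by-layer wall print."""
    points = []
    n_layers = int(round(height_m / layer_m))
    n_passes = int(round(width_m / bead_m)) + 1
    for k in range(n_layers):          # one sweep of passes per printed layer
        z = k * layer_m
        for j in range(n_passes):      # parallel passes across the wall's width
            y = j * bead_m
            if j % 2 == 0:             # alternate direction, like a print head
                points.append((0.0, y, z))
                points.append((length_m, y, z))
            else:
                points.append((length_m, y, z))
                points.append((0.0, y, z))
    return points

path = wall_toolpath(length_m=2.0, width_m=0.2, height_m=0.04)
print(len(path))  # 2 layers x 5 passes x 2 endpoints = 20 waypoints
```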
<p>The printed object, in this case, is actually a mold made of insulation that becomes a full-on wall once filled with concrete. Being made of insulation, however, the molds have their own functionality beyond providing the external shape for a wall: They don’t have to be removed once the concrete is poured, since they can act as embedded insulation for the house.</p>
<p>Because of the scale of the work, Spielberg and Keating encountered some obstacles. For instance, Spielberg says, “You need really precise movement on the robotic arm end to get each layer exactly straight, and to build something that looks like a functional house, which is really hard to do with a construction crane. If you’ve ever seen someone working on the power lines, they’re usually swaying in the wind. There are a lot of inherent engineering and physics problems with this that we’re trying to solve.”</p>
<p><strong>From walls to nanoscale chips</strong></p>
<p>This fall, Spielberg jumped to the other end of the 3-D printing spectrum, moving from walls to nanoscale fluidic chips. He is now working in the <a href="http://mechanosynthesis.mit.edu">lab</a> of A. John Hart, the Mitsui Career Development Associate Professor of Mechanical Engineering, to manufacture what’s known as a “lab on a chip.”</p>
<p>Currently, when a doctor wants to run a series of blood tests on a patient, he or she collects several vials of blood and sends them to a hospital laboratory for dozens of individual tests. Several hours or days later, the lab returns the results.</p>
<p>Among other functions, a lab on a chip can theoretically take a minuscule sample of blood, run all of the required tests at once inside tiny channels embedded in the chip, and produce nearly instantaneous results. Spielberg even sees the technology as a potential tool in military environments.</p>
<p>“It’s totally a convenience thing,” he says. “Imagine if you were in the military and you’re trying to screen for some disease, but you don’t have a lab with you. You can pull out this device, take a quick sample of blood, get almost instant feedback in a super-small form, and be on your way.”</p>
<p>Once again, Spielberg’s role in the lab is optimizing the 3-D printer that makes the device. The current method for creating labs on a chip is labor-intensive, and, much like manufacturing a standard computer chip, starts with creating silicon wafers, which act as a template for the final product.</p>
<p>Even though he is only a few months into his new lab position with Hart, Spielberg is already working toward eliminating this clunky process, enabling the same type of efficient manufacturing he tackled at the Media Lab.</p>
<p><strong>An early introduction to research</strong></p>
<p>Growing up in Louisville, Ky., Spielberg first encountered the power of research when his 11-year-old brother was diagnosed with dystonia, a neurological disorder characterized by uncontrollable muscle contractions.&nbsp;</p>
<p>“He started limping, and progressively couldn’t walk to the point where he was bedridden,” Spielberg says. “Throughout that process, it was really hard because there’s not a lot you can do. One of the hardest things is feeling helpless.”</p>
<p>But Spielberg broke through the helplessness: Piggybacking on the fundraising bracelet trend of a few years ago, he sold silicone bracelets, raising $60,000 to fund research on his brother’s disease. Then, as a high-school student, Spielberg became involved in some of the research his fundraising supported; the experience provided a perspective unlike what he already knew from designing guitars.</p>
<p>“I didn’t do a ton of hands-on stuff,” he says, “but it was really interesting to learn about how they were trying to solve this problem from a biological standpoint, because I was more used to solving problems from a mechanical standpoint.”</p>
<p>The story ultimately had a happy ending: Spielberg’s brother enrolled in a clinical trial, receiving pacemaker implants in his chest that could intercept aberrant signals from his brain before they reached his muscles. His brother can now walk almost perfectly, and can even play basketball with his friends.</p>
<p>“That was, and is, so amazing to me — that research has the potential to totally give a person their life back,” Spielberg says.</p>
<p>And as for the homemade guitar? Spielberg says he left it in Louisville for safekeeping. Nonetheless, he’s a member of a recently formed rock band with a fellow mechanical engineering major and two computer science majors, keeping music and science tied together in his life.</p>
<p>“There’s something about art and music that offer a form of self-expression that’s sometimes hard to attain in other forms of work,” Spielberg says.&nbsp;</p>
School of Engineering, School of Architecture + Planning, Mechanical engineering, Media Lab, Students, Undergraduate, Profile, Design, Nanoscience and nanotechnology

No more crying over spilled milk
http://newsoffice.mit.edu/2014/breast-pump-hackathon-1017
“Make The Breast Pump Not Suck Hackathon” brings tech out of the bubble and into the bottle.
Fri, 17 Oct 2014 00:00:06 -0400
Cara Giaimo | MIT News correspondent
http://newsoffice.mit.edu/2014/breast-pump-hackathon-1017
<p>A few weekends ago, as some techies lined up to buy the new iPhone 6, others flocked to the MIT Media Lab to play with a different piece of technology — one that hasn’t seen an upgrade in decades.</p>
<p>Over the course of that weekend, some 150 engineers, designers, developers, and health care professionals — many of them unaffiliated with MIT — gathered on campus for the “Make The Breast Pump Not Suck Hackathon.” The brainchild of a group of Media Lab students and researchers who are also parents, the hackathon aimed to revamp the breast pump, an aspect of modern parenthood that is nearly as maligned as it is necessary.</p>
<p><strong>Fighting the “ugly blue machine”</strong></p>
<p>Breast milk is the original superfood, full of allergy-fighting antibodies and brain-building fatty acids. The American Academy of Pediatrics (AAP) advises that babies drink it exclusively, if possible, until they’re 6 months old.&nbsp;</p>
<p>Most American mothers return to work much earlier than that — which means that if they want to follow the AAP’s recommendation, they have to pump. An estimated 25 percent of new mothers pump daily, and by all accounts, the experience is no fun: “Instead of snuggling with the baby … we hand [you] this ugly blue machine, and it makes this mechanical sound, and it just sucks milk from your breasts like a milking machine,” says Robyn Churchill, a certified nurse midwife, mother of two, and member of the hackathon’s advisory board. “There’s just nothing about it that is like nursing.”</p>
<p>Organizers asked mothers nationwide how to improve the pumping experience, and were flooded with suggestions; hackathon participants then split into teams based on which problems they wanted to tackle. While many hackathons focus strictly on technological fixes, this one encouraged hackers to think innovatively about anything that might improve the pumping experience — everything from architectural changes, such as private pumping rooms in workplaces, to overhauls of maternity-leave policies.</p>
<p>Some groups — like “Team Batman,” which included engineers, designers, and health care professionals — took on the pump itself. “For new mothers, there’s very often a mindset to just acquiesce and evolve. … [Pumping] is just one more thing where they don’t complain,” says Team Batman leader Erin Freeburger, a mother and user experience designer at SimplyXML. But if asked, Freeburger says, most new mothers will have “the same simple requests.” Her original idea, a CamelBak-style backpack for pumping on the go, fulfilled several: It was discreet, quiet, portable, and hands-free.</p>
<p><strong>Totally tubular</strong></p>
<p>To turn this concept into reality, her team faced some engineering hurdles: Despite its name, Team Batman “couldn't figure out how to fight gravity,” Freeburger says, so the backpack evolved into a utility belt, with tubes that bring the expressed milk down into a storage receptacle.</p>
<p>But any pump parts that come into contact with milk have to be cleaned quickly, and tubes are hard to clean — already a common complaint. As one mother pointed out in an anonymous survey, “When you have to pump every two to three hours … the 10 minutes it takes to clean pump parts are precious.”</p>
<p>Two engineers on Team Batman, Don Blair and Ioannis Smanis, figured out a solution: They melted a new base one inch from the zippered end of a plastic baggie, turning it into a one-inch-wide “ZipTube.” “So if you close it, it's a tube, but when you unlock it, you can lay it flat and put it in the dishwasher,” Freeburger explains.</p>
<p><strong>Next steps</strong></p>
<p>After two days of brainstorming, designing, building, and tweaking, 10 teams presented to a panel of judges on Sunday night. First prize — $3,000 and a trip to California to talk with potential investors — went to Team Batman. Their original idea of a backpack had evolved into the Mighty Mom Utility Belt, a wearable pump outfitted with a quiet motor, easy-clean ZipTubes, and a sensor that makes the pump hands-free.</p>
<p>The members of Team Batman say they recognize that accepting this prize means making a promise. Many ideas born at hackathons never mature: Their creators have other jobs and commitments, and lose their enthusiasm along with their access to institutional support.</p>
<p>“Innovation and awareness are the two things that usually happen out of hackathons,” says Anya Burkart, a biological engineering graduate student at MIT who participated in the hackathon. “For this one, though, I really want real-world application. … We really do need change here.”</p>
<p>The “Make The Breast Pump Not Suck Hackathon” could be a first baby step.</p>
Innovation and Entrepreneurship (I&E), Media Lab, Hackathon, Health care, Invention, Students, Community, conferences and events, School of Architecture + Planning

Superhuman vision
http://newsoffice.mit.edu/2014/superhuman-vision-ramesh-raskar-1006
Mon, 06 Oct 2014 17:44:01 -0400
Leda Zimmerman | MIT Spectrum
http://newsoffice.mit.edu/2014/superhuman-vision-ramesh-raskar-1006
<p>All through his childhood, Ramesh Raskar wished fervently for eyes in the back of his head. “I had the notion that the world did not exist if I wasn’t looking at it, so I would constantly turn around to see if it was there behind me.” Although this head-spinning habit faded during his teen years, Raskar never lost the desire to possess the widest possible field of vision.</p>
<p>Today, as director of the Camera Culture research group and associate professor of Media Arts and Sciences at the MIT Media Lab, Raskar is realizing his childhood fantasy, and then some. His inventions include a nanocamera that operates at the speed of light and do-it-yourself tools for medical imaging. His scientific mission? “I want to create not just a new kind of vision, but superhuman vision,” Raskar says.</p>
<p>He avoids research projects launched with a goal in mind, “because then you only come up with the same solutions as everyone else.” Discoveries tend to cascade from one area into another. For instance, Raskar’s novel computational methods for reducing motion blur in photography suggested new techniques for analyzing how light propagates. “We do matchmaking; what we do here can be used over there,” says Raskar.</p>
<p>Inspired by the famous microflash photograph of a bullet piercing an apple, created in 1964 by MIT professor and inventor Harold “Doc” Edgerton, Raskar realized, “I can do Edgerton millions of times faster.” This led to one of the Camera Culture group’s breakthrough inventions, femtophotography, a process for recording light in flight.</p>
<p>Manipulating photons into a packet resembling Edgerton’s bullet, Raskar and his team were able to “shoot” ultrashort laser pulses through a Coke bottle. Using a special camera to capture the action of these pulses at half a trillion frames per second with two-trillionths of a second exposure times, they captured moving images of light, complete with wave-like shadows lapping at the exterior of the bottle.</p>
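A quick back-of-the-envelope calculation puts the numbers above in perspective: at half a trillion frames per second, how far does light itself travel between frames? (This is an illustrative check using the figures quoted in the article, not code from the Camera Culture group.)

```python
# Speed of light in meters per second.
C = 299_792_458.0
# Half a trillion frames per second, as quoted above.
FPS = 0.5e12

frame_interval_s = 1.0 / FPS            # 2 picoseconds between frames
distance_per_frame_m = C * frame_interval_s

print(frame_interval_s)                  # 2e-12 seconds
print(distance_per_frame_m * 1000)       # light moves only ~0.6 mm per frame
```

At that cadence, a pulse crossing a Coke bottle spans hundreds of frames, which is why the wave-like shadows become visible.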
<p>Femtophotography opened up additional avenues of inquiry, as Raskar pondered what other features of the world superfast imaging processes might reveal. He was particularly intrigued by scattered light, the kind in evidence when fog creates the visual equivalent of “noise.”</p>
<p>In one experiment, Raskar’s team concealed an object behind a wall, out of camera view. By firing super-short laser bursts onto a surface nearby, and taking millions of exposures of light bouncing like a pinball around the scene, the group rendered a picture of the hidden object. They had effectively created a camera that peers around corners, an invention that might someday help emergency responders safely investigate a dangerous environment.</p>
<p>Raskar’s objective of “making the invisible visible” extends as well to the human body. The Camera Culture group has developed a technique for taking pictures of the eye using cellphone attachments, spawning inexpensive, patient-managed vision and disease diagnostics. Conventional photography has evolved from time-consuming film development to instantaneous digital snaps, and Raskar believes “the same thing will happen to medical imaging.” His research group intends “to break all the rules and be at the forefront. I think we’ll get there in the next few years,” he says.</p>
<p>Ultimately, Raskar predicts, imaging will serve as a catalyst of transformation in all dimensions of human life — change that can’t come soon enough for him. “I hate ordinary cameras,” he says. “They record only what I see. I want a camera that gives me a superhuman perspective.”</p>
Ramesh Raskar
Faculty, Profile, Research, Photography, Nanoscience and nanotechnology, Media Lab, School of Architecture + Planning, Computational photography

MIT launches Laboratory for Social Machines with major Twitter investment
http://newsoffice.mit.edu/2014/twitter-funds-mit-media-lab-program-1001
Program aims to develop collaborative technologies to tackle complex social problems.
Wed, 01 Oct 2014 13:04:08 -0400
MIT Media Lab
http://newsoffice.mit.edu/2014/twitter-funds-mit-media-lab-program-1001
<p>The MIT Media Lab today announced the creation of the Laboratory for Social Machines (LSM), funded by a five-year, $10 million commitment from Twitter. As part of the new program, Twitter will also provide full access to its real-time, public stream of tweets, as well as the archive of every tweet dating back to the first.</p>
<p>The new initiative, based at the Media Lab, will focus on the development of new technologies to make sense of semantic and social patterns across the broad span of public mass media, social media, data streams, and digital content. Pattern discovery and data visualization will be explored to reveal interaction patterns and shared interests in relevant social systems, while collaborative tools and mobile apps will be developed to enable new forms of public communication and social organization.&nbsp;</p>
<p>A main goal for the LSM will be to create new platforms for both individuals and institutions to identify, discuss, and act on pressing societal problems. Though funded by Twitter, the LSM will have complete operational and academic independence. In keeping with the academic mission of LSM, students and staff will work across many social media and mass media platforms — including, but not limited to, Twitter.</p>
<p>“The Laboratory for Social Machines will experiment in areas of public communication and social organization where humans and machines collaborate on problems that can’t be solved manually or through automation alone,” says Deb Roy, an associate professor at the Media Lab who will lead the LSM, and who also serves as Twitter’s chief media scientist. "Social feedback loops based on analysis of public media and data can be an effective catalyst for increasing accountability and transparency — creating mutual visibility among institutions and individuals."</p>
<p>"With this investment, Twitter is seizing the opportunity to go deeper into research to understand the role Twitter and other platforms play in the way people communicate, the effect that rapid and fluid communication can have and apply those findings to complex societal issues," says Dick Costolo, CEO of Twitter.</p>
<p>"As social media leads us into the emergence of a new era of communication and engagement, the LSM, in collaboration with Twitter, will create analytical tools to help turn the vision of a new public sphere into reality," adds Joi Ito, director of the MIT Media Lab.</p>
Research, Media Lab, Social media, Communications, Twitter

Media Lab to launch wellness initiative with $1 million Robert Wood Johnson Foundation grant
http://newsoffice.mit.edu/2014/media-lab-launch-wellness-initiative-million-grant
New program, Advancing Wellness, combines academics with on-the-ground initiatives to promote better health at MIT and beyond.
Mon, 08 Sep 2014 15:00:01 -0400
Alexandra Kahn | MIT Media Lab
http://newsoffice.mit.edu/2014/media-lab-launch-wellness-initiative-million-grant
<p>The MIT Media Lab this week launched a wellness initiative designed to spark innovation in the area of health and wellbeing, and to promote healthier workplace and lifestyle behaviors.</p>
<p>With support from the Robert Wood Johnson Foundation (RWJF), which is providing a $1 million grant, the new initiative will address the role of technology in shaping our health, and explore new approaches and solutions to wellbeing. The program is built around education and student mentoring; prototyping tools and technologies that support physical, mental, social, and emotional wellbeing; and community initiatives that will originate at the Media Lab, but be designed to scale.</p>
<p>The program begins with the fall course "Tools for Well Being," followed by "Health Change Lab" in the spring. In addition to concept and technology development, these courses will feature seminars by noted experts who will address a wide range of topics related to wellness. These talks will be open to the public, and made available online. Speakers will include Walter Willett, a physician and noted nutrition researcher; Chuck Czeisler, a physician and sleep expert; Ben Sawyer, a game developer for health applications; Matthew Nock, an expert in suicide prevention; Dinesh John, a researcher on health sciences and workplace activity; Lisa Mosconi, a neuroscientist studying the prevention of Alzheimer’s disease; and Martin Seligman, a founder of the field of positive psychology. More information about the courses, speakers, and presentation topics and dates can be found at: <a href="http://wellbeing.media.mit.edu">http://wellbeing.media.mit.edu</a>.</p>
<p>The RWJF grant will also support five graduate-level research fellows from the Program in Media Arts and Sciences who will be part of a year-long training program. The funding will enable each fellow to design, build, and deploy novel tools to promote wellbeing and health behavior change at the Media Lab, and then at scale.</p>
<p>One of the significant ways that this program will impact Media Lab culture is in the review of all thesis proposals submitted by students in media arts and sciences. Media Lab faculty recently added a new requirement that all proposals consider the impact of the work on human wellbeing.</p>
<p>Other Media Lab-wide aspects of the initiative include:</p>
<ul>
<li>A monthly health challenge that would engage the entire lab, with review and analysis of each month's deployment to help inform the next month's initiative.</li>
<li>Pairing students with one another — to build awareness of wellbeing as a social function, not just a personal goal, and to draw on people’s inclination to solve the problems of others differently than their own.</li>
</ul>
<p>“Wellbeing is a very hard problem that has yet to be solved by psychologists, psychiatrists, neuroscientists, biologists or other experts in the scientific community,” says Rosalind Picard, a professor of media arts and sciences and one of the three principal investigators on the initiative. “It’s time to bring MIT ingenuity to the challenge.”</p>
<p>“RWJF is working to build a culture of health in the U.S. where all people have opportunities to make healthy choices and lead healthy lifestyles. Technology has long shaped the patterns of everyday life, and it is these patterns — of how we work, eat, sleep, socialize, recreate and get from place to place — that largely determine our health,” says Stephen Downs, chief technology and information officer at RWJF. “We’re excited to see the Media Lab turn its creative talents and its significant influence to the challenge of developing technologies that will make these patterns of everyday life more healthy.”</p>
<p>Along with Picard, the other two principal investigators on the Advancing Wellness initiative are Pattie Maes, the Alex W. Dreyfoos Professor of Media Arts and Sciences, and Kevin Slavin, an assistant professor of media arts and sciences.</p>
<p>PhD student Karthik Dinakar, a Reid Hoffman Fellow at the Media Lab, will co-teach the two courses with the three principal investigators. Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology, will also create independent assessments through the year on the impact of this project.</p>
Funding, Grants, Health, Health sciences and technology, Media Lab, School of Architecture + Planning, Community

Manual control
http://newsoffice.mit.edu/2014/startup-oblong-industries-gesture-control-interface-0905
Oblong Industries brings gesture-control technology from Hollywood to corporate conference rooms.
Fri, 05 Sep 2014 10:15:00 -0400
Rob Matheson | MIT News Office
http://newsoffice.mit.edu/2014/startup-oblong-industries-gesture-control-interface-0905
<p>When you imagine the future of gesture-control interfaces, you might think of the popular science-fiction films “Minority Report” (2002) or “Iron Man” (2008). In those films, the protagonists use their hands or wireless gloves to seamlessly scroll through and manipulate visual data on a wall-sized, panoramic screen.</p>
<p>We’re not quite there yet. But the brain behind those Hollywood interfaces, MIT alumnus John Underkoffler ’88, SM ’91, PhD ’99 — who served as scientific advisor for both films —&nbsp;has been bringing a more practical version of that technology to conference rooms of Fortune 500 and other companies for the past year. &nbsp;</p>
<p>Underkoffler’s company, Oblong Industries, has developed a platform called g-speak, based on MIT research, and a collaborative-conferencing system called Mezzanine that allows multiple users to simultaneously share and control digital content across multiple screens, from any device, using gesture control.</p>
<p>Overall, the major benefit in such a system lies in boosting productivity during meetings, says Underkoffler, Oblong’s CEO. This is especially true for clients who tend to pool resources into brainstorming and whose meeting rooms may remain open all day, every day.</p>
<p>“If you can make those meetings synthetically productive —&nbsp;not just times for people to check in, produce status reports, or check email surreptitiously under the table —&nbsp;that can be an electrifying force for the enterprise,” he says.</p>
<p>Mezzanine surrounds a conference room with multiple screens, as well as the “brains” of the system (a small server) that controls and syncs everything. Several Wii-like wands, with six degrees of freedom, allow users to manipulate content — such as text, photos, videos, maps, charts, spreadsheets, and PDFs — depending on certain gestures they make with the wand.</p>
<p>That system is built on g-speak, a type of operating system — or a so-called “spatial operating environment” — used by developers to create their own programs that run like Mezzanine.</p>
<p>“G-speak programs run in a distributed way across multiple machines and allow concurrent interactions for multiple people,” Underkoffler says. “This shift in thinking — as if from single sequential notes to chords and harmonies — is powerful."</p>
<p>Oblong’s clients include Boeing, Saudi Aramco, SAP, General Electric, and IBM, as well as&nbsp;government agencies and academic institutions, such as Harvard University’s Graduate School of Design. Architects and real estate firms are also using the system for structural design.</p>
<p><strong>Putting pixels in the room</strong></p>
<p>G-speak has its roots in a 1999 MIT Media Lab project —&nbsp;co-invented by Underkoffler in Professor Hiroshi Ishii’s Tangible Media Group — called “Luminous Room,” which enabled all surfaces to hold data that could be manipulated with gestures. “It literally put pixels in the room with you,” Underkoffler says.</p>
<p>The group designed light bulbs, called “1/0 Bulbs,” that not only projected information, but also collected the information from a surface it projected onto. That meant the team could make any projected surface a veritable computer screen, and the data could interact with, and be controlled by, physical objects.</p>
<p>They also assigned pixels three-dimensional coordinates. Imagine, for example, if you sat down in a chair at a table, and tried to describe where the front, left corner of that table was located in physical space. “You’d say that corner is this far off the floor, this far to the right of my chair, and this much in front of me, among other things,” Underkoffler explains. “We started doing that with pixels.”</p>
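The idea of giving each pixel a position in physical space can be illustrated with a small sketch. The class and field names here are hypothetical, invented only to mirror Underkoffler's table-corner example; they are not from g-speak or the Luminous Room code.

```python
from dataclasses import dataclass

@dataclass
class Pixel3D:
    x: float  # meters to the right of a reference point
    y: float  # meters in front of it
    z: float  # meters above the floor

    def relative_to(self, origin: "Pixel3D") -> "Pixel3D":
        """Re-express this point in a frame centered on `origin`."""
        return Pixel3D(self.x - origin.x, self.y - origin.y, self.z - origin.z)

# The table's front-left corner, measured in room coordinates...
corner = Pixel3D(x=1.2, y=2.0, z=0.75)
# ...and the chair's position in the same room frame.
chair = Pixel3D(x=0.9, y=1.2, z=0.0)

# Roughly 0.3 m to the right of the chair, 0.8 m in front, 0.75 m off the floor.
print(corner.relative_to(chair))
```

Once pixels carry coordinates like these, projected graphics can cast accurate shadows and track physical objects, as the applications below describe.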
<p>One application for urban planners involved placing small building models onto a 1/0 Bulb projected table, “and the pixels surrounded the model,” Underkoffler says. This provided three-dimensional spatial information, from which the program casted accurate, digital shadows from the models onto the table. (Changing the time on a digital clock changed the direction of the shadows.)</p>
<p>In another application, the researchers used a glass vase to manipulate digital text and image boxes that were projected onto a whiteboard. The digital boxes were linked to the vase in a circle via digital “springs.” When the vase moved, all the graphics followed. When the vase rotated, the graphics bunched together and “self-stored” into the vase; when the vase rotated again, the graphics reappeared in their first form.</p>
<p>These initial concepts —&nbsp;using the whole room as a digital workplace —&nbsp;became the foundation for g-speak. “I really wanted to get the ideas out into the world in a form that everyone could use,” Underkoffler says. “Generally, that means commercial form, but the world of movies came calling first.”</p>
<p><strong>“The world’s largest focus group”</strong></p>
<p>Underkoffler was recruited as scientific advisor for Steven Spielberg’s “Minority Report” after meeting the film’s crew, who were searching for novel technology ideas at the Media Lab. Later, in 2003, Underkoffler reprised his behind-the-scenes gig for Ang Lee’s “Hulk,” and, in 2008, for Jon Favreau’s “Iron Man,” which both depicted similar technologies.</p>
<p>Seeing this technology on the big screen inspired Underkoffler to refine his MIT technology, launch Oblong in 2006, and build early g-speak prototypes — glove-based systems that eventually ended up with the company’s first customer, Boeing.</p>
<p>Having tens of millions of viewers seeing the technology on the big screen, however, offered a couple of surprising perks for Oblong, which today is headquartered in Los Angeles, with nine other offices and demo rooms in cities including Boston, New York, and London. “It might have been the world’s largest focus group,” Underkoffler says.</p>
<p>Those enthused by the technology, for instance, started getting in touch with Underkoffler to see if the technology was real. Additionally, being part of a big-screen production helped Underkoffler and Oblong better explain their own technology to clients, Underkoffler says. In such spectacular science-fiction films, technology competes for viewer attention and, yet, it needs to be simplified so viewers can understand it clearly.</p>
<p>“When you take technology from a lab like at MIT, and you need to show it in a film, the process of refining and simplifying those ideas so they’re instantly legible on screen is really close to the refinement you need to undertake if you’re turning that lab work into a product,” he says. “It was enormously valuable to us to strip away everything in the system that wasn’t necessary and leave a really compact core of user-interface ideas we have today.”</p>
<p>After years of writing custom projects for clients on g-speak, Oblong turned the most-requested features of these jobs — such as having cross-platform and multiple-user capabilities —&nbsp;into Mezzanine. “It was the first killer application we could write on top of g-speak,” he says. “Building a universal, shared-pixel workspace has enormous value no matter what your business is.”</p>
<p>Today, Oblong is shooting for greater ubiquity of its technology. But how far away are we from a consumer model of Mezzanine? It could take years, Underkoffler admits: “But we really hope to radically tilt the whole landscape of how we think about computers and user interface.”</p>
Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Media Lab, Tangible Media Group, School of Architecture + Planning, Computer science and technology, Gestural interfaces, Movies

Moving forward
http://newsoffice.mit.edu/2014/mit-student-niaja-farve-profile-0819
PhD student Niaja Farve combines research, entrepreneurship, outreach, and indefatigable drive.
Tue, 19 Aug 2014 00:00:03 -0400
Zach Wener-Fligner | MIT News Office
http://newsoffice.mit.edu/2014/mit-student-niaja-farve-profile-0819
<p>For young Niaja Farve, one thing was certain: She was going to college, whether she liked it or not.</p>
<p>She grew up in Gaithersburg, Md., with her mother, who made it clear that Farve would be the first in her family to pursue education past high school. “From the get-go, she was very serious about the whole college thing,” Farve says, “and very serious about going to the best place possible that we could afford.”</p>
<p>Her mother made sure Farve kept her grades up and stayed attuned to scholarship opportunities. Birthday money went straight into a college fund. Farve showed an aptitude for science and math early on; when, in high school, a teacher mentioned offhand that, “If you’re good at math and science, you should do engineering,” the discipline piqued her interest.</p>
<p>“Engineering was interesting because it was application,” Farve says. “I felt like scientists were the people who discovered new things, and engineers applied them to things people could actually use. I’ve always been interested in things that could actually impact everyday people.”</p>
<p>Farve loved music and initially hoped to be an audio engineer. But her mother dissuaded her: If she majored in electrical engineering, she could still go on to specialize in audio if she wanted.</p>
<p>During Farve’s college search, one school loomed large: She wanted to go to MIT — “because it’s MIT” — but didn’t get in. She was accepted at Carnegie Mellon University and Cornell University, but ended up picking Morgan State University, a historically black institution in Baltimore that offered a large scholarship.</p>
<p>Farve wasn’t discouraged by MIT’s initial rejection. On the contrary, her resolve to prove herself was strengthened: “I’ll bust my butt and learn as much as possible,” she told herself. “And then hopefully I can get into MIT for grad school.”</p>
<p>“That was the ultimate goal,” she says.</p>
<p><strong>Coming full circle</strong></p>
<p>Four years later, during her senior year at Morgan State, Farve was marooned in her dorm room during a snowstorm when an email message appeared in her inbox: She had been accepted into the master’s program in electrical engineering and computer science at MIT.</p>
<p>But once Farve matriculated, MIT brought challenges. Adjusting to the academic intensity was difficult. Graduate school, with its focus on research rather than classes, was unfamiliar. And sometimes, in her classes, unconscious biases seemed to emerge: In forming groups during class, for example, she would ask herself, “Why am I the only one without a partner?” Even once she finished her master’s degree and decided to continue on for a PhD, the insecurity persisted.</p>
<p>“The first three years was the ‘impostor syndrome,’” Farve says, referring to the attribution of success to luck, chance, or even a mistake by an admissions committee. “Telling myself, ‘OK, nobody made a mistake — if they did, they would have figured it out by now.’”</p>
<p>But those days are now in the past. “Recently, it’s been trying to do more than just OK,” she says. “It’s been trying to excel here.”</p>
<p>Part of what has helped Farve gain confidence at MIT is her involvement in community activities that provide support. She serves as vice president of the Black Graduate Student Association, which offers study breaks and cultural learning activities, and has served as president of the Academy of Minority Engineers, whose weekly meetings provide an opportunity to talk about goals and impediments, and to brag and complain in a nonjudgmental environment.</p>
<p><strong>Creating real value through research</strong></p>
<p>Farve has always taken a practical, entrepreneurial approach to her work. At Morgan State, stoked by her passion for good music, she started her own record label, Forte. Engineering courses dominated her time, and she didn’t end up pursuing the label, but creating a legal entity taught her a lot about the nitty-gritty logistics of starting a business.</p>
<p>Farve’s doctoral research has also lent itself to entrepreneurship. When she began, she thought that she would work to create a software platform for eye-tracking apps on mobile devices. But during the work, she found herself drawn into the creation of the apps themselves, and developed a niche in apps with a sports, health, and wellness focus.</p>
<p>With the support of her advisor, Pattie Maes, the Alexander W. Dreyfoos Professor of Media Technology in the MIT Media Lab, Farve shifted the focus of her research to apps. So far, she’s worked on a motley assortment: There’s “Move Your Glass,” which recognizes physical activity and encourages healthier choices; “Stat,” which assists basketball coaches by providing real-time recommendations based on player-specific data; and “Smile Catcher,” which turns making others smile into a game.</p>
<p>She’s also pursuing entrepreneurial activities outside of her research. Last year, Farve took MIT’s well-known course <a href="http://newsoffice.mit.edu/2013/engineering-course-demystifies-entrepreneurship-0528">6.933 (The Founder’s Journey)</a>, where she and a team created a nonprofit called i-Trek (“I Turn Research Into Empowerment and Knowledge”). The organization provides research funding and mentorship for students in science, technology, engineering, and mathematics, particularly those at smaller colleges who might not otherwise have such opportunities. I-Trek received a $10,000 educational challenge grant through the course.</p>
<p>“I think it definitely would have been a big help if something like this existed for me,” she says modestly. “I kept lucking out and finding the right people and being in the right place at the right time. So it kind of dawned on me that other people probably aren’t as lucky.”</p>
Research, School of Engineering, Electrical Engineering & Computer Science (eecs), Media Lab, Students, Entrepreneurship, Diversity, STEM education

Our connection to content
http://newsoffice.mit.edu/2014/consumer-brain-connection-with-media-0819
Using neuroscience tools, Innerscope Research explores the connections between consumers and media. Tue, 19 Aug 2014 00:00:00 -0400Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2014/consumer-brain-connection-with-media-0819<p>It’s often said that humans are wired to connect: The neural wiring that helps us read the emotions and actions of other people may be a foundation for human empathy.</p>
<p>But for the past eight years, MIT Media Lab spinout Innerscope Research has been using neuroscience technologies that gauge subconscious emotions by monitoring brain and body activity to show just how powerfully we also connect to media and marketing communications.</p>
<p>“We are wired to connect, but that connection system is not very discriminating. So while we connect with each other in powerful ways, we also connect with characters on screens and in books, and, we found, we also connect with brands, products, and services,” says Innerscope’s chief science officer, Carl Marci, a social neuroscientist and former Media Lab researcher.</p>
<p>With this core philosophy, Innerscope —&nbsp;co-founded at MIT by Marci and Brian Levine MBA ’05&nbsp;—&nbsp;aims to offer market research that’s more advanced than traditional methods, such as surveys and focus groups, to help content-makers shape authentic relationships with their target consumers.</p>
<p>“There’s so much out there, it’s hard to make something people will notice or connect to,” Levine says. “In a way, we aim to be the good matchmaker between content and people.”</p>
<p>So far, it’s drawn some attention. The company has conducted hundreds of studies and more than 100,000 content evaluations with its host of Fortune 500 clients, which include Campbell’s Soup, Yahoo, and Fox Television, among others.</p>
<p>And Innerscope’s studies are beginning to provide valuable insights into the way consumers connect with media and advertising. Take, for instance, its recent project to measure audience engagement with television ads that aired during the Super Bowl.</p>
<p>Innerscope first used biometric sensors to capture fluctuations in heart rate, skin conductance, breathing, and motion among 80 participants as they watched select ads, sorting the ads into “winning” and “losing” commercials (in terms of emotional responses). Then its collaborators at Temple University’s Center for Neural Decision Making used functional magnetic resonance imaging (fMRI) brain scans to further measure engagement.</p>
<p>Ads that performed well elicited increased neural activity in the amygdala (which drives emotions), superior temporal gyrus (sensory processing), hippocampus (memory formation),&nbsp;and lateral prefrontal cortex&nbsp;(behavioral control).</p>
<p>“But what was really interesting was the high levels of activity in the&nbsp;area known as the precuneus —&nbsp;involved in feelings of self-consciousness — where it is believed that we keep our identity. The really powerful ads generated a heightened sense of personal identification,” Marci says.</p>
<p>Using neuroscience to understand marketing communications and, ultimately, consumers’ purchasing decisions is still at a very early stage, Marci admits —&nbsp;but the Super Bowl study and others like it represent real progress. “We’re right at the cusp of coherent, neuroscience-informed measures of how ad engagement works,” he says.</p>
<p><strong>Capturing “biometric synchrony”</strong></p>
<p>Innerscope’s arsenal consists of 10 tools: Electroencephalography and fMRI technologies measure brain waves and structures. Biometric tools — such as wristbands and attachable sensors — track heart rate, skin conductance, motion, and respiration, which reflect emotional processing. And then there’s eye-tracking, voice-analysis, and facial-coding software, as well as other tests to complement these measures.</p>
<p>Such technologies were used for market research long before the rise of Innerscope. But, starting at MIT, Marci and Levine began developing novel algorithms, informed by neuroscience, that find trends among audiences pointing to exact moments when an audience is engaged together — in other words, in “biometric synchrony.”</p>
<p>Traditional algorithms for such market research would average the responses of entire audiences, Levine explains. “What you get is an overall level of arousal —&nbsp;basically, did they love or hate the content?” he says. “But how is that emotion going to be useful? That’s where the hole was.”</p>
<p>Innerscope’s algorithms tease out real-time detail from individual reactions — comprising anywhere from 500 million to 1 billion data points —&nbsp;to locate instances when groups’ responses (such as surprise, excitement, or disappointment) collectively match.</p>
<p>As an example, Levine references an early test conducted using an episode of the television show “Lost,” where a group of strangers are stranded on a tropical island.</p>
<p>Levine and Marci attached biometric sensors to six separate groups of five participants. At the long-anticipated moment when the show’s “monster” is finally revealed, nearly everyone held their breath for about 10 to 15 seconds.</p>
<p>“What our algorithms are looking for is this group response. The more similar the group response, the more likely the stimulus is creating that response,” Levine explains. “That allows us to understand if people are paying attention and if they’re going on a journey together.”</p>
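<p>Innerscope’s actual algorithms are proprietary, but the core idea of scoring how tightly a group’s physiological responses move together can be sketched in a few lines. Everything below, including the simulated “monster reveal,” is purely illustrative:</p>

```python
import numpy as np

def synchrony_trace(signals, window=30):
    """Rough sketch of a 'biometric synchrony' score: for each window
    of samples, average the pairwise correlations between participants'
    arousal signals. High values mean the group responds together.

    signals: array of shape (n_participants, n_samples)
    Returns one synchrony score per window.
    """
    n, t = signals.shape
    scores = []
    for start in range(0, t - window + 1, window):
        chunk = signals[:, start:start + window]
        corr = np.corrcoef(chunk)              # n x n correlation matrix
        upper = corr[np.triu_indices(n, k=1)]  # pairwise entries only
        scores.append(float(np.mean(upper)))
    return np.array(scores)

# Simulated demo: 5 viewers with independent noise, plus a shared
# "monster reveal" reaction in the second window.
rng = np.random.default_rng(0)
sig = rng.normal(size=(5, 60))
sig[:, 30:45] += np.linspace(0, 3, 15)  # shared ramp: everyone reacts
scores = synchrony_trace(sig, window=30)
print(scores)  # second window scores higher than the first
```

<p>The shared ramp raises every pairwise correlation in the second window, which is exactly the signature a synchrony measure rewards.</p>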
<p><strong>Getting on the map</strong></p>
<p>Before MIT, Marci was a neuroscientist studying empathy, using biometric sensors and other means to explore how empathy between patient and doctor can improve patient health.</p>
<p>“I was lugging around boxes of equipment, with wires coming out and videotaping patients and doctors. Then someone said, ‘Hey, why don’t you just go to the MIT Media Lab,’” Marci says. “And I realized it had the resources I needed.”</p>
<p>At the Media Lab, Marci met behavioral analytics expert and collaborator Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who helped him set up Bluetooth sensors around Massachusetts General Hospital to track emotions and empathy between doctors and patients with depression.&nbsp; &nbsp;</p>
<p>During this time, Levine, a former Web developer, had enrolled at MIT, splitting his time between the MIT Sloan School of Management and the Media Lab. “I wanted to merge an idea to understand customers better with being able to prototype anything,” he says.</p>
<p>After meeting Marci through a digital anthropology class, Levine proposed that they use this emotion-tracking technology to measure the connections of audiences to media. Using prototype sensor vests equipped with heart-rate monitors, stretch receptors, accelerometers, and skin-conductivity sensors, they trialed the technology with students around the Media Lab.</p>
<p>All the while, Levine pieced together Innerscope’s business plan in his classes at MIT Sloan, with help from other students and professors. “The business-strategy classes were phenomenal for that,” Levine says. “Right after finishing MIT, I had a complete and detailed business plan in my hands.”</p>
<p>Innerscope launched in 2006. But a 2008 study really accelerated the company’s growth. “NBC Universal had a big concern at the time: DVR,” Marci says. “Were people who were watching the prerecorded program still remembering the ads, even though they were clearly skipping them?”</p>
<p>Innerscope compared facial cues and biometrics from people who fast-forwarded ads against those who didn’t. The results were unexpected: While fast-forwarding, people stared at the screen blankly, but their eyes actually caught relevant brands, characters, and text. Because they didn’t want to miss their show, while fast-forwarding, they also had a heightened sense of engagement, signaled by leaning forward and staring fixedly.</p>
<p>“What we concluded was that people don’t skip ads,” Marci says. “They’re processing them in a different way, but they’re still processing those ads. That was one of those insights you couldn’t get from a survey. That put us on the map.”</p>
<p>Today, Innerscope is looking to expand. One project is bringing kiosks to malls and movie theaters, where the company recruits passersby for fast and cost-effective results. (Wristbands monitor emotional response, while cameras capture facial cues and eye motion.) The company is also aiming to try applications in mobile devices, wearables, and at-home sensors.</p>
<p>“We’re rewiring a generation of Americans in novel ways and moving toward a world of ubiquitous sensing,” Marci says. “We’ll need data science and algorithms and experts that can make sense of all that data.”</p>
Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Media Lab, MIT Sloan School of Management, School of Architecture + Planning, Affective computing, Computer science and technology, Algorithms, Wearable sensors, Research, Media

Vision-correcting displays
http://newsoffice.mit.edu/2014/new-display-technology-automatically-corrects-for-vision-defects-0731
Technology could lead to e-readers, smartphones, and displays that let users dispense with glasses.Thu, 31 Jul 2014 00:00:02 -0400Larry Hardesty | MIT News Officehttp://newsoffice.mit.edu/2014/new-display-technology-automatically-corrects-for-vision-defects-0731<p>Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects — no glasses (or contact lenses) required.</p>
<p>The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on, or electronic readers that eliminate the need for reading glasses, among other applications.</p>
<p>“The first spectacles were invented in the 13th century,” says Gordon Wetzstein, a research scientist at the Media Lab and one of the display’s co-creators. “Today, of course, we have contact lenses and surgery, but it’s all invasive in the sense that you either have to put something in your eye, wear something on your head, or undergo surgery. We have a different solution that basically puts the glasses on the display, rather than on your head. It will not be able to help you see the rest of the world more sharply, but today, we spend a huge portion of our time interacting with the digital world.”</p>
<p>Wetzstein and his colleagues describe their display in a paper they’re presenting in August at Siggraph, the premier graphics conference. Joining him on the paper are Ramesh Raskar, the NEC Career Development Professor of Media Arts and Sciences and director of the Media Lab’s Camera Culture group, and Berkeley’s Fu-Chung Huang and Brian Barsky.</p>
<p><strong>Knowing the angles</strong></p>
<p>The display is a variation on a glasses-free 3-D technology also developed by the Camera Culture group. But where the 3-D display projects slightly different images to the viewer’s left and right eyes, the vision-correcting display projects slightly different images to different parts of the viewer’s pupil.</p>
<p>A vision defect is a mismatch between the eye’s focal distance — the range at which it can actually bring objects into focus — and the distance of the object it’s trying to focus on. Essentially, the new display simulates an image at the correct focal distance — somewhere between the display and the viewer’s eye.</p>
<p>The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display. The angle at which light should seem to arrive from the simulated image is sharper than the angle at which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left, and the pixels projecting light to the left side of the pupil have to be offset to the right.</p>
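<p>The offsets follow from similar triangles: a ray aimed from the simulated pixel at a given point on the pupil crosses the physical display plane at a position that depends on both. A minimal sketch with hypothetical distances (the virtual image sits between the display and the eye, as described above):</p>

```python
def screen_position(virtual_x, pupil_y, d_screen, d_virtual):
    """Where a ray from a simulated (virtual) pixel, passing through a
    given point on the pupil, crosses the physical display plane.

    virtual_x: lateral position of the simulated pixel (mm)
    pupil_y:   lateral position of the sampled pupil point (mm)
    d_screen:  eye-to-display distance (mm)
    d_virtual: eye-to-virtual-image distance (mm)
    Positions are measured from the optical axis.
    """
    # Similar triangles: the ray runs from pupil_y at depth 0 toward
    # virtual_x at depth d_virtual; sample it at depth d_screen.
    return pupil_y + (virtual_x - pupil_y) * (d_screen / d_virtual)

# One on-axis virtual pixel (virtual_x = 0), display 300 mm from the
# eye, image simulated 200 mm away. Light meant for the right edge of
# a 4 mm pupil must come from a screen pixel offset to the left, and
# vice versa; one virtual pixel therefore needs multiple screen pixels.
left  = screen_position(0.0, -2.0, 300.0, 200.0)  # left pupil edge
right = screen_position(0.0, +2.0, 300.0, 200.0)  # right pupil edge
print(left, right)  # -> 1.0 -1.0
```

<p>The two rays cross between the eye and the screen, which is why the pixels serving each half of the pupil end up offset in opposite directions, as the paragraph above describes.</p>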
<p>The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution. But this problem turns out to be very similar to a problem that Wetzstein, Raskar, and colleagues solved in their 3-D displays, which also had to project different images at different angles.</p>
<p>The researchers discovered that there is, in fact, a great deal of redundancy between the images required to simulate different viewing angles. The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles. The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction, so the new display incurs only a modest loss in resolution.</p>
<p>In the researchers’ prototype, however, display pixels do have to be masked from the parts of the pupil for which they’re not intended. That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.</p>
<p><strong>Multitasking</strong></p>
<p>But early versions of the 3-D display faced the same problem, and the MIT researchers solved it by instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives while letting much more light pass through. Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.</p>
<p>Indeed, he says, the same screens could both display 3-D content and correct for vision defects, all glasses-free. They could also reproduce another Camera Culture project, which <a href="http://newsoffice.mit.edu/2010/itw-eyes">diagnoses</a> vision defects. So the same device could, in effect, determine the user’s prescription and automatically correct for it.</p>
<p>“Most people in mainstream optics would have said, ‘Oh, this is impossible,’” says Chris Dainty, a professor at the University College London Institute of Ophthalmology and Moorfields Eye Hospital. “But Ramesh’s group has the art of making the apparently impossible possible.”</p>
<div class="cms-placeholder-content-video"></div>
<p>“The key thing is they seem to have cracked the contrast problem,” Dainty adds. “In image-processing schemes with incoherent light — normal light that we have around us, nonlaser light — you’re always dealing with intensities. And intensity is always positive (or zero). Because of that, you’re always adding positive things, so the background just gets bigger and bigger and bigger. And the signal-to-background, which is contrast, therefore gets smaller as you do more processing. It’s a fundamental problem.”</p>
<p>Dainty believes that the most intriguing application of the technology is in dashboard displays. “Most people over 50, 55, quite probably see in the distance fine, but can’t read a book,” Dainty says. “In the car, you can wear varifocals, but varifocals distort the geometry of the outside world, so if you don’t wear them all the time, you have a bit of a problem. There, [the MIT and Berkeley researchers] have a great solution.”</p>
Display screens, Light, Media Lab, 3-D, Research, School of Architecture + Planning

A market for emotions
http://newsoffice.mit.edu/2014/with-emotion-tracking-software-affectiva-attracts-clients-mood-aware-internet-0731
With emotion-tracking software, Affectiva attracts big-name clients, aims for “mood-aware” Internet. Thu, 31 Jul 2014 00:00:00 -0400Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2014/with-emotion-tracking-software-affectiva-attracts-clients-mood-aware-internet-0731<p>Emotions can be powerful for individuals. But they’re also powerful tools for content creators, such as advertisers, marketers, and filmmakers. By tracking people’s negative or positive feelings toward ads — via traditional surveys and focus groups — agencies can tweak and tailor their content to better satisfy consumers.</p>
<p>Increasingly, over the past several years, companies developing emotion-recognition technology — which gauges subconscious emotions by analyzing facial cues&nbsp;— have aided agencies on that front.</p>
<p>Prominent among these companies is MIT spinout Affectiva, whose advanced emotion-tracking software, called Affdex, is based on years of MIT Media Lab research. Today, the startup is attracting some big-name clients, including Kellogg and Unilever.</p>
<p>Backed by more than $20 million in funding, the startup — which has amassed a vast facial-expression database — is also setting its sights on a “mood-aware” Internet that reads a user’s emotions to shape content. This could lead, for example, to more relevant online ads, as well as enhanced gaming and online-learning experiences.</p>
<p>“The broad goal is to become the emotion layer of the Internet,” says Affectiva co-founder Rana el Kaliouby, a former MIT postdoc who invented the technology. “We believe there’s an opportunity to sit between any human-to-computer, or human-to-human interaction point, capture data, and use it to enrich the user experience.”</p>
<p><strong>Ads and apps</strong></p>
<p>In using Affdex, Affectiva recruits participants to watch advertisements in front of the webcams on their computers, tablets, and smartphones. Machine-learning algorithms track facial cues, focusing prominently on the eyes, eyebrows, and mouth. A smile, for instance, would mean the corners of the lips curl upward and outward, teeth flash, and the skin around the eyes wrinkles.</p>
<p>Affdex then infers the viewer’s emotions —&nbsp;such as enjoyment, surprise, anger, disgust, or confusion — and pushes the data to a cloud server, where Affdex aggregates the results from all the facial videos (sometimes hundreds), which it publishes on a dashboard.</p>
<p>But determining whether a person “likes” or “dislikes” an advertisement takes advanced analytics. Importantly, the software looks for evidence that viewers are “hooked” in the first third of an advertisement, noting increased attention and focus, signaled in part by less fidgeting and fixated gazes.</p>
<p>Smiles can indicate that a commercial designed to be humorous is, indeed, funny. But if a smirk — subtle, asymmetric lip curls, separate from smiles —&nbsp;comes at a moment when information appears on the screen, it may indicate skepticism or doubt.&nbsp;</p>
<p>A furrowed brow may signal confusion or cognitive overload. “Sometimes that’s by design: You want people to be confused, before you resolve the problem. But if the furrowed brow persists throughout the ad, and is not resolved by the end, that’s a red flag,” el Kaliouby says.</p>
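<p>The kind of aggregation described above, pooling per-frame signals across viewers and checking whether the first third of an ad holds attention, can be illustrated with a toy calculation. The data and function below are hypothetical, not Affdex’s actual analytics:</p>

```python
import numpy as np

def hook_score(attention, n_thirds=3):
    """Toy version of the 'hooking' check: compare mean viewer
    attention in the first third of an ad against the rest.

    attention: array of shape (n_viewers, n_frames)
    Returns (first_third_mean, rest_mean).
    """
    n_frames = attention.shape[1]
    cut = n_frames // n_thirds
    trace = attention.mean(axis=0)  # aggregate across all viewers
    return float(trace[:cut].mean()), float(trace[cut:].mean())

# Hypothetical per-frame attention scores for 4 viewers of a 90-frame ad
rng = np.random.default_rng(1)
att = rng.normal(0.5, 0.1, size=(4, 90))
att[:, :30] += 0.3               # strong opening: viewers are hooked
att = np.clip(att, 0, 1)         # keep scores in [0, 1]
first, rest = hook_score(att)
print(first > rest)  # True: this ad hooks viewers early
```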
<p>Affectiva has been working with advertisers to optimize their marketing content for a couple of years. In a recent case study with Mars, for example, Affectiva found that the client’s chocolate ads elicited the highest emotional engagement, while its food ads elicited the least, helping predict short-term sales of these products.</p>
<p>In that study, some 1,500 participants from the United States and Europe viewed more than 200 ads to track their emotional responses, which were tied to the sales volume for different product lines. These results were combined with a survey to increase the accuracy of predicting sales volume.</p>
<p>“Clients usually take these responses and edit the ad, maybe make it shorter, maybe change around the brand reveal,” el Kaliouby says. “With Affdex, you see on a moment-by-moment basis who’s really engaged with the ad, and what’s working and what’s not.”</p>
<p>This year, the startup released a developer kit for mobile app designers. Still in their early stages, some of the apps are designed for entertainment, such as people submitting “selfies” to analyze their moods and sharing them across social media.</p>
<p>Still others could help children with autism better interact, el Kaliouby says — such as games that make people match facial cues with emotions. “This would focus on pragmatic training, helping these kids understand the meaning of different facial expressions and how to express their own,” she says.</p>
<p><strong>Entrenched in academia</strong></p>
<p>While several companies are commercializing similar technology, Affectiva is unusual in that it is “entrenched in academia,” el Kaliouby says: Years of data-gathering have “trained” the algorithms to be very discerning. &nbsp;</p>
<p>As a PhD student at Cambridge University in the early 2000s, el Kaliouby began developing facial-coding software. She was inspired, in part, by her future collaborator and Affectiva co-founder, Rosalind Picard, an MIT professor who pioneered the field of affective computing —&nbsp;where machines can recognize, interpret, process, and simulate human affects.</p>
<p>Back then, the data that el Kaliouby had access to consisted of about 100 facial expressions gathered from photos — and those 100 expressions were fairly prototypical. “To recognize surprise, for example, we had this humongous surprise expression. This meant that if you showed the computer an expression of a person that’s somewhat surprised or subtly shocked, it wouldn’t recognize it,” el Kaliouby says.</p>
<p>In 2006, el Kaliouby came to the Media Lab to work with Picard to expand what the technology can do. Together, they quickly started applying the facial-coding technology to autism research and training the algorithms by collecting vast stores of data.</p>
<p>“Coming from a traditional research background, the Media Lab was completely different,” el Kaliouby says. “You prototype, prototype, prototype, and fail fast. It’s very startup-minded.”</p>
<p>Among their first prototypes was a Google Glass-type invention with a camera that could read facial expressions and provide real-time feedback to the wearer via a Bluetooth headset. For instance, auditory cues would provide feedback, such as, “This person is bored” or, “This person is confused.”</p>
<p>However, inspired by increasing industry attention — and with a big push by Frank Moss, then the Media Lab’s director — they soon ditched the wearable prototype to build a cloud-based version of the software, founding Affectiva in 2009.</p>
<p>Early support from a group of about eight mentors at MIT’s Venture Mentoring Service helped the Affectiva team connect to industry and shape its pitch —&nbsp;by focusing on the value proposition,&nbsp;not the technology. “We learned to build a product story instead of a technology story — that was key,” el Kaliouby says.</p>
<p>To date, Affectiva has amassed a dataset of about 1.7 million facial expressions, roughly 2 billion data points, from people of all races, across 70 different countries — the largest facial-coding dataset in the world, el Kaliouby says — training its software’s algorithms to discern expressions from all different face types and skin colors. It can also track faces that are moving, in all types of lighting, and can avoid tracking any other movement on screen.</p>
<p><strong>A “mood-aware” Internet</strong></p>
<p>One of Affectiva’s long-term goals is to usher in a “mood-aware” Internet to improve users’ experiences. Imagine an Internet that’s like walking into a large outlet store with sales representatives, el Kaliouby says.</p>
<p>“At the store, the salespeople are reading your physical cues in real time, and assessing whether to approach you or not, and how to approach you,” she says. “Websites and connected devices of the future should be like this, very mood-aware.”</p>
<p>Sometime in the future, this could mean computer games that adapt in difficulty and other game variables, based on user reaction. But more immediately, it could work for online learning.</p>
<p>Already, Affectiva has conducted pilot work for online learning, where it captured data on facial engagement to predict learning outcomes. For this, the software indicates, for instance, if a student is bored, frustrated, or focused — which is especially valuable for prerecorded lectures, el Kaliouby says.</p>
<p>“To be able to capture that data, in real time, means educators can adapt that learning experience and change the content to better engage students — making it, say, more or less difficult — and change feedback to maximize learning outcomes,” el Kaliouby says. “That’s one application we’re really excited about.”</p>
Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Affective computing, Computer science and technology, Algorithms, Apps, Autism, Media Lab, School of Architecture + Planning

ScratchJr: Coding for kindergarten
http://newsoffice.mit.edu/2014/scratchjr-coding-kindergarten
With a new app, young children learn important skills as they program stories and games.Wed, 30 Jul 2014 09:35:00 -0400Alexandra Kahn | MIT Media Labhttp://newsoffice.mit.edu/2014/scratchjr-coding-kindergarten<p>Can children learn to code at the same age they’re learning to tie their shoes?</p>
<p>That’s the idea behind ScratchJr, a free iPad app released this week by researchers at the MIT Media Lab, Tufts University, and Playful Invention Company (PICO).</p>
<p>With ScratchJr (<a href="http://scratchjr.org" target="_blank">scratchjr.org</a>), children ages five to seven can program their own interactive stories and games. In the process, they learn how to create and express themselves with the computer, not just interact with it.</p>
<p>"As young children code with ScratchJr, they develop design and problem-solving skills that are foundational for later academic success," said Marina Umaschi Bers, professor in the Eliot-Pearson Department of Child Study and Human Development at Tufts, and director of the Tufts’ Development Technologies research group, which co-developed ScratchJr. "And by using math and language in a meaningful context, they develop early-childhood numeracy and literacy."</p>
<div class="cms-placeholder-content-video"></div>
<p>ScratchJr was inspired by the popular <a href="http://scratch.mit.edu" target="_blank">Scratch programming language</a>, developed by the MIT Media Lab’s Lifelong Kindergarten research group and used by millions of young people (ages eight and up) around the world. The ScratchJr team redesigned the interface and programming language to match young children’s cognitive, personal, social, and emotional development. Even children who have not yet learned to read can create projects with ScratchJr.</p>
<p>"Coding is the new literacy," said Mitchel Resnick, MIT Professor of Learning Research and head of the Media Lab’s Lifelong Kindergarten group. "Just as writing helps you organize your thinking and express your ideas, the same is true for coding. In the past, coding was seen as too difficult for most people. But we think coding should be for everyone, just like writing."</p>
<p>To program in ScratchJr, children snap together graphical blocks to make characters move, jump, dance, and sing. Children can modify characters in the paint editor, add their own voices and sounds, even insert photos of themselves — then use the programming blocks to make their characters come to life.</p>
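<p>Conceptually, a snapped-together ScratchJr script is just an ordered sequence of blocks applied to a character, one after another. A toy model in Python (purely illustrative; ScratchJr itself is a visual iPad app, and these block names are invented):</p>

```python
# Toy model of block programming: a "script" is an ordered list of
# block functions, each transforming the character's state.
def move(state):
    state["x"] += 1
    return state

def jump(state):
    state["y"] += 2
    return state

def say_hi(state):
    state["said"] = "hi"
    return state

def run(blocks, state):
    for block in blocks:  # execute blocks left to right, as snapped
        state = block(state)
    return state

cat = run([move, move, jump, say_hi], {"x": 0, "y": 0, "said": ""})
print(cat)  # {'x': 2, 'y': 2, 'said': 'hi'}
```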
<p>The teams at Tufts, the MIT Media Lab, and PICO collaborated on the design, development, and evaluation of the ScratchJr software. Now that the iPad app is available, the teams will turn their attention to developing versions for other platforms (such as Android), adding new features for sharing ScratchJr projects, and developing curriculum and support materials for teachers and parents.</p>
<p>The ScratchJr project received funding from the National Science Foundation (DRL-1118664), Code-to-Learn Foundation, LEGO Foundation, British Telecommunications, and a successful Kickstarter campaign.</p>

Media Lab, STEM education, Scratch, Apps, Education, teaching, academics, Learning, Programming, ScratchJr, School of Architecture + Planning

Mental-health monitoring goes mobile
http://newsoffice.mit.edu/2014/mental-health-monitoring-goes-mobile-0716
Startup Ginger.io analyzes smartphone data to remotely predict when patients with mental illnesses are symptomatic. Wed, 16 Jul 2014 00:00:02 -0400Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2014/mental-health-monitoring-goes-mobile-0716<p>Behavioral health analytics startup Ginger.io sees smartphones as “automated diaries” containing valuable insight into the mental well-being of people with mental illnesses.</p>
<p>Smartphones produce significant behavioral data —&nbsp;such as location, calling and texting records, and app usage —&nbsp;that map out a user’s daily patterns.&nbsp;Finding significant deviations in these patterns may signal that something’s amiss, says Ginger.io CEO and co-founder Anmol Madan SM ’05, PhD ’11.&nbsp;</p>
<p>“If someone is depressed, for instance, they isolate themselves, have a hard time getting up to go to school or work, they’re lethargic and don’t like communicating with others the way they typically do,” Madan explains. “Turns out you see those same features change in their mobile-phone sensor data — in their movement and their interactions with others.”</p>
<p>That’s the concept driving Ginger.io’s recently released commercial app, based on years of MIT Media Lab research, which is becoming increasingly popular in the health care industry. By passively analyzing mobile data, the app can detect if a patient with mental illness —&nbsp;such as depression, anxiety and bipolar disorders, or schizophrenia — is acting symptomatic.</p>
<p>Symptoms may include lethargy (decreased movement captured by motion sensors) or infrequent texts (captured by the message log). If the app detects an unusual pattern, it sends text messages to both the patient and his or her health care provider, who can keep tabs and intervene if necessary.</p>
<p>Currently, the app is being used by thousands of patients at more than 25 health care institutions and academic centers across the United States, driving several thousand alerts in the past few months.</p>
<p>It has also been used in research. At the University of California at San Francisco, for instance, the app is being used to look at the role behavioral data plays in heart disease. And with Forsyth Medical Center in North Carolina, Ginger.io is studying how data can help discern behavioral differences in diabetes patients. If a diabetic patient becomes depressed, for example, she may stop taking medication — something called “noncompliance” — which could lead to costly visits to the doctor or emergency room. But with the app, a nurse could call and remind the patient to continue taking medication.</p>
<p>In a recent trial at the University of California at Davis, the app was used as a low-cost method of monitoring youth with psychosis. “Early intervention in psychosis is crucial,” Madan says, “but it requires expensive, intensive patient assessment and monitoring.”</p>
<p>Ginger.io, on the other hand, can be used to collect data and, potentially, prompt early intervention of symptomatic patients to prevent relapses, improving outcomes and reducing costs. The data provided by Ginger.io in this study was used to model daily fluctuations in patient symptoms, and results indicated that Ginger.io is a feasible platform for monitoring those with early psychosis.</p>
<p><strong>Catching deviations </strong></p>
<p>To use the Ginger.io app, patients fill out a brief survey about their conditions, treatment, and health care provider. The app then begins passively collecting millions of interaction and location data points. Motion data is captured by a phone’s accelerometers, and GPS pinpoints the locations a person visits. The app also logs the duration and frequency of phone calls and texting patterns — though it ignores information about who a patient contacts.</p>
<p>For a few days, the app records a person’s normal patterns. After that, algorithms look for significant deviations; if there are any, the app spells them out on the screen. A text may read, for instance, “On Wednesday, you spoke with two fewer people” (signaling isolation), or “You traveled 50 percent less on Thursday” (signaling lethargy). &nbsp;</p>
<p>If the algorithms detect enough deviations to determine that “the patient is behaving in a way that’s inconsistent with usual,” Madan says, it alerts the health care provider via Ginger.io’s Web platform. The provider may see a text readout explaining, for instance, that the patient is increasingly acting isolated or lethargic. Or they may see a green box next to the patient’s name turn to red, signaling a need for intervention.</p>
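Ginger.io’s actual models are proprietary, but the approach the article describes — learn a per-user baseline over a few days, then flag days that deviate sharply — can be sketched with simple z-scores. The metric names, data values, and threshold below are invented for illustration:

```python
from statistics import mean, stdev

def find_deviations(baseline, today, threshold=2.0):
    """Flag metrics whose value today deviates sharply from the baseline.

    `baseline` maps a metric name to a list of daily values from the
    observation period; `today` maps a metric name to today's value.
    Returns the metrics whose z-score magnitude exceeds `threshold`.
    """
    flagged = {}
    for metric, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # no day-to-day variation to compare against
        z = (today[metric] - mu) / sigma
        if abs(z) > threshold:
            flagged[metric] = round(z, 2)
    return flagged

# A week of "normal" behavior, then a markedly quieter day:
baseline = {
    "km_traveled": [12, 10, 14, 11, 13, 12, 10],
    "texts_sent":  [25, 30, 22, 28, 26, 24, 27],
}
today = {"km_traveled": 2, "texts_sent": 25}
print(find_deviations(baseline, today))  # only km_traveled is flagged
```

In a real system the flagged result, not the raw sensor log, is what would drive the text alert to the patient and the red/green status on the provider’s dashboard.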
<p>“In most of those cases, the institutions have already set up care-management programs and teams responsible for keeping people healthy, so when they get the alert, they call and see what’s really going on,” Madan says. “Then, they make that decision of having the patient come in to see a doctor or handle the issue over the phone.”</p>
<p>A major aim of Ginger.io, Madan says, is to use such “big data” analytics to move health care toward preventative measures —&nbsp;intervening before patients end up in emergency rooms, running up medical bills. “Acting on that information while it’s still the right time, when patients can correct their behavior, could certainly change the way we deliver care,” Madan says.</p>
<p><strong>From the lab to “the trenches”</strong></p>
<p>The core technology for Ginger.io is based on research by Madan; Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who cofounded the company and now serves as an advisor; and several colleagues in the Human Dynamics Lab. They spent years experimenting with “reality mining”:&nbsp;applying algorithms to mobile data to see how people move, behave, and — potentially — spread disease. Shifts in how people use their phones and where they travel, they found, can reflect the onset of a cold, anxiety, or stress. &nbsp;&nbsp;</p>
<p>Eventually, that research sprouted some interesting findings about how people with depression behave when they’re symptomatic, Madan explains. “I realized this was something that could find meaning beyond the 80 people in my study; it could have significant impact on the way we think about health care. That was the driver to start a company,” he says. Madan and Pentland soon recruited a third co-founder, Karan Singh MBA ’11, now Ginger.io’s head of sales and marketing, to help grow the business.</p>
<p>At the time, Madan had already dabbled in MIT’s entrepreneurial ecosystem, having previously entered the $100K Entrepreneurship Competition, which helped him “get over the hump” of building a business by teaching him how to create a business plan. &nbsp;</p>
<p>Taking classes at the MIT Sloan School of Management — such as 15.390 (New Enterprises) — and chatting with mentors from the Venture Mentoring Service and the Martin Trust Center for MIT Entrepreneurship&nbsp;further helped Madan and Singh understand the nuances of building a startup, including recruiting talent, engaging early customers, speaking with investors, and scaling the product.</p>
<p>“After testing the entrepreneurial waters, these resources and classes were fundamental for me personally to make the transition from researcher to entrepreneur,” Madan says. “They provided me with the rich information and network required to be successful.”</p>
<p>Today, Ginger.io has offices in San Francisco and Cambridge, and is poised to grow significantly. In the past five years, Madan says, there’s been an explosion in sensing data (such as wearables), accompanied by a massive shift in the health care system to focus on keeping patients healthy.</p>
<p>“There’s a movement in bringing sensors and new technologies into the clinical workflow, so health care providers can have access to real-time information and react in a timely manner,” he says.&nbsp;</p>
<p>The startup is working “in the trenches,” as Madan likes to say, seeking ways to broadly implement its technology. This involves training health care providers in using the technology and finding how best to manage interventions. Madan says: “We’re looking to really scale through the health care system in the next couple of years.”</p>
Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Media Lab, Health, Health care, Analytics, Big data, Data, Mobile applications, Mental healthOwn your own datahttp://newsoffice.mit.edu/2014/own-your-own-data-0709
A new system would allow individuals to pick and choose what data to share with websites and mobile apps.Wed, 09 Jul 2014 14:00:00 -0400Larry Hardesty | MIT News Officehttp://newsoffice.mit.edu/2014/own-your-own-data-0709<p>Cellphone metadata has been in the news quite a bit lately, but the National Security Agency isn’t the only organization that collects information about people’s online behavior. Newly downloaded cellphone apps routinely ask to access your location information, your address book, or other apps, and of course, websites like Amazon or Netflix track your browsing history in the interest of making personalized recommendations.</p>
<p>At the same time, a host of recent studies have demonstrated that it’s <a href="http://newsoffice.mit.edu/2013/how-hard-it-de-anonymize-cellphone-data">shockingly easy</a> to identify unnamed individuals in supposedly “anonymized” data sets, even ones containing millions of records. So, if we want the benefits of data mining — like personalized recommendations or localized services — how can we protect our privacy?</p>
<p>In the latest issue of <em>PLOS One</em>, MIT researchers offer one possible answer. Their prototype system, openPDS — short for personal data store — stores data from your digital devices in a single location that you specify: It could be an encrypted server in the cloud, but it could also be a computer in a locked box under your desk. Any cellphone app, online service, or big-data research team that wants to use your data has to query your data store, which returns only as much information as is required.</p>
<p><strong>Sharing code, not data</strong></p>
<p>“The example I like to use is personalized music,” says Yves-Alexandre de Montjoye, a graduate student in media arts and sciences and first author on the new paper. “Pandora, for example, comes down to this thing that they call the music genome, which contains a summary of your musical tastes. To recommend a song, all you need is the last 10 songs you listened to — just to make sure you don’t keep recommending the same one again — and this music genome. You don’t need the list of all the songs you’ve been listening to.”</p>
<p>With openPDS, de Montjoye says, “You share code; you don’t share data. Instead of you sending data to Pandora, for Pandora to define what your musical preferences are, it’s Pandora sending a piece of code to you for you to define your musical preferences and send it back to them.”</p>
<p>De Montjoye is joined on the paper by his thesis advisor, Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences; Erez Shmueli, a postdoc in Pentland’s group; and Samuel Wang, a software engineer at Foursquare who was a graduate student in the Department of Electrical Engineering and Computer Science when the research was done.</p>
<p>After an initial deployment involving 21 people who used openPDS to regulate access to their medical records, the researchers are now testing the system with several telecommunications companies in Italy and Denmark. Although openPDS can, in principle, run on any machine of the user’s choosing, in the trials, data is being stored in the cloud.</p>
<p><strong>Meaningful permissions</strong></p>
<p>One of the benefits of openPDS, de Montjoye says, is that it requires applications to specify what information they need and how it will be used. Today, he says, “when you install an application, it tells you ‘this application has access to your fine-grained GPS location,’ or it ‘has access to your SD card.’ You as a user have absolutely no way of knowing what that means. The permissions don’t tell you anything.”</p>
<p>In fact, applications frequently collect much more data than they really need. Service providers and application developers don’t always know in advance what data will prove most useful, so they store as much as they can against the possibility that they may want it later. It could turn out, for instance, that for some music listeners, album cover art is a better predictor of what songs they’ll like than anything captured by Pandora’s music genome.</p>
<p>OpenPDS preserves all that potentially useful data, but in a repository controlled by the end user, not the application developer or service provider. A developer who discovers that a previously unused bit of information is useful must request access to it from the user. If the request seems unnecessarily invasive, the user can simply deny it.</p>
<p>Of course, a nefarious developer could try to game the system, constructing requests that elicit more information than the user intends to disclose. A navigation application might, for instance, be authorized to identify the subway stop or parking garage nearest the user. But it shouldn’t need both pieces of information at once, and by requesting them, it could infer more detailed location information than the user wishes to reveal.</p>
<p>Creating safeguards against such information leaks will have to be done on a case-by-case, application-by-application basis, de Montjoye acknowledges, and at least initially, the full implications of some query combinations may not be obvious. But “even if it’s not 100 percent safe, it’s still a huge improvement over the current state,” he says. “If we manage to get people to have access to most of their data, and if we can get the overall state of the art to move from anonymization to interactive systems, that would be such a huge win.”</p>
<p>“OpenPDS is one of the key enabling technologies for the digital society, because it allows users to control their data and at the same time open up its potential both at the economic level and at the level of society,” says Dirk Helbing, a professor of sociology at ETH Zurich. “I don’t see another way of making big data compatible with constitutional rights and human rights.”</p>
Media Lab, Data, Reality mining, Big data, Privacy, Research, Computer science and technology, Electrical engineering and computer scienceA &quot;maker&quot; education http://newsoffice.mit.edu/2014/a-maker-education-0708
NuVu Studio takes high school students out of the classroom and into a design space to invent and create. Tue, 08 Jul 2014 00:00:02 -0400Rob Matheson | MIT News Officehttp://newsoffice.mit.edu/2014/a-maker-education-0708<p>Down an alley off Massachusetts Ave. in Cambridge, there’s a “maker space” called <a href="https://cambridge.nuvustudio.com/studios/about#tab-studio-model">NuVu Studio</a>, where local high school students leave their classrooms behind to design robots, websites, board games, medical devices, and clothing, among other things. But they’re not playing hooky —&nbsp;in fact, it’s part of their education.</p>
<p>The brainchild of MIT alumnus Saeed Arida PhD ’10, NuVu (pronounced “new view”) enrolls students from local schools —&nbsp;both during the academic year and the summer — to focus on real-world projects. In so doing, they’re exposed to the collaborative, experimental, and demanding&nbsp;design process typical of architectural design studios.</p>
<p>“We walk students through a rigorous process to get to this real, final product,” says Arida, who modeled NuVu after design studios in MIT’s School of Architecture and Planning, where he studied design and computation and taught several studios.</p>
<p>Over the course of 11 weeks, students choose to attend a selection of <a href="https://cambridge.nuvustudio.com/discover#">two-week studios</a> under themes such as “science fiction,” “health,” “home of the future,” or this summer’s theme, “fantasy.” Sometimes, studios even bring students to international destinations — such as India and Brazil —&nbsp;for research.</p>
<p>During studios, NuVu coaches present students with real-world problems to solve; the coaches include full-time employees and local experts such as doctors, engineers, and graduate students from MIT and Harvard University.</p>
<p>A brief research period gives way to the bulk of the two-week studio — the rigorous design process —&nbsp;which includes prototyping, critiques from the coach, and constant documentation of progress. Students have full use of NuVu’s equipment, including 3-D printers, design software, art and photography equipment, and other machines.</p>
<p>At the end of each studio, students present finished projects to guest experts —&nbsp;including professors, practitioners, entrepreneurs, and designers — for evaluation. The rapid design process is “intense,” but beneficial, Arida says.</p>
<p>“Students come in at the beginning of two weeks, and it’s all sketches and scraps of paper. They come out at the end of two weeks and you see results,” Arida explains. “We have this culture here, where you can have an idea, but if you don’t go through this rigorous process, you have nothing.”</p>
<p>Students hail from partner schools around the Boston area, including Beaver Country Day School in Brookline, Phoenix Charter Academy in Chelsea, and Inly School in Scituate.</p>
<p>Co-founded with Saba Ghole SM ’07 and David Wang,&nbsp;a PhD student in MIT’s Computer Science and Artificial Intelligence Lab, NuVu brought in about 150 students last year. Around 400 students have participated in the studio, creating more than <a href="https://cambridge.nuvustudio.com/discover">130 projects</a> including robotic arms, modular shelters, sustainable and futuristic clothing, documentaries about Boston’s Tibetan population, and strategy games.</p>
<p>Such programs are difficult to implement broadly, Arida admits, and they tend to take hold in private schools rather than public ones. But this fall, NuVu is entering its first public-school partnership, with Cambridge Rindge and Latin School, which will send 10 students for the entire semester — and those 10 students will earn credit. It’s a step in the right direction, Arida says.</p>
<p>“Within a year, we hope to have three to five centers opening in different places in the United States and internationally,” he adds. “There’s been a lot of interest from people in this educational model.”</p>
<p><strong>Model education</strong></p>
<p>NuVu arose from Arida’s 2010 PhD dissertation, which suggested that architectural design studios train youth in “learning by doing” at an accelerated pace. The idea is that in design studios, students spend the bulk of the semester focused on building one project over multiple iterations, receiving constant feedback from professors and other students.</p>
<p>For a case study, Arida approached Beaver Country Day School, which allowed 20 students to participate in a pilot program on its campus, with two-week studios that centered on alternative energy, balloon mapping, interactive music, and filmmaking. &nbsp;</p>
<p>Among other things, Arida’s dissertation suggested that NuVu’s model could help students, in a two-month timespan, understand complex systems and recognize the importance of rapid prototyping. It also helped inform NuVu’s current model of combining instructors with mixed backgrounds —&nbsp;such as pairing a filmmaker with an engineer, or a doctor with an architect —&nbsp;for more effective teaching.</p>
<p>After the pilot, with help from an advisory board of Media Lab instructors —&nbsp;including Edith Ackermann, a visiting researcher; Joost Bonsen, a lecturer; and George Stiny, a professor of computation and design —&nbsp;NuVu set up a second program in Kendall Square with 25 more Beaver students. In 2012, NuVu relocated to its current headquarters.</p>
<p>For students like Liam Brady, a recent Beaver graduate, going from a classroom setting to NuVu was like “night and day.”</p>
<p>“With studio-based learning, we can see the application,” says Brady, who in a 2011 studio designed an interactive floor projection for playing games such as soccer. “As a result, I tend to remember things I learned in a studio environment rather than a classroom environment.”</p>
<p>Brady is one of 10 NuVu alumni returning as interns this summer; another is Cambridge School of Weston junior Harper Mills. As part of a NuVu studio, Mills traveled to Rio de Janeiro for 10 days to research urban inclusivity, and designed an interactive website that laid out prominent challenges facing the city.</p>
<p>For Mills, who enrolled in many studios over the course of an entire year, one benefit of the studio was the time pressure. “You’re constantly asking yourself if you’re being as productive and efficient as possible, and you’re also forced to be self-driven,” she says. “There is no bell or strict schedule moving you through your day, just you and your determination to create the best product you possibly can.”</p>
<p>Another perk, she says, is the iterative design process, where “failure is an integral part of success.”</p>
<p>“In school you get one shot and if you blow it, then that's the end,” Mills says. “At NuVu, when you show the first iteration of your project, it’s to say, ‘I know this isn’t great; how can I make it better?’”</p>
<p><strong>Rich portfolios</strong></p>
<p>At NuVu, students aren’t graded. “But they end up with a really rich portfolio,” Arida says. “This is important, as portfolios are increasingly becoming integral to the college application process.”</p>
<p>Some projects have found life outside the studio. An interactive music installation one group of students helped design with an MIT architecture student is now on display at the MIT Museum.&nbsp;Other students produced animations explaining social-media phenomena including “selfies” and the “deep Web” that were presented at a conference on youth and media at Harvard.</p>
<p>Last winter, two students created a medical device that expands and improves upon research to reduce tremors caused by Parkinson’s disease. The device measures, in real-time, the frequency and amplitude of a patient’s tremors, creating and sending a feedback signal to the brain that helps suppress the tremors. Being developed further by Wang, the device will begin clinical trials at Beth Israel Hospital this summer.</p>
<p>And in a recent “do-it-yourself prosthetics” studio, two students developed a 3-D printed “artistic” prosthetic hand for children under age 12. Using an online open-source design called “Robo Hand,” they built a hand with interchangeable cylinders to fit a brush, pencils, and other artistic utensils.</p>
<p>Now the student inventors are organizing prosthetic design events next fall at NuVu, where students creating prostheses can display their inventions at the studio and share their knowledge.</p>
<p>“You can really see the impact on these kids,” Arida says. “It’s phenomenal.”</p>
Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Architecture, School of Architecture + Planning, Invention, Design, Education, teaching, academics, maker movementWhy networking doesn&#039;t workhttp://newsoffice.mit.edu/2014/why-networking-doesnt-work
New study reveals the strength of the strongest ties in collaborative problem solving. Tue, 24 Jun 2014 16:30:01 -0400Alexandra Kahn | MIT Media Labhttp://newsoffice.mit.edu/2014/why-networking-doesnt-work<p>In a <a href="http://www.nature.com/srep/2014/140613/srep05277/full/srep05277.html">study</a> published recently in&nbsp;<em>Nature Scientific Reports</em>, MIT Media Lab&nbsp;researchers showed that networking does not improve team performance. Their findings&nbsp;showed that only the participants' strongest&nbsp;ties had an actual effect on their performance — and the stronger the ties a team had,&nbsp;the better the team performed. None of the&nbsp;participants' weak instrumental (goal-oriented) or expressive (personal) networking ties significantly impacted the&nbsp;performance of their teams.</p>
<p>The research further showed that a team's strongest ties were the best predictor of its performance: better than the technical abilities of its members, what the members already knew about the topic, or their personality types.</p>
<p>When solving problems in a competitive environment, the study revealed, it does not matter how many people someone knows or networks with — what really matters are the&nbsp;strongest ties in the network. This has implications for the organization of teams of scientists, engineers, and a host of others tackling today's most complex problems.</p>
<p>The paper, "<a href="http://www.nature.com/srep/2014/140613/srep05277/full/srep05277.html">The Strength of the Strongest Ties in Collaborative Problem Solving</a>," was published in the June 20 issue of&nbsp;<em>Nature Scientific Reports</em>&nbsp;and co-authored by graduate students Yves-Alexandre de Montjoye and Arkadiusz Stopczynski; postdoc Erez Shmueli; Alex Pentland, the Toshiba Professor of Media Arts and Sciences; and Sune Lehmann, a professor at the <span class="st">Technical University of Denmark</span>.</p>
Media Lab, CollaborationMedia Lab to bring more digital tools into newsrooms with $1.2 million Knight Foundation grant http://newsoffice.mit.edu/2014/media-lab-bring-more-digital-tools-newsrooms-12-million-grant
The Future of News initiative aims to bridge the gap between journalism, technology, and civic engagement.
Mon, 23 Jun 2014 13:30:01 -0400Alexandra Kahn | MIT Media Labhttp://newsoffice.mit.edu/2014/media-lab-bring-more-digital-tools-newsrooms-12-million-grant<p>With a new $1.2 million grant from the John S. and James L. Knight Foundation, the MIT Media Lab will help develop and identify new technologies for newsrooms, especially digital tools for community engagement. The funding will also support ongoing work at the MIT Center for Civic Media, which brings together innovators, researchers, and journalists.</p>
<p>Under the Future of News initiative, the Media Lab will apply its research to a range of challenges faced by newsrooms today, including engaging audiences more deeply on issues affecting their communities. The Media Lab will develop technology, for instance, that helps reporters track local needs — such as a tool to analyze conversations from public safety scanners, or software that allows TV broadcasters to share links to relevant stories during live political debates. Graduate students focusing on news technologies will contribute to the work.</p>
<p>“The MIT Media Lab can help to craft the future of news by providing journalists with tools they need to better inform the public and spark civic engagement,” said Michael Maness, the Knight Foundation's vice president for journalism and media innovation. “Great strides have been made towards this goal by the Media Lab and Center for Civic Media in recent years; by bringing more of these tools and lessons to newsrooms we can put this work to action.”</p>
<p>“Technology is disrupting both the business model and the editorial process of journalism. Turning this disruption into opportunity requires the fusion of real newsroom experience and experience with the new emerging technology and forms of social media and civic action,” said Joi Ito, director of the Media Lab and a Knight Foundation trustee. “We expect our Future of News initiative to bring these elements together and impact the future of journalism in a positive way.”</p>
<p>As part of the initiative, the Media Lab will establish stronger relationships with newsrooms and hold networking events for news organizations and researchers to find ways to implement new tools. To this end, Bloomberg LP today announced that it will join the Future of News initiative as a founding member. Bloomberg is focused on exploring the intersection of journalism, technology, and data, and discovering ways to provide news for audiences around the world. In December, Bloomberg will host a Media Lab conference on this topic at its headquarters in New York.</p>
<p>“As technology and changing consumption habits continue to disrupt all facets of the news business, there’s no better time for MIT’s Future of News initiative," said Justin B. Smith, CEO of Bloomberg Media Group. “At Bloomberg, we are currently reimagining our entire operation to build the leading, next generation media company for global business, so collaborating with MIT Media Lab was a natural fit and we’re thrilled to be a founding member.”</p>
<p>The funding will also extend the Knight Foundation’s relationship with the Center for Civic Media for one year. The center — which develops and implements tools to help journalists and communities empower citizens toward civic engagement — plans to release a number of promising tools that will inform communities and help spur interest in civic issues.</p>
<p>“What does it mean to be a citizen in the digital age? This new funding will further our work on that answer by allowing us to create tools that help citizens influence and shape their communities not just at election time but every day,” said Ethan Zuckerman, director of the center.</p>
<p>A joint effort between the Media Lab and the Comparative Media Studies Program, the center, in 2012, became the first-ever large grant recipient of the Knight News Challenge, a contest that funds innovative ideas in news and information. Since then, the center has collaborated with the Knight Foundation to accelerate media innovation and expand community engagement, eventually partnering on the annual MIT-Knight Civic Media Conference.&nbsp;</p>
Media Lab, Center for Civic Media, Media, Comparative Media Studies/Writing, Grants, Journalism&#039;This is not a ball&#039; spotlights MIT collaborationhttp://newsoffice.mit.edu/2014/this-is-not-a-ball
New documentary by visiting artist Vik Muniz features MIT alumni and researchers. Fri, 20 Jun 2014 12:30:01 -0400Anya Ventura | Center for Art, Science & Technologyhttp://newsoffice.mit.edu/2014/this-is-not-a-ball<p>MIT alumni and researchers are spotlighted in visiting artist Vik Muniz’s newest documentary, “This is Not a Ball,” which explores the scientific and cultural meanings of the soccer ball.</p>
<p>In the documentary, the Brazilian artist interviews various people — from Japanese kemari players to Pakistani factory workers — to answer the question: "Why do we play and why are we so drawn to the ball?" The documentary premiered June 13 on Netflix to coincide with the 2014 World Cup.</p>
<p>Featured prominently in the film is MIT alumnus Marcelo Coelho SM '08, PhD '13, an artist and designer who graduated from the Fluid Interfaces Group at the MIT Media Lab. Coelho headed up the scientific research for the film, helping weave together a multifaceted narrative about an object (a ball) that cuts across traditional disciplinary borders. “Spheres are everywhere,” Coelho says, “from molecules to planets.” For this task, he connected Muniz with fellow MIT researchers Edith Ackermann and Skylar Tibbits SM '10, who discussed developmental psychology and programmable materials with Muniz, respectively.</p>
<p>Coelho and Muniz’s collaboration began in 2012 when Coelho brought Muniz to campus through MIT’s Visiting Artist program, which every year embeds leading artists into the rich educational and research culture of the Institute. Coelho first discovered Muniz’s art in the course of his research. “I was doing my work on materials and human-computer interaction, researching how people use materials and make meaning from them,” he recalls. In doing so, he came across Muniz’s images, crafted from unlikely, everyday substances, such as dust, chocolate, and even industrial garbage.</p>
<p>At MIT, Coelho worked with Muniz to etch a millimeter-wide image of a sand castle on a grain of sand through the use of a focused ion beam and scanning electron microscope. The resulting prints were enlarged to debut in a comprehensive exhibition of Muniz’s work at the Tel Aviv Museum of Art. The experience, as it turned out, would produce a long-lasting collaboration. “Vik is really inspired by MIT,” Coelho says, “and that's how the true collaborations ended up happening. In some ways, he could have equally been a scientist. He’s incredibly curious and being at MIT through the Visiting Artist program allowed him to exercise that side of his thinking.” &nbsp;</p>
<p>Coelho also helped orchestrate the film’s defining moment: creating an art piece made of more than 10,000 soccer balls on the field of Mexico’s Azteca stadium. The installation, which formed an enormous ball, took about a week to create, Coelho says, and involved designing the necessary software to figure out how many balls were needed and where they should go. “It’s funny,” he says, “this was the biggest scale project I’ve worked on, and the sandcastle was the smallest.” &nbsp;</p>
Films, Media Lab, Alumni/ae, Center for Art, Science & TechnologyMIT’s Mobile Fab Lab participates in White House Maker Fairehttp://newsoffice.mit.edu/2014/mits-mobile-fab-lab-at-white-house-maker-faire-0620
Obama tours MIT-developed trailer containing digital fabrication, design, and manufacturing tools.Fri, 20 Jun 2014 10:15:06 -0400News Officehttp://newsoffice.mit.edu/2014/mits-mobile-fab-lab-at-white-house-maker-faire-0620<p>MIT’s Mobile Fab Lab — a trailer containing digital fabrication, design, and manufacturing tools, along with an electronics workbench — was on hand Wednesday for the first-ever “<a href="http://www.whitehouse.gov/maker-faire">White House Maker Faire</a>,” hosted by President Obama and the Office of Science and Technology Policy (OSTP) at the White House.</p>
<p>The colorful mobile factory, a blue biofuel-powered sports car, and an electronic giraffe were among the exhibits displayed on the South Lawn, in addition to more than 30 exhibits on display inside the White House —&nbsp;representing a total of more than 100 “makers” whose work ranged from whimsical to practical.</p>
<p>At the Maker Faire, President Barack Obama met with students, entrepreneurs, and citizens who are using new fabrication tools and techniques to launch businesses; learn skills in science, technology, engineering, and math (STEM); and reinvigorate American manufacturing.&nbsp;Before an audience that included business leaders, mayors, and heads of nonprofit organizations, the president announced new steps the administration and its partners will take to support Americans’ access to these tools and techniques, and proclaimed the day as a “<a href="http://www.whitehouse.gov/the-press-office/2014/06/17/presidential-proclamation-national-day-making-2014">National Day of Making</a>.”</p>
<p>The Mobile Fab Lab — a graffiti-decorated trailer — was among MIT’s contributions to the daylong celebration of making, innovation, and creativity. The trailer is a mobile fabrication laboratory that contains a suite of digital fabrication, design, and manufacturing tools and a full electronics workbench, allowing anyone to make (almost) anything.</p>
<p>Obama stopped by the Mobile Fab Lab for a briefing on digital fabrication and the future of manufacturing with Neil Gershenfeld, director of MIT’s <a href="http://cba.mit.edu/">Center for Bits and Atoms</a> (CBA); Nadya Peek, one of his graduate students, who is working on machines that make machines; and Makeda Stephenson, from Boston’s first fab lab. Visitors to the lab included John Holdren, assistant to the president for science and technology and director of the OSTP, and two physicists who serve in Congress: Reps. Rush Holt (D-N.J.) and Bill Foster (D-Ill.), who has introduced a House bill to charter a national network of fab labs based on CBA’s fab labs.</p>
<p>MIT was well represented during Wednesday’s event: Jose Gomez-Marquez, an instructor and research specialist at MIT's International Design Center and Little Devices Lab, presented on the&nbsp;<a href="http://makernurse.org">MakerNurse</a>&nbsp;project, which aims to connect creative nurses from across the country and provide prototyping tools to support nurse-led ideas for new medical devices.&nbsp;Sandra Richter, a visiting researcher at the MIT Media Lab, met with Obama as he tried out her solar-powered charging bench in the Rose Garden. Gershenfeld’s former student Manu Prakash SM ’05, PhD ’08 showed the president a $5 chemistry set for kids and an origami-based paper microscope that costs less than $1; alumni Jay Silver SM ’08 and Saul Griffith SM ’01, PhD ’04 were also on hand.&nbsp;</p>
<p>The Mobile Fab Lab is part of a global educational outreach project based at the Center for Bits and Atoms. The mobile facilities are technical platforms for STEM education, workforce development, and business-idea prototyping.</p>
<p>CBA, founded in 2001 with support from the National Science Foundation, is an interdisciplinary initiative exploring the boundary between computer science and physical science. CBA studies how to turn data into things, and things into data. It manages facilities, runs research programs, supervises students, works with sponsors, creates startups, and conducts public outreach and educational projects.</p>
<p>CBA’s involvement in the Maker Faire builds on a workshop, run last year with OSTP, on the emerging science of digital fabrication, where both materials and designs are digitized. CBA’s research and facilities on campus are complemented by its fab labs, which bring rapid-prototyping capabilities to underserved communities. While originally intended for outreach, fab labs are also used by entrepreneurs to develop businesses, and have increasingly been adopted by schools as platforms for project-based, hands-on STEM education: By designing and creating objects of personal interest, students learn about the machines, materials, design process, and engineering that go into invention and innovation.</p>
<p>With the rapid growth of the “maker movement” — a rising culture emphasizing do-it-yourself hardware and technology — fab labs are growing in prominence as a component of entrepreneurship and of developing the nation’s future workforce and its capacity for advanced manufacturing.</p>
<p>The first fab lab was deployed by MIT faculty and students in rural India in 2002. Since then, similar facilities have been placed in Norway, South Africa, Ghana, Kenya, Iceland, Australia, Japan, and New Zealand, as well as elsewhere in Europe, the United States, and beyond — building a global community of people who want to collaborate and share knowledge. Today there are more than 380 fab labs in the world, with at least an equal number under development in more than 40 countries.</p>
<p>To keep up with the growth of this international network, in 2009 CBA spun off the Fab Foundation to provide operational support for the program, along with a Fab Academy for distributed education in the labs. CBA continues to collaborate with partners on establishing new fab lab sites.</p>
<p>At this week’s White House Maker Faire, Obama announced a number of commitments by agencies, corporations, and cities and states to the maker movement, and to support innovation and STEM education. One of those commitments came from Chevron Corp., which announced a $10 million grant to the Fab Foundation to build fab labs in areas of the United States where it operates.</p>
President Barack Obama talks with Neil Gershenfeld, director of MIT’s Center for Bits and Atoms, as MIT graduate student Nadya Peek, right, looks on, during this week's White House Maker Faire. MIT's Mobile Fab Lab can be seen in the background.
Center for Bits and Atoms, maker movement, 3-D printing, STEM education, Students, Invention, Fabrication