A star-studded lineup helps the Institute celebrate the launch of a new initiative on human and machine intelligence.

“In the history of science and technology, there are moments of opportunity,” MIT President L. Rafael Reif told a packed Kresge Auditorium on March 1. “Moments when the tools, the data, and the big questions are perfectly in sync. In the field of intelligence, I believe this is just such a moment.”

MIT faculty and friends helped the Institute celebrate the launch of a new initiative on human and machine intelligence, with a star-studded lineup of speakers from the interlocking realms of artificial intelligence, cognitive science, neuroscience, social sciences, and ethics.

“We are ushering in the Age of Intelligence right here,” said Eric Schmidt, the former executive chairman of Google’s parent company, Alphabet, as he joined Reif on stage. Schmidt and his wife provided financial support for the project’s first year. Google also donated funds to advance MIT student research in human and artificial intelligence.

“I think MIT is uniquely positioned to do this. I think you can turn Cambridge into a genuine AI center,” said Schmidt, an MIT Innovation Fellow and founding advisor to the MIT Intelligence Quest.

With MIT’s more than 200 intelligence researchers and its culture of “compulsive curiosity,” the MIT Intelligence Quest will thrive on campus, said Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

“It will thrive because, when MIT people have their teeth in an interesting problem, they instinctively reach out across disciplines to solve it,” he said. “It will thrive because we can offer it a continuous flow of fresh minds and fresh thinking.”

The time is ripe to “crack the code of intelligence” with a combination of neuroscience, cognitive science, and computer science, said MIT alumnus David Siegel SM ’86, PhD ’91, also a founding advisor to the MIT Intelligence Quest. He envisions the Intelligence Quest carrying on the spirit of MIT’s AI Lab at Tech Square in the 1980s, which spawned the building blocks of the Internet, RSA encryption, and the foundations of AI and robotics.

“From the start of our history, we have been trying to grasp how the mind gives rise to intelligence,” said Siegel, co-chairman of Two Sigma Investments. “To truly understand it, I believe we need to get back to the basic science and also frame the question in engineering terms. The time to start is now. Our objectives are ambitious. But given MIT’s long history of tackling big problems, we must try. After all, if not us, then who?”

It is time to drive some breakthroughs in AI together, said MIT alumnus Xiao’ou Tang PhD ’96, the founder of SenseTime, a leading AI company in China, which has partnered with the MIT Intelligence Quest.

“Together we will definitely go beyond deep learning, go to the uncharted territory of deep thinking,” said Tang, a professor of information engineering at the Chinese University of Hong Kong.

Getting to the Core

The morning sessions of the MIT Intelligence Quest launch event were designed to mirror the two principal entities that will make up the Intelligence Quest itself: the Core and the Bridge. The Core will advance the science and engineering of both human and machine intelligence. The Bridge will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines.

A passion for the MIT Intelligence Quest itself — the puzzles, the breakthroughs, the careful work — marked the research presentations for the Core. Pacing the stage in an animated TED-style talk, James DiCarlo, head of the Department of Brain and Cognitive Sciences, embodied this passion.

“My colleagues and I see a tremendous new opportunity for synergy. The science quest to understand human intelligence is one of the most exciting frontiers of our field: the quest to understand ourselves. And it’s aligned with the engineering quest of developing intelligent systems,” said DiCarlo, the Peter de Florez Professor of Neuroscience.

Then he shared an observation that was made in colorful ways throughout the day: The possibilities for discovery are great, but right now, “we are still very far from real AI.” The human brain is far superior to any existing form of artificial intelligence, which is why, he said, “as scientists, we have the opportunity — and obligation — to reverse engineer this brain machine.”

Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, showed the same passion expressed from a different perspective: “Right now, the vast majority of AI algorithms are driven by mathematics and physics. When we learn more about the human brain, we will be able to develop nature-inspired algorithms.”

“Why science and engineering?” asked Tomaso Poggio, director of the Center for Brains, Minds, and Machines and the Eugene McDermott Professor of Brain and Cognitive Sciences. He traced major developments in machine learning to neuroscience. The Core will feature projects that require both science and engineering, which is a good thing because “ideally we want to make ourselves and our brains more intelligent than the machines we are building.”

A craving for data

Teaching machines to see and hear was the focus for Antonio Torralba, MIT director of the MIT-IBM Watson AI Lab and a professor of electrical engineering and computer science. At one point, he showed a video of a child happily listening to a storybook but bursting into shrieks when the reading stopped. “Just like machines, kids also need a lot of data,” Torralba said, with a smile. “And they don’t like it when you stop giving them data.”

Indeed, look to the playground for the intelligence platform you seek, said Laura Schulz, a professor of cognitive science. Children learn concepts naturally. They have sophisticated social cognition and an intelligence that can “see past what things actually are to see what they might mean, or become.”

“This kind of intelligence might seem almost unimaginably far away,” she said. “But if we are going to succeed at engineering it, we first have to understand it, and the good news is that we actually have a platform like this already here at MIT, at the daycare.”

Rebecca Saxe, a professor of cognitive neuroscience, showed a baby watching a movie in an MRI machine. “We took the first ever MRI images of a baby’s brain while he looked at faces,” she said. “And we discovered something remarkable.” Their results revealed that an organized pattern of brain activity develops very early, with regions of the infant brain becoming more active when babies look at faces.

The value of the child intelligence system is not lost on Josh Tenenbaum, a professor of computational cognitive science. “Children are the only system in the known universe that demonstrably, reliably, reproducibly, builds human-level intelligence. So why not build AI this way? Why haven’t we yet?” he asked. “I think the reason is that only now do we have a scientific field studying how children learn and think that is mature enough to offer guidance for AI.”

Building a humanistic bridge

In the Bridge session, speakers detailed projects that highlighted the remarkable potential benefits of AI: social robots that help children learn and engage people with depression, algorithms that can predict and prevent cancer, Wi-Fi signals that detect falls among the elderly, even algorithms that build personalized investment portfolios.

James Collins, the Termeer Professor of Medical Engineering and Science, talked about programmable cells and the world of possible applications in various realms: medicine, energy, environment, and agriculture.

“I also have a vision for AI,” said Cynthia Breazeal, an associate professor of media arts and sciences. “I envision an AI that helps us to be smarter and more productive and to flourish — and heightens the ability for people to deeply connect.”

“AI needs to be able to engage our social and emotional selves in addition to our cognitive selves,” she added, as the audience watched footage of elderly people with Jibo, a social robot she designed. It danced, cooed, and looked at them with friendly curiosity.

Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, pointed to a map of the world covered in red markings that indicate deaths from cancer. “I firmly believe, with all of our strengths in machine learning and connections, we really have a chance to wipe the red from this map.” Her own work in machine learning is making strides toward that goal.

“I want you to imagine with me a home of the future where the home will monitor your health,” said Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, drawing listeners into a presentation on how AI enables the home to monitor the physical and mental health of its inhabitants using Wi-Fi signals.

In a presentation playfully titled, “Artificial Intelligence, Artificial Stupidity, and Financial Markets,” Andrew Lo, director of the MIT Laboratory for Financial Engineering and the Charles and Susan T. Harris Professor, described algorithms that factor in unproductive human actions that impact financial markets: loss aversion, overconfidence, and overreaction. “We don’t need artificial intelligence so much as artificial humanity,” he said.

Marin Soljacic, a professor of physics, capped off the Bridge session by talking about how AI processing will improve with optical neural networks. “We’re talking about nearly instantaneous execution, much higher frequencies than electronics, ultra low power consumption!” The crowd shared his enthusiasm.

The consequences: intelligence and society

“In my estimation, AI is going to touch all these industries: energy, advanced manufacturing, space, advanced materials, life science and biotech, and the internet of things,” said Katie Rae, CEO and managing partner of The Engine, which bridges the gap between discovery and commercialization by empowering disruptive technologies with the long-term capital, knowledge, and specialized equipment and labs they need to thrive.

“What does it mean for us to build machines that can think?” asked Melissa Nobles, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, during the panel discussion, “The Consequences: Intelligence and Society.”

“What are the social, economic, political, artistic, ethical, and spiritual consequences of trying to make what happens in our minds happen in a machine? Who does this machine answer to?” asked Nobles, a professor of political science.

Gideon Lichfield, editor-in-chief of MIT Technology Review, moderated the discussion, which delved into AI’s potential dark side, exploring issues such as the impact on jobs and the economy, algorithmic bias, and the unchecked power of private industry.

“We need thoughtful folks to really put their values into the system and pay mindful attention,” said MIT alumna Megan Smith ’86, SM ’88, a former U.S. chief technology officer and a former vice president at Google. Pointing to her shirt, which read “Computer Science for All,” Smith said all school children should learn coding and design thinking. “It’s about confidence. Part of the future of work is including everyone in developing solutions,” said Smith, founder and CEO of shift7.

Dario Gil, vice president of AI and quantum computing at IBM, said AI technologies draw on such large, pre-existing data sets that it’s more difficult for people to recognize the misuse of variables such as race, age, and gender. “It becomes more opaque,” he said.

“I’d like to talk about job displacement,” said Rodney Brooks, a former director of the Computer Science and Artificial Intelligence Laboratory. “We don’t have any capability of robots interacting with people. Who is going to do the physical tasks?” asked Brooks, the MIT Panasonic Professor of Robotics Emeritus.

And guidance from a reliable government would be welcome, said several panelists, including Joi Ito, director of the MIT Media Lab. “I think we can look to countries that have functional democracies to see how they are starting to grapple with some of these social questions,” he said.

“We haven’t had enough human intelligence to go with machine intelligence,” added Daron Acemoglu, the Elizabeth and James Killian Professor of Economics. “The real promise of machine-human intelligence is to create jobs that are higher paying and more pleasant and that leave greater room for people to develop their creativity. The application of digital technology can do this — but we need to step back and develop it the right way.”

“It is about the types of artificial intelligence we create,” Acemoglu added. “And it’s about getting a broader set of people working on developing it.”

The event wrapped up at the Media Lab with a student poster session featuring projects in four areas: communication among humans, robots, and AI; algorithms of AI; physics, engineering, and security; and vision and language.