If you’re a science fiction lover who can’t get enough of Mr. Robot and Westworld and worries that robots might one day make us their slaves, there’s good news: it’s not likely to happen anytime soon. Still, the prospect of the technology falling into the wrong hands needs to be considered. That was the consensus of a discussion on artificial intelligence (AI) last Friday at the World Science Festival at New York University (NYU).

The panelists suggested—each in their own way—that AI isn’t as dangerous or potentially harmful as advertised. Tse made the point that Siri, Alexa, and Google are not yet on the same level as human intelligence. He drew a distinction between “artificial narrow intelligence” and “artificial general intelligence,” explaining that narrow AI would be like a robot learning how to fly a plane or drive a car, while general AI would encompass those tasks but could also mow the lawn, babysit children, cook dinner, and still be able to learn new skills.

Creativity was used as an example of the limits of AI. A video of a scene from a robot-written screenplay, based on hundreds of other screenplays fed into a computer, demonstrated a lack of human emotion and depth, to comical effect. LeCun offered up the example of jazz as something that AI could attempt but never truly duplicate in any meaningful form.

LeCun also said that the desire to take over the world “should not be associated with intelligence.” People’s fear of AI mainly stems from films and TV shows that envision a worst-case scenario “because movies are more interesting when bad things happen,” LeCun said. “But most movies get it completely wrong.” He singled out Her as a rare example of a film getting it right.

AI and robots certainly have their positives, the panelists were quick to point out. From GPS to Google Translate to fraud detection systems to a hundred other everyday tasks we now take for granted, living without the advances made by AI over the last 20 years would mean a massive leap backwards. Looking forward, it was also suggested that “robot doctors” might someday replace physicians and that, in the near future, children will ask their parents, “Do you mean an actual human diagnosed you when you were sick?” But Tse countered with: “How can a system that never felt pain understand pain?”

Schneider is also the 2019 Distinguished Scholar at the Library of Congress. Photo: World Science Festival/Greg Kessler

That led to more of a philosophical discussion about how AI relates to consciousness, which Urban called “the biggest debate in AI.” Schneider defined consciousness as “quality of subjective experience—the richness of a sunset or the smell of your morning coffee.” Tse defined consciousness as “seeing what is there; it is imagination; it is something.”

“When a heat-seeking missile is chasing after you, no one is going to care whether it’s conscious or not,” explained Tegmark, adding that “consciousness is philosophical BS.”

The mention of AI unleashing a missile that could potentially wipe out humankind brought the discussion full circle. LeCun and Tegmark said this was already on the radar (so to speak) of scientists in the AI community; both had already signed a petition concerning AI and “evil purposes.” They agreed that, going forward, we need to build machines that understand our goals and know the difference between good and evil.

“After this panel,” joked Schneider, “I’m actually more afraid of the possibility of bad things.”

For continued coverage of events from the World Science Festival, check back for more posts on topics ranging from the microbiome to music and the brain to fundamentalism.