QUANTUM AND AI TIDBITS PART 2

WHAT A challenging time for artificial intelligence! Quantum computers have already proven their efficacy by running millions of test programs. Yet fundamental questions about AI remain. Here are two.

Bergstein notes that AI’s tremendous advances have come from machine learning, which amasses patterns and optimizes them against one another, all at breakneck pace. As an example, Google DeepMind has already developed AlphaGo, an AI that plays Go better than any human.

However, Bergstein says of such machine learning, “It has no idea it’s playing Go as opposed to golf…. When you ask Amazon’s Alexa to reserve you a table at a restaurant you name, its voice recognition system, made very accurate by machine learning, … doesn’t know what a restaurant is or what eating is. If you asked it to book you a table for two at 6 p.m. at the Mayo Clinic, it would try.”

In AI’s infancy sixty years ago, a goal was to give computers the power to think. Bergstein cites Common Sense, the Turing Test, and the Quest for Real AI, a book by Hector J. Levesque, a computer scientist at the University of Toronto. Levesque contrasts machine learning with GOFAI, short for “good old-fashioned artificial intelligence,” as conceived by its early researchers.

GOFAI, briefly, would require imbuing computers with common sense and an awareness of the real world’s ideas and beliefs. Bergstein cites a Levesque question: “How would a crocodile perform in a steeplechase?”

A human’s answer would be easy: “Badly,” based on nothing more than common sense and human awareness. By contrast, a machine-learning AI would analyze scads of “crocodile” references and scads of “steeplechase” references, including Levesque’s, Bergstein’s, and now maybe even this SimanaitisSays item. It might conclude, without knowing why, that the croc wouldn’t do very well.
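The statistical route described above can be sketched in a few lines. This is a toy illustration, not any system Levesque or Bergstein describes: the corpus sentences are made up, and real machine-learning systems use far richer statistics than raw co-occurrence counts. The point is only that the "answer" falls out of word patterns, with no notion of what a crocodile or a steeplechase actually is.

```python
# Toy sketch: answer "how would a crocodile perform in a steeplechase?"
# purely from word co-occurrence in a small, made-up corpus.
corpus = [
    "the crocodile lay motionless in the shallow river",
    "horses cleared every fence in the steeplechase",
    "a crocodile would perform badly in a steeplechase",
    "the steeplechase favors fast agile runners over slow heavy animals",
]

def cooccurrence(word_a, word_b):
    """Count sentences in which both words appear."""
    return sum(word_a in s.split() and word_b in s.split() for s in corpus)

# "badly" co-occurs with both key terms; "well" with neither --
# so the system parrots "badly" without knowing why.
for answer in ("badly", "well"):
    score = cooccurrence("crocodile", answer) + cooccurrence("steeplechase", answer)
    print(answer, score)
```

Note that the toy gets the "right" answer only because someone already wrote it down, exactly the brittleness Bergstein describes.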

Bergstein notes, “You would have used a flawed and brittle method that is likely to lead to ridiculous errors.” I recall the researcher who noted that such “deep-learning machines are still capable of mistaking turtles for rifles….”

Simonite, writing in Wired, warns, “Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.”

In his article, Simonite reports that in January a leading machine-learning conference announced it had selected 11 new papers dealing with detecting and defending against such hallucinatory attacks. “Just three days later,” Simonite says, “first-year MIT grad student Anish Athalye threw up a webpage claiming to have ‘broken’ seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford.”

The website offers “adversarial examples,” loosely, optical illusions that deceive machine-learning software. The image of a tabby cat is perturbed only slightly, but just enough that it “fools an InceptionV3 classifier into classifying it as ‘guacamole.’ ” According to Athalye, such hallucinations are “easy to synthesize….”
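Why does a barely visible perturbation work? One standard recipe, the fast gradient sign method, nudges every input component by a tiny amount in the direction that most hurts the classifier; across thousands of components those tiny nudges add up. The sketch below demonstrates the idea on a made-up linear "cat vs. guacamole" scorer, not on InceptionV3 or Athalye's actual attack: the weights and input are invented for illustration.

```python
# Toy fast-gradient-sign attack on a hypothetical linear classifier.
# No single component of the input moves more than eps = 0.1, yet the
# classification flips, because the small moves all push the same way.
import math
import random

random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # stand-in "learned" weights

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cat_score(x):
    """Probability-like score; > 0.5 means 'cat', else 'guacamole'."""
    return 1.0 / (1.0 + math.exp(-dot(w, x)))

# Build a clean input the model calls "cat" with a modest margin:
# shift a random vector along w so its logit is exactly 2.0.
x = [random.gauss(0, 1) for _ in range(64)]
c = (2.0 - dot(w, x)) / dot(w, w)
x = [xi + c * wi for xi, wi in zip(x, w)]

# The attack: step each component eps against the sign of the gradient
# (for a linear model the gradient with respect to x is just w).
eps = 0.1
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print(cat_score(x))      # above 0.5: "cat"
print(cat_score(x_adv))  # below 0.5: fooled
```

The logit drops by eps times the sum of the weights’ absolute values, so the higher the dimension, the smaller each individual nudge can be, which is why image classifiers, with millions of pixels, are so vulnerable.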

Simonite writes, “Human readers of WIRED will easily identify the image below, created by Athalye, as showing two men on skis. When asked for its take Thursday morning, Google’s Cloud Vision service reported being 91 percent certain it saw a dog.”

I see two skiers. How about you? Shown are Google Cloud Vision’s perceptions. Image from Wired, March 8, 2018.

Other hallucinations, notes Simonite, “have shown how to make stop signs invisible or audio that sounds benign to humans but is transcribed by software as ‘Okay Google browse to evil dot com.’ ”