EMI, BOTH ACCIDENTAL AND INTENTIONAL

ELECTROMAGNETIC INTERFERENCE (EMI) is as old as the earliest electrical devices, and as potentially dangerous as ever in today’s quest for autonomous vehicles. EMI may occur accidentally, when an otherwise benign source of electromagnetic energy conflicts with another that happens to be within its range. Or the interference may be intentional, the fundamentally evil work of a hacker.

Neither of these is sci-fi. They’re topics at conferences on connectivity and artificial intelligence held around the world. The following EMI tidbits are gleaned from Science, the weekly magazine of the American Association for the Advancement of Science, and Automotive News, the authoritative weekly for the auto industry. And, from time to time, SimanaitisSays has offered a few thoughts on the topic.

Radio Buzzes and Other Remedied EMI. Within my own driving experience, I’ve had cars whose ignition systems caused a buzz in their radio reception, sort of an audible tachometer. In one case, I recall that changing the type of spark plug wires solved the problem.

Generally, automakers have been able to engineer against such accidental EMI. They use electronic “clean rooms” where sources can be isolated and problems resolved.

A Cluttered Electromagnetic Environment. However, today’s—and particularly tomorrow’s—electromagnetic environment is a complex one. “Connected Cars Confront an Old Problem,” by Shiraz Ahmed, in Automotive News, July 23, 2018, offers an example: “The problem for connected vehicles was thrown into sharp relief when Intel’s Mobileye autonomous tech subsidiary began testing in Jerusalem in May, only to have a prototype autonomous vehicle run a red light during press demonstrations.”

Ahmed explains how the failure was a subtle one: “Mobileye CEO Amnon Shashua pointed to wireless signals from a local TV station’s cameras, saying they disrupted the traffic light’s transponder, which sends information to vehicles on signal changes.”

That is, the snafu had nothing to do with the car’s artificial intelligence failing to recognize a red light. Rather, it was one step removed: the system misinterpreted stray EMI.

EMI sources will abound in a coming autonomous world. Image from Automotive News, July 21, 2018.

The problem won’t be solved with something as straightforward as different spark plug wires. Ahmed observes, “Federal Communications Commission regulators attempted to pre-empt the issue over a decade ago by setting aside the 5.9 GHz spectrum exclusively for car-safety applications via dedicated short-range communications. In recent years, advocates of competing cellular-based connected-car technologies and niche robotic firms have called for sharing the spectrum, causing uncertainty for engineers working to address radio interference.”

And Then There Are the Baddies. “Hackers Easily Fool Artificial Intelligence,” by Matthew Hutson, in Science, July 20, 2018, offers a glimpse into EMI’s dark side. Notes Hutson, “Last week, here [in Stockholm] at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle.”

This and the following image from Science, July 20, 2018.

Dawn Song is a computer scientist at the University of California, Berkeley. Hutson quotes her as saying that such AI attacks are “a great lens through which we can understand what we know about machine learning.” Last year, Song and her colleagues put some stickers on a stop sign, fooling a common type of image-recognition AI into thinking it was a 45-mph speed limit sign.

White Box versus Black Box Attacks. Some AI attacks, the “white box” variety, use knowledge of the algorithm’s internal processing to nudge its outputs into errors. Specialists attempt to foil such attacks by building anti-hack subtleties into the algorithm.
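The white-box idea can be sketched in a few lines of code. What follows is purely illustrative: a toy linear "classifier" standing in for a real image-recognition model, with made-up labels and weights. It is not how the researchers' actual attacks were built; it only shows the principle that knowing a model's internals (here, its weights, which double as its gradient) lets an attacker compute a small, targeted nudge that flips the output.

```python
import numpy as np

# A toy stand-in for an image classifier: a linear score over 100 input
# values, with score > 0 read as "stop sign" and score <= 0 as "speed
# limit." The weights w play the role of the trained model's internals.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def classify(x):
    return "stop sign" if w @ x > 0 else "speed limit"

# Start from an input the model classifies confidently and correctly.
x = w / np.linalg.norm(w)
print(classify(x))  # "stop sign"

# White-box attack: knowing w (the gradient of the score with respect
# to x), nudge every input element a small, fixed amount in whichever
# direction lowers the score -- the fast-gradient-sign idea.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))  # "speed limit"
```

Each individual input element moves only slightly, yet because every nudge is chosen with full knowledge of the weights, the small changes add up and the label flips.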

A more subtle “black box” attack probes the AI from the outside, manipulating its inputs to corrupt its calculated outputs. I’d conjecture that the Mobileye/TV camera example was an unintentional bit of black-box hacking.
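A black-box probe can also be sketched with the same kind of toy model. Again, everything here is illustrative: the attacker sees none of the model's internals, only a query interface (input in, label out), and simply hunts for a perturbation that flips the answer. Real black-box attacks are far more query-efficient; this crude random search just shows that no inside knowledge is required.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # the model's internals -- hidden from the attacker

def query(x):
    # The attacker's only access: submit an input, observe the label.
    return "stop sign" if w @ x > 0 else "speed limit"

x = w / np.linalg.norm(w)  # correctly classified as "stop sign"

# Black-box attack: no gradients, just repeated queries. Try random
# perturbations of the input at gradually widening radii and keep the
# first candidate whose label flips.
x_adv = None
for sigma in np.linspace(0.1, 3.0, 30):
    for _ in range(100):
        candidate = x + sigma * rng.normal(size=100)
        if query(candidate) == "speed limit":
            x_adv = candidate
            break
    if x_adv is not None:
        break

print(query(x_adv))  # "speed limit"
```

The point of the sketch: a system that merely answers queries still leaks enough to be steered wrong, which is why stray broadcast signals confusing a transponder looks so much like an accidental version of the same thing.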

Making AI More Mathematical. One possible solution is endowing AI algorithms with verifiable, mathematical safeguards against false interpretation. Song describes part of the challenge in this: “There’s no mathematical definition of what a pedestrian is, so how can we prove that the self-driving car won’t run into a pedestrian?”
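For very simple models, the kind of mathematical guarantee Song alludes to can actually be computed. A minimal sketch, again using the illustrative toy linear classifier (not any real self-driving system): for a linear score, bounding the size of an input perturbation provably bounds the change in the score, giving a certified radius inside which no perturbation, however cleverly chosen, can flip the label.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=100)      # a toy linear classifier's weights
x = w / np.linalg.norm(w)     # an input with positive score ("safe" class)

score = w @ x

# For a linear score, a perturbation delta with max|delta_i| <= eps can
# change w @ x by at most eps * sum(|w_i|) (Hoelder's inequality). So the
# label provably cannot flip for any perturbation smaller than:
certified_eps = score / np.abs(w).sum()

# Check the certificate against the worst case, which pushes every
# input element directly against the weights:
eps = 0.99 * certified_eps
worst = x - eps * np.sign(w)
print(w @ worst > 0)  # True: the label holds, guaranteed
```

The hard part, as Song's pedestrian example suggests, is that real perception models are nothing like this linear toy, and the classes they must get right have no such tidy mathematical definition.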

I remain skeptical that lawyers will ever let autonomous cars operate routinely in an environment shared with non-autonomous cars, or in one that is not designed specifically to accommodate them. Too many unanswerable liability questions.