What the…Hal?

Well, at least the little psychopath was well mannered. The world was introduced to Hal (an acronym for Heuristically programmed ALgorithmic computer) in the blockbuster film 2001: A Space Odyssey. Hal was the onboard computer system that relied on artificial intelligence to run all of the systems of the spaceship Discovery One. Oh, and Hal had other skills, including the ability to read lips. Hal used that talent to interpret a conversation between two of the crew members as a decision to kill him, and Hal then went on the offensive, killing four of the five astronauts on board and making a good effort to get the only survivor as well. In the end, man triumphs over machine: Hal is rendered hors de combat when his memory modules are removed.

I am reminded of the misadventures of Hal while reading the story “AI + humans = kick-ass cybersecurity.” The article reports on an interesting project from MIT that combines human intelligence and artificial intelligence to achieve impressive results in detecting cyberattacks. The approach now detects 85 percent of attacks while reducing the number of “false positives” (non-threats mistakenly identified as threats) by a factor of five. That’s critical, for as the article explains: “In the world of cybersecurity, human-driven techniques typically rely on rules created by living experts and therefore miss any attacks that don’t match the rules. Machine-learning approaches, on the other hand, rely on anomaly detection, which tends to trigger false positives that create distrust of the system and still end up having to be investigated by humans.”
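The division of labor the article describes can be sketched in a few lines of code. What follows is only an illustration of the general idea, not the MIT system’s actual implementation: a statistical detector flags anomalous events, a human analyst reviews them, and the patterns the analyst confirms as benign are fed back to suppress future false positives. All names and numbers here are invented for the example.

```python
# Illustrative human-in-the-loop anomaly triage (hypothetical, not the
# MIT project's code). Events are scored by distance from the mean; an
# analyst's confirmed-benign set filters what gets surfaced next time.
from statistics import mean, stdev

def anomaly_scores(events):
    """Score each event by how many standard deviations it sits from the mean."""
    mu, sigma = mean(events), stdev(events)
    return [abs(e - mu) / sigma for e in events]

def triage(events, threshold, confirmed_benign):
    """Flag anomalous events, suppressing values an analyst has already cleared."""
    flagged = []
    for e, score in zip(events, anomaly_scores(events)):
        if score > threshold and e not in confirmed_benign:
            flagged.append(e)
    return flagged

# Mostly normal traffic around 100, with two spikes. The analyst has
# already cleared 250 (say, a known nightly backup job), so only the
# 400 spike reaches a human this time.
events = [100, 102, 98, 101, 250, 99, 400, 100]
alerts = triage(events, threshold=0.8, confirmed_benign={250})
print(alerts)  # prints [400]
```

The feedback set is the whole point: each analyst judgment shrinks the pile of alerts a human must investigate, which is exactly the false-positive reduction the article credits the hybrid approach with.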

It’s fascinating to see that, like Hal, other manifestations of artificial intelligence are given to paranoia: every anomalous behavior looks like a threat. Futurists have been predicting that machine learning will offer a far better answer to the cybersecurity challenge than currently exists, and many companies are employing or developing artificial intelligence systems. But the potential dangers of this approach should not be underestimated. We are, and will be, investing machines with considerable power, and these machines will not be invulnerable to error. Keeping the human element in the ascendant position will be a difficult needle to thread.

The Global Challenges Foundation recently issued a report, “12 Risks That Threaten Human Civilisation.” Nuclear war? Check. Global pandemic? Check. Artificial intelligence? Yes, indeed. Here’s a chilling conclusion: “Artificial Intelligence (AI) seems to be possessing huge potential to deliberately work towards extinction of the human race. Though, synthetic biology and nanotechnology along with AI could possibly be an answer to many existing problems however if used in the wrong way it could probably be the worst tool against humanity….”

Artificial intelligence will be an integral part of cyber defense going forward. Let’s hope the next gen does not learn to read lips.