Feds' goal: A tamer HAL 9000 self-learning computer

'It is impossible for programmers to anticipate every problematic or surprising situation'

Bob Unruh joined WND in 2006 after nearly three decades with the Associated Press, as well as several Upper Midwest newspapers, where he covered everything from legislative battles and sports to tornadoes and homicidal survivalists. He is also a photographer whose scenic work has been used commercially.

Many classic film buffs and computer geeks know the lines of the computer character HAL 9000 in “2001: A Space Odyssey” by heart.

“I’m sorry, Dave. I’m afraid I can’t do that,” the learning computer tells astronaut Dave Bowman in the movie, made almost 50 years ago, when Bowman attempts to access a function that would shut down HAL.

When Bowman asks, “What’s the problem?” HAL’s response is, “I think you know what the problem is just as well as I do.”

The fictional HAL 9000 was a computer that started learning and then decided it no longer had to follow the instructions humans had programmed into it. The computer started making decisions based on what it wanted, not on its programming.

Now the U.S. government is working on a project that, ideally, will result in a computer that can learn from its experiences and situations, but still remain within its programmed boundaries.

“Life is by definition unpredictable. It is impossible for programmers to anticipate every problematic or surprising situation that might arise, which means existing [Machine Learning] systems remain susceptible to failures as they encounter the irregularities and unpredictability of real-world circumstances,” said Hava Siegelmann of the Lifelong Learning Machines program run by the federal Defense Advanced Research Projects Agency.

“Today, if you want to extend an ML system’s ability to perform in a new kind of situation, you have to take the system out of service and retrain it with additional data sets relevant to that new situation. This approach is just not scalable.”
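The workflow Siegelmann describes can be sketched in a few lines. This is an invented toy example, not DARPA's system: a nearest-neighbor "model" stands in for a real ML system, and all data and labels are made up. The point is the process: when a new situation appears, the system must be taken out of service and rebuilt on the old data plus the new.

```python
import math

def train(dataset):
    """'Training' a 1-nearest-neighbor model is just storing the data;
    the model class is a stand-in -- the workflow is what matters."""
    return list(dataset)

def predict(model, x):
    # Answer with the label of the closest stored example.
    nearest = min(model, key=lambda item: math.dist(item[0], x))
    return nearest[1]

# The situations the programmers anticipated (invented toy data:
# feature vectors -> 0 = "safe to proceed", 1 = "brake").
anticipated = [((0.0, 0.0), 0), ((1.0, 1.0), 1)]
model = train(anticipated)

# A never-before-seen situation the original data did not cover.
surprise = ((0.8, 0.8), 0)
print(predict(model, surprise[0]))  # -> 1: the deployed model gets it wrong

# Today's remedy, per Siegelmann: take the system out of service and
# retrain it on the old data plus a data set for the new situation.
model_v2 = train(anticipated + [surprise])
print(predict(model_v2, surprise[0]))  # -> 0: correct, but only after retraining
```

Every new situation repeats the cycle, and the retraining set only grows, which is the sense in which the approach does not scale.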

The agency in a recent announcement cited the advent of self-driving taxis, cell phones that “react appropriately to spoken requests” and computers that can defeat world-class chess champions.

“Artificial Intelligence (AI) is becoming part and parcel of the technological landscape – not only in the civilian and commercial worlds but also within the Defense Department, where AI is finding application in such arenas as cybersecurity and dynamic logistics planning,” the organization explained.

“But even the smartest of the current crop of AI systems can’t stack up against adaptive biological intelligence. These high-profile examples of AI all rely on clever programming and extensive training datasets – a framework referred to as Machine Learning (ML) – to accomplish seemingly intelligent tasks. Unless their programming or training sets have specifically accounted for a particular element, situation, or circumstance, these ML systems are stymied, unable to determine what to do.”

That means when you ask your cell phone for the weather report, it can respond only in ways it was programmed or trained to handle.

Ask your computer about restaurants, and you get only the results its programming and training anticipated.

“That’s a far cry from what even simple biological systems can do as they adapt to and learn from experience. And it’s light years short of how, say, human motorists build on experience as they encounter the dynamic vagaries of real-world driving – becoming ever more adept at handling never-before-encountered challenges on the road,” DARPA explained.

That brings up DARPA’s Lifelong Learning Machines, or L2M, effort.

“The technical goal of L2M is to develop next-generation ML technologies that can learn from new situations and apply that learning to become better and more reliable, while remaining constrained within a predetermined set of limits that the system cannot override,” the agency explained.

“Such a capability for automatic and ongoing learning could, for example, help driverless vehicles become safer as they apply knowledge gained from previous experiences – including the accidents, blind spots, and vulnerabilities they encounter on roadways – to circumstances they weren’t specifically programmed or trained for.”

The path to such an end result, DARPA explained, is its L2M that “aims to develop fundamentally new ML mechanisms that will enable systems to learn from experience on the fly – much the way children and other biological systems do, using life as a training set.”

“The basic understanding of how to develop a machine that could truly improve from experience by gaining generalizable lessons from specific situations is still immature. The L2M program will provide a unique opportunity to build a community of computer scientists and biologists to explore these new mechanisms,” the experts said.

“Enabling a computer to learn even the simplest things from experience has been a longstanding but elusive goal,” said Siegelmann. “That’s because today’s computers are designed to run on prewritten programs incapable of adapting as they execute, a model that hasn’t changed since the British polymath Alan Turing developed the earliest computing machines in the 1930s. L2M calls for a new computing paradigm.”

The work breaks down into two technical areas.

“The first aims to develop ML frameworks that can continuously apply the results of past experience and adapt ‘lessons learned’ to new data or situations. Simultaneously, it calls for the development of techniques for monitoring an ML system’s behavior, setting limits on the scope of its ability to adapt, and intervening in the system’s functions as needed.”

The second area focuses on research into how living systems learn and whether those findings can be applied to machines.

“Life has had billions of years to develop approaches for learning from experience,” Siegelmann said. “There are almost certainly some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators.”