Quest for the Holy Grail of General AI

A Maryland software startup has laid claim to what could be a major breakthrough in artificial intelligence, saying it had achieved the "holy grail" of Artificial General Intelligence.

Z Advanced Computing announced it had developed a general AI system that can process and identify 3-D objects without first being trained on every possible view of those objects. In basic terms, it can think and learn more like a human than other machines.

This would represent a game-changing leap in AI, which to date has operated within fairly narrow parameters. Even the most impressive AI systems tend to work in specific areas, abiding by the rules that have been programmed into them. IBM’s Watson, for instance, has accomplished impressive feats in medicine, financial services, game shows and other areas.

But there are multiple Watsons, each tailored to the task at hand. That “specific AI” approach keeps each system in its lane. General AI would apply a machine’s intelligence across the board, allowing it to learn more on its own and apply its lessons to other areas without being trained in each one.

The team behind ZAC said its AI can recognize 3-D objects from any angle based on only a small number of training samples, instead of the tens of thousands (or hundreds of thousands) of samples required to train current deep neural networks.

“This is the first demonstration of General-AI techniques, which is fairly similar to the ways humans learn and recognize," ZAC’s Bijan Tadayon said in a statement.

ZAC isn’t claiming it’s all the way to AGI yet, noting its work with 3-D images is the “tip of the iceberg.” Nevertheless, “we have demonstrated a very complex task … [that’s] not at all possible with the use of deep neural networks or other specific-AI technologies,” Tadayon said, according to a report at Fanatical Futurist.

Thinking Alike

Even if ZAC’s breakthrough is the first example of general AI in action, it’s not alone in getting machines to learn on their own. Google’s DeepMind lab has reported steps toward general AI with its work in relational reasoning, an element of human intelligence. And DeepMind’s AlphaGo Zero and the more generalized Alpha Zero have been able to teach themselves Go, chess and other games without being tutored by human players. (So, in keeping with the holy grail theme and where general AI stands now, DeepMind might be able to tauntingly claim, like the French in “Monty Python and the Holy Grail,” that, “We already got one!”)

IBM, meanwhile, continues to make strides in AI, even if it hasn’t yet planted a flag on the AGI hill. Most recently, the company’s Project Debater did pretty well in a demonstration this month against a couple of experienced debate champs, drawing on hundreds of millions of newspaper and academic journal articles to put together arguments on topics that were assigned on the spot — without prior warning — and delivering them in a natural-language voice.

As impressive as it was, however, the demonstration also showed some of the current limits of AI. The Debater tended to take sentences from the documents in its database and add a couple of rhetorical flourishes, but didn’t tackle its subjects with the precision and nuance of a human, according to Futurism. In surveys after each debate, the audience said they found the Debater edifying but that the humans made their arguments better.

Team Game

Demonstrations are an effective way to highlight substantial progress, but people should also be wary of jumping to overblown conclusions about exactly what AI is currently capable of (like, for instance, the end of the human race). Often, the progress is more incremental than overpowering.

For example, Watson’s much-ballyhooed match on “Jeopardy” in 2011 against the show’s two biggest previous winners was impressive, particularly with regard to information retrieval and natural language, but Watson was also set up to succeed. For starters, it was a pretty easy Jeopardy match. Your modestly knowledgeable narrator watched the episode and knew the question to just about every answer, so it’s likely that human players Ken Jennings and Brad Rutter knew them all.

Watson, however, was better at timing its response signal, so it got to answer more of the questions and racked up more points for the win. It wasn’t “smarter” than Jennings or Rutter; it just had a quicker trigger finger — it was better at the kind of thing machines are always better at.

But the larger point was to demonstrate Watson’s ability as an effective instrument, which it has proved to be in a number of fields. IBM, in fact, said the point behind Debater isn’t to win arguments against humans but to show how AI can research, organize and present evidence that can help humans more quickly make better decisions.

Meanwhile, ZAC’s feat with general AI could represent an important step toward the next generation of AI. For one thing, it seems to be just the kind of thing the Defense Department is looking for with its Project Maven, which aims to automate analysis of full-motion video from surveillance drones. One of the problems DOD currently has with machine analysis of still images is the extensive training required before an AI machine can reliably identify specified objects. And DOD has consistently said that its goal for AI is human-machine teams whose parts complement each other.

Fully sentient robots are, for now, still a thing of the future. But AI systems are graduating to new levels.