Deep Blue Still Has Some Learning to Do


Someday, computers will be able to "heal" themselves. Run out of memory to carry out an operation? Software will recognize the shortage, reclaim memory from tasks that aren't crucial, and reassign it where it is needed most. But first, the computer might have to write the code needed to redistribute the memory and then decide which of its tools will carry out the operation.

This ability to analyze failure and carry out a form of deductive reasoning to solve the problem is something that's not too far off for computers, says Robert Levinson. And he has proof. The University of California at Santa Cruz computer science professor has developed a chess-playing program that chews its own cud over a losing match. By replaying the contest and finding the move or moves that led to its undoing, the program, called Morph, can make adjustments, test them, and then hold these refinements in its arsenal for the next challenge.
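The post-mortem loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Levinson's actual implementation: positions are simple labels, the evaluation function is supplied by the caller, and the "adjustment" is a crude weight nudge against the position where the score collapsed.

```python
# Illustrative sketch of Morph-style learning from a lost game
# (names and representation are assumptions, not the real system).

def find_blunder(game_positions, evaluate):
    """Replay a lost game and return the index of the position where
    the evaluation dropped the most -- the likeliest losing move."""
    worst_drop, worst_index = 0.0, None
    for i in range(1, len(game_positions)):
        drop = evaluate(game_positions[i - 1]) - evaluate(game_positions[i])
        if drop > worst_drop:
            worst_drop, worst_index = drop, i
    return worst_index

def learn_from_loss(game_positions, evaluate, weights, step=0.1):
    """Penalize the position reached by the blunder, so the program
    scores it lower -- and avoids it -- in the next challenge."""
    i = find_blunder(game_positions, evaluate)
    if i is not None:
        pos = game_positions[i]
        weights[pos] = weights.get(pos, 0.0) - step
    return weights
```

A real system would adjust weights on pattern features shared across positions rather than on single positions, but the replay-diagnose-adjust cycle is the same.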

It is this sort of analysis that makes Morph, which is in its third incarnation, a more advanced system than IBM's Deep Blue. "All Deep Blue is, is a program that executes moves. You can't say, 'Deep Blue believes the following things about chess,' because it's not a thinking machine - all of its moves are programmed into it," says Levinson, a computer-chess aficionado since the age of 10 and co-author of the upcoming paper "Deep Blue Is Still an Infant."

Despite Levinson's initial remarks about the computer that on Sunday finished off chess grandmaster Garry Kasparov in their six-game series, the researcher waxes reverent when he speaks of Deep Blue's virtues. What Deep Blue can do well is perform brute-force calculations, to the tune of 200 million chess positions per second. "When Deep Blue calculates 10 moves ahead, it calculates perfectly," he says.

But Deep Blue is not using artificial intelligence to compute its moves; it relies on sheer computing power and a strong search algorithm to evaluate different positions.

Humans, bereft of all the processing speed and power of Deep Blue, have to rely on deductive reasoning to calculate a much smaller number of possibilities. Levinson says the difference between Deep Blue and a human-like system is that the latter winnows down the possible moves through analysis derived from past experiences. And it is that quality he tries to mimic in Morph and an additional system, the Meta Reasoning Data Analysis Tool Allocator, or MR. DATA.

These tools are what Levinson calls learning-based systems, meaning they glean lessons from experience. When humans analyze failures, they are, in essence, examining models of themselves and reliving situations, replaying in their minds different scenarios in an effort to come to a successful conclusion. Levinson says MR. DATA has at its disposal models of several analysis systems, including itself. Given a problem such as a failed chess match, MR. DATA can, based on its experience with the tools, decide which ones will be best for analyzing the failure and devising possible solutions.
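The tool-selection idea above can be made concrete with a small sketch. Everything here is an assumption for illustration: the tool names, the way problems are categorized, and the simple win-rate scoring stand in for however MR. DATA actually weighs its experience.

```python
# Hypothetical sketch of experience-based tool allocation:
# pick the analysis tool with the best past success record
# on this kind of problem.

def pick_tool(problem_kind, history):
    """history maps (tool, problem_kind) -> list of past outcomes
    (1 = success, 0 = failure). Return the tool with the highest
    success rate on problems of this kind."""
    best_tool, best_rate = None, -1.0
    for (tool, kind), outcomes in history.items():
        if kind != problem_kind or not outcomes:
            continue
        rate = sum(outcomes) / len(outcomes)
        if rate > best_rate:
            best_tool, best_rate = tool, rate
    return best_tool
```

Because the system keeps a model of itself among its tools, "itself" can appear in the history like any other entry and be chosen, or passed over, on the same terms.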

For example, were MR. DATA playing Kasparov, it might be boning up on what it did wrong in a loss in its off hours. "It could be playing the last game and analyze its mistaken move. Then it could construct a function to get around the error and play itself [with the new function] 100 times to test it," Levinson says.
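Levinson's test-it-100-times step can be sketched as a simple self-play trial. The game itself is abstracted away here; `play_game` is an assumed stand-in that returns 1 when the candidate version wins, and the acceptance rule (beat the old version more often than not) is an illustrative choice, not the system's documented criterion.

```python
# Hypothetical sketch of validating a candidate fix by self-play:
# play the patched program against the unpatched one many times
# and adopt the patch only if results improve.

def self_play_trial(play_game, candidate, baseline, trials=100):
    """Play candidate vs. baseline `trials` times; return the
    candidate's win rate (play_game returns 1 on a candidate win)."""
    wins = sum(play_game(candidate, baseline) for _ in range(trials))
    return wins / trials

def accept_patch(play_game, candidate, baseline, threshold=0.5):
    """Keep the new function only if it beats the old one more
    often than the threshold."""
    return self_play_trial(play_game, candidate, baseline) > threshold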

MR. DATA represents a new step in artificial intelligence. Thirty years ago, systems were developed to tackle multiple tasks - none of which they could do well. "They failed miserably," says Levinson. Then the AI pendulum swung to the other extreme, resulting in the development of expert systems, each focused on performing a single task. But with computing power multiplying rapidly and the advent of sophisticated yet easier-to-use software tools such as Visual Basic and scripting languages, AI systems can begin to take on multiple duties again - successfully, Levinson says.

So MR. DATA isn't limited to chess playing. Levinson believes there are many problems analogous to the decision-making and failure-analysis capabilities presented in a chess match, including programming. With object-oriented programming tools breaking code into building blocks that are easier to handle, it's possible to train a PC to write its own programs, Levinson says.

"If a program has a model of itself, it could tell it had a bug, analyze the failure, write a correction, and test it out," he says.

Still, there are limits to what a learning-based system can do. Levinson concedes that MR. DATA is in no shape to take on Kasparov.