IBM’s DeepQA project’s computer, Watson, will compete on the Jeopardy! game show on Feb. 14-16 against two human champions, testing a computer system’s ability to answer natural language questions. In addition to raising interest in engineering education, advances in artificial intelligence and natural language processing will continue to improve industrial computers, human-machine interfaces, robotics, and automated guided vehicles.

Paul F. Grayson

02/07/2011

A computer system that can directly and precisely answer natural language questions over an open and broad range of knowledge has been envisioned by scientists and writers since the advent of computers themselves, according to IBM; consider, for example, the "Computer" in Star Trek. (See links to related articles, below.) Taken to its ultimate form, broad and accurate open-domain question answering may represent a crowning achievement in the field of artificial intelligence (AI).

Advances in artificial intelligence and natural language understanding will continue to improve industrial computers, human-machine interfaces, robotics, and automated guided vehicles (AGVs), improving manufacturing, controls, automation, and instrumentation applications. High-visibility events like a Jeopardy! competition can serve to spark interest in science, technology, engineering, and math (STEM) topics among young people. AI advances could expand the capabilities of AGVs, including swarm robotics (see more below), human-machine interfaces, and other applications discussed in this AIMing for Automated Vehicles blog.

While current computers can store and deliver a wealth of digital content created by humans, they are unable to operate over it in human terms, IBM explained about its DeepQA project. The quest to build a computer system that answers open-domain questions is driven by a broader vision that sees computers operating more effectively in human terms rather than strictly computer terms, the company said. They should function in ways that understand complex information requirements as people would express them, for example, in natural language questions or interactive dialogs. IBM said computers should deliver precise, meaningful responses, and synthesize, integrate, and rapidly reason over the breadth of human knowledge as it is most rapidly and naturally produced -- in natural language text.

“Possibilities for enriching our global community and accelerating the pace at which we can exploit and expand human knowledge, solve problems and help each other in ways never before imagined,” said IBM, “rest on our ability to bring information technology out of the era of operating in computer terms and into the era of operating in human terms.”

IBM said its DeepQA project shapes a grand challenge in computer science that aims to illustrate how the wide and growing accessibility of natural language content and the integration and advancement of natural language processing, information retrieval, machine learning, knowledge representation and reasoning, and massively parallel computation can drive open-domain automatic question-answering technology to a point where it clearly and consistently rivals the best human performance.

First stop: Jeopardy! Challenge, Feb. 14-16, 2011

Over the past four years, a team of IBM scientists has set out to accomplish a grand challenge: build a computing system that rivals a human’s ability to answer questions posed in natural language with speed, accuracy, and confidence, the company said. The IBM computing system, named Watson, is scheduled to compete on the Jeopardy! TV game show on Feb. 14-16, 2011, against the show’s two most successful and celebrated contestants -- Ken Jennings and Brad Rutter.

Jeopardy! provides the ultimate challenge because the game’s clues involve analyzing subtle meaning, irony, riddles, and other complexities in which humans excel and computers, traditionally, do not.

Watson's ability to understand the meaning and context of human language, and to rapidly process information to find precise answers to complex questions, holds enormous potential to transform how computers help people accomplish tasks in business and their personal lives, IBM said, because computers like Watson will enable people to rapidly find specific answers to complex questions. Applications range from diagnosing patients more accurately in healthcare to improving online self-service help desks, providing tourists and citizens with specific information about cities, offering prompt customer support via phone, and much more, IBM said.

Human vs. machine contest may raise interest in engineering

A human vs. machine contest should square off the competitors in a way that lets the audience transparently size them up pound-for-pound, IBM noted. Since everything a human brings to the table is right there in front of the audience (no Ethernet, wireless, 3G, or other lifelines allowed), the same should be true for the computer. This is why Jeopardy! and IBM intend to put Watson on the stage in plain sight, with no connections to other computers or data resources of any kind: what you see is what you get.

The computer will receive the clue electronically, precisely when the human players see it, and after that they're off. Likely comparisons include how much information Watson can store in its memory banks versus what a human can store in their wetware, how much parallelization the human brain is capable of versus what Watson can do, the number of petaflops, and others, IBM said. All of these will be fascinating to explore if the competition (two matches over three days) proves worthy.

One of the interesting comparisons will be the time it takes for the computer to understand a natural language clue well enough to accurately assess whether it knows the answer, said IBM: can the computer do this in time to competitively buzz in, and can it deliver an answer by the time one is required? Humans on Jeopardy! excel at this and appear able to accurately know if they know [and can retrieve and articulate] the right answer very quickly after seeing the clue.

While computers have demonstrated that they can quickly recall documents based on pre-indexed keywords, knowing that a term from the potentially thousands of returned results correctly answers the question is a whole other ball game, IBM said. It requires on-the-fly, deep analysis of large volumes of language and the production of accurate probabilities that a term or combination of terms is the right answer, all in time to buzz and respond. Not all clues are created equal, of course. Watson will not be able to answer every clue from its stored content, but it still must determine whether or not it knows the right answer.
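The decision IBM describes, scoring candidate answers and buzzing only when the estimated probability of being right is high enough, can be illustrated with a toy sketch. This is not DeepQA's actual code; the function name, the candidate list, and the 0.5 threshold are all hypothetical:

```python
def decide_to_buzz(candidates, threshold=0.5):
    """Decide whether to buzz in on a clue.

    candidates: list of (answer, probability) pairs produced by upstream
    analysis. Returns (answer, True) if the best candidate's estimated
    probability clears the threshold, else (None, False) -- i.e., stay quiet.
    """
    if not candidates:
        return None, False
    # Pick the candidate with the highest estimated probability of being correct
    best_answer, best_prob = max(candidates, key=lambda c: c[1])
    if best_prob >= threshold:
        return best_answer, True
    return None, False

# Example: three candidate answers with (made-up) estimated probabilities
candidates = [("Toronto", 0.14), ("Chicago", 0.71), ("Detroit", 0.08)]
print(decide_to_buzz(candidates))  # ('Chicago', True)
```

In a real game, a system like this would presumably also weigh game state (score, clue value, opponents) when setting the threshold, not just the raw confidence estimate.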

If it operated on one CPU, it could take Watson two hours to answer a single question, estimated IBM, which is a far cry from the three seconds it takes for a successful Jeopardy! competitor. For Watson to achieve the speed of the human brain in delivering the correct answer requires thousands of POWER7 computing cores working in a massively parallel system, IBM said.
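IBM's arithmetic can be checked directly: roughly two hours on one CPU against a roughly three-second response window implies a speedup on the order of thousands, which lines up with the thousands of POWER7 cores IBM cites. A minimal sketch, using the article's round figures and assuming perfect parallel scaling:

```python
# Round figures from the article
single_cpu_seconds = 2 * 60 * 60   # ~2 hours per question on one CPU
target_seconds = 3                 # ~3 s for a competitive Jeopardy! response

# Ideal speedup needed to close the gap, assuming perfect parallel scaling
required_speedup = single_cpu_seconds / target_seconds
print(required_speedup)  # 2400.0
```

Real workloads never scale perfectly, so the actual core count needed would exceed this ideal figure, consistent with the "thousands of cores" IBM describes.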

This promises to be an interesting contest indeed. How can artificial intelligence like this help automated guided vehicles, advance young people's interest in engineering, and make manufacturing more efficient? Leave a comment below.

Swarm robotics: Debugged naturally for 120 million years - It seems odd to say that software for swarm robotics has been debugged naturally for 120 million years, but that information and 24 robots smaller than a toaster helped James McLurkin, MIT roboticist, explain three globally important things engineers should do.

Artificial Intelligence ...Within manufacturing - Perhaps we don't hear much about artificial intelligence (AI) methods used within today's technologies because it's slightly unnerving when computers emulate human thinking. Yet we, and computers themselves, continue to improve the way AI works quietly in the background to optimize, reduce process costs, and improve timing and product quality.