The show demands an encyclopaedic knowledge and an ability to understand complex verbal clues. Watson will face the two most successful players ever to appear on Jeopardy!. By proving that it can compete with elite human players, Watson gives us a glimpse of a future in which a new breed of smart software permeates our lives.

When IBM first suggested a Jeopardy! machine four years ago, many artificial intelligence researchers were doubtful the company would succeed. Computers are great at following clearly defined rules, but Jeopardy! questions can involve puns and clues-within-clues, with topics ranging from pop culture to technology.

Jeopardy! clues are also unusual in that they are phrased as answers and contestants are required to supply the right question. For example, if the host reads "This cigar-smoking prime minister led Britain during the second world war", a contestant might reply "Who is Winston Churchill?".

Creating a set of rules that a machine could use to understand and answer the game's questions would be impossible, as the questions are too complex. Instead, IBM's team, led by David Ferrucci and based in Yorktown Heights, New York, gave their computer a few rules and a huge amount of memory and processing power.

Huge databases

The team loaded 200 million pages of text from encyclopaedias, newspapers and other sources into Watson's memory. They also added custom-built stores of data on geographic facts and other types of information.

Watson starts by guessing the subject of a question. It knows, for example, that the word "this" often precedes the subject. So, if a question begins "This 19th-century novelist...", Watson primes itself to search for writers.
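That cue-spotting step can be sketched in a few lines of Python. This is a toy illustration of the idea only, with a made-up word list; Watson's real type-detection system was far richer.

```python
# Toy mapping from cue words to broad answer types -- invented for
# illustration, not Watson's actual type system.
TYPE_CUES = {
    "novelist": "writer",
    "painter": "artist",
    "minister": "politician",
}

def guess_answer_type(clue: str):
    """If the clue contains 'this', scan the following words for a known cue."""
    words = clue.lower().replace("-", " ").split()
    if "this" not in words:
        return None
    for word in words[words.index("this") + 1:]:
        answer_type = TYPE_CUES.get(word.strip(".,?!\""))
        if answer_type:
            return answer_type
    return None

print(guess_answer_type("This 19th-century novelist wrote Bleak House"))  # writer
```

A clue with no "this" cue simply yields no type, mirroring the fact that Watson cannot always narrow its search in advance.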

Once it has worked out what it is looking for, Watson searches its huge databases for the answer. Thanks to a processor array around 2000 times more powerful than a desktop computer, Watson is able to search the databases and come up with thousands of possible answers in less than 3 seconds.

To decide which answer is most likely to be the right one, Watson uses 100 different tests to rate its confidence in each answer.

For example, if Watson thinks that "1961" is the answer to "President Obama was born in this year", it would search for "President Obama was born in 1961". If it sees matches in sources that it trusts, like encyclopaedias, Watson ups its confidence in that answer.
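A stripped-down version of that evidence check can be sketched as follows, with a tiny hand-made corpus standing in for Watson's trusted sources:

```python
import re

# A tiny hand-made corpus standing in for Watson's trusted sources.
TRUSTED_SOURCES = [
    "Barack Obama was born in 1961 in Honolulu, Hawaii.",
    "Obama, born in 1961, became the 44th US president.",
    "The president was inaugurated in 2009.",
]

def words(text):
    """Lower-case word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def evidence_score(clue_template, candidate):
    """Substitute the candidate into the clue, then reward overlap
    with passages from trusted sources."""
    statement = words(clue_template.replace("this year", candidate))
    return sum(len(statement & words(passage)) / len(statement)
               for passage in TRUSTED_SOURCES)

clue = "President Obama was born in this year"
print(evidence_score(clue, "1961") > evidence_score(clue, "2009"))  # True
```

The right candidate scores higher because the substituted statement matches more of what the trusted passages actually say.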

The tests also check for word-play, such as puns. Faced with a question about an "arresting landscape painter", Watson looks up meanings of "arresting" and checks for connections with the names of famous landscape painters. Linked answers – in this case, "Constable" – get a higher confidence score.

Full of confidence

Once it has carried out these tests, Watson takes the highest-rated answer and, if the confidence level attached to it is high enough, tries to buzz in and answer the question.
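Pulled together, that buzz decision can be sketched like this. The candidate scores, weights and threshold below are invented for illustration; Watson learned its real weighting from thousands of past clues.

```python
# Each candidate answer gets scores from many independent tests.
# Weights and threshold are made up for illustration.
def combined_confidence(test_scores, weights):
    """Weighted average of per-test scores, in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(weights[name] * score
               for name, score in test_scores.items()) / total_weight

BUZZ_THRESHOLD = 0.5

candidates = {
    "Winston Churchill": {"source_match": 0.9, "type_match": 0.8, "wordplay": 0.1},
    "Neville Chamberlain": {"source_match": 0.3, "type_match": 0.8, "wordplay": 0.0},
}
weights = {"source_match": 3.0, "type_match": 1.0, "wordplay": 1.0}

best, confidence = max(
    ((name, combined_confidence(scores, weights))
     for name, scores in candidates.items()),
    key=lambda pair: pair[1],
)
# Churchill's combined confidence (0.72) clears the threshold, so buzz.
if confidence >= BUZZ_THRESHOLD:
    print(f"Buzz: Who is {best}? (confidence {confidence:.2f})")
```

If no candidate clears the threshold, the program stays silent, which is exactly the behaviour described above: Watson only buzzes when its confidence is high enough.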

None of the tests represents a major breakthrough in language processing, but by combining them all Watson has leaped ahead of rival question-and-answer systems. The outcome of the pre-recorded final, which will begin airing in the US tonight, is still a carefully guarded secret. Based on practice games in January, Watson is expected to win.

Whatever the outcome, the fact that a machine is able to compete against the most successful players ever to appear on Jeopardy! "is a remarkable achievement", says Boris Katz, an artificial intelligence researcher at the Massachusetts Institute of Technology.

Watson wasn't built just to play in game shows, however. The software could help with any task that involves a large number of text documents. For instance, if patient records and a database of medical literature are uploaded to its "brain", Watson would become "Dr Watson".

Dave Gondek, an IBM researcher, imagines a situation in which voice-recognition technology is used to relay a patient-doctor conversation to Watson. Watson's remarkable speed means that by the end of the conversation, it would have searched its databases of medical literature and would be able to suggest possible diagnoses, each with its own confidence level.

Intelligence services might also have a use for Watson. Members of the Watson team are part of a machine-reading research programme funded by the Defense Advanced Research Projects Agency (DARPA), a US government body based in Arlington, Virginia.

Report overload

DARPA is interested in machine reading in part because intelligence analysts are faced with far more reports than they can synthesise, says Eric Nyberg, a computer scientist at Carnegie Mellon University in Pittsburgh who worked with the Watson team. Instead of having to study every report, analysts could use Watson to extract information on, say, how individuals discussed in the documents are linked.
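In its simplest form, that report-linking idea reduces to counting which names appear together. Here is a toy sketch with invented reports and a made-up watchlist, nothing like the scale or sophistication of a real machine-reading system:

```python
from collections import defaultdict
from itertools import combinations

# Invented reports and watchlist -- purely illustrative.
REPORTS = [
    "Smith met Jones in Vienna to discuss the shipment.",
    "Jones later wired funds to Brown.",
    "Brown and Smith were seen together in Geneva.",
]
PEOPLE = {"Smith", "Jones", "Brown"}

def link_people(reports):
    """Count how often each pair of watchlist names appears in the same report."""
    links = defaultdict(int)
    for report in reports:
        present = sorted(name for name in PEOPLE if name in report)
        for pair in combinations(present, 2):
            links[pair] += 1
    return dict(links)

print(link_people(REPORTS))
```

An analyst would then query the resulting link counts instead of reading every report in full.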

Watson's potential also extends to everyday tasks. It is not currently connected to the internet, but if it were given an index of the web, it could take a question expressed in natural language and return a one-line answer. "Watson could replace Google for some kinds of searches," says Nyberg.

Like a human brain, a Watson search engine would continually be adding to its knowledge. The software could "learn" by searching for new documents to add to its database, so it would automatically update its database of search results as new facts emerge. Prem Natarajan of Raytheon BBN Technologies in Boston, Massachusetts – one of the companies involved in the DARPA project – compares this application to an oracle.

Thankfully, perhaps, Watson isn't infallible: it can find many simple questions baffling. This is in part because the software relies heavily on finding text that looks like the right answer to a question. As a result, it misses information that is too obvious to have been written down. Not only that, but Watson sometimes thinks that fictional characters are real, says Gondek. It once named the first woman in space as "Wonder Woman".

To get a grasp on such common sense issues, Watson will need to improve its understanding of the questions it is asked, rather than just searching for answers that look plausible. And that's a much more difficult challenge.

"Watson is an awesome machine," says Katz. "But sometimes it makes amazingly silly mistakes. That should tell researchers that we're not done. We've not even scratched the surface."
