Both agents are state-of-the-art constructions,
incorporating the latest AI research in chess playing, natural-language understanding,
planning, etc. But because of the overwhelming combinatorics of chess, neither
they nor the fastest foreseeable computers would be able to search the entire
game tree to find out whether White has a forced win. Why then do they come to
such an odd conclusion about their own knowledge of the game? The chess scenario
is an anecdotal example of how inaccurate cognitive models can lead artificial
agents to behave less than intelligently. In this case, the agents' model of
belief is incorrect: they assume that an agent knows all the consequences of
his beliefs. S1 knows that chess is a finite game,
and thus reasons that, in principle, knowing the rules of chess is all that is
required to figure out whether White has a forced initial win. After learning
that S2 does indeed know the rules of chess, he comes to the erroneous conclusion
that S2 also knows this particular consequence of the rules. And S2 himself, reflecting
on his own knowledge in the same manner, arrives at the same conclusion, even
though in fact he could never carry out the computations necessary to demonstrate
it.
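
The flawed inference can be made explicit. As a sketch in standard doxastic-logic
notation (the symbols below are ours, not the paper's), the agents' model of belief
licenses closure under logical consequence:

\[
\frac{B_i\,\varphi \qquad \varphi \vdash \psi}{B_i\,\psi}
\qquad \text{(belief closed under consequence)}
\]

Let $R$ denote the rules of chess and $W$ the proposition that White has a forced
win from the initial position. Because chess is a finite game, the question is
decidable in principle: either $R \vdash W$ or $R \vdash \lnot W$. S1's reasoning
is then an instance of the closure rule: from $B_{S_2}\,R$ he concludes
$B_{S_2}\,W$ or $B_{S_2}\,\lnot W$, i.e., that S2 knows the answer. The rule is
unsound for resource-bounded agents, since the derivation it presupposes may be
astronomically long; here it would amount to searching a game tree of roughly
$35^{80} \approx 10^{123}$ positions (Shannon's rough figures of branching factor
35 over 80 plies, used here only to indicate scale).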