Abstract

This paper compares the use of hand-translated and automatically translated documents in a relevance feedback experiment for cross-language information retrieval. The documents were translated both by hand and by machine, and judgments were collected from a range of subjects. Subjects report that the automatic translations are poor and difficult to understand. Despite this, using automatic rather than manual translations yields indistinguishable results with relevance feedback: subjects correctly judge whether documents are relevant with equal probability whether the documents are hand- or machine-translated, showing an immediate practical benefit of current AI systems.