Google has published its search quality rater guidelines for the Google Assistant and voice search on the Google Research blog. These are the guidelines contractors use to evaluate Google's search results. They are similar to the web search quality guidelines but differ in that there is no screen to look at when evaluating results; instead, raters evaluate the voice responses from the Google Assistant.

Google explained, “The Google Assistant needs its own guidelines in place, as many of its interactions utilize what is called ‘eyes-free technology,’ when there is no screen as part of the experience.” Google has designed machine learning algorithms to try to make the voice responses and answers “grammatical, fluent and concise.” Google said it asks raters to make sure that answers are satisfactory across several dimensions:

Information Satisfaction: the content of the answer should meet the information needs of the user.

Length: when a displayed answer is too long, users can quickly scan it visually and locate the relevant information. For voice answers, that is not possible. It is much more important to ensure that we provide a helpful amount of information, hopefully not too much or too little. Some of our previous work is currently in use for identifying the most relevant fragments of answers.

Formulation: it is much easier to understand a badly formulated written answer than an ungrammatical spoken answer, so more care has to be placed in ensuring grammatical correctness.

Elocution: spoken answers must have proper pronunciation and prosody. Improvements in text-to-speech generation, such as WaveNet and Tacotron 2, are quickly reducing the gap with human performance.
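Google's post describes these as four separate dimensions a rater scores rather than a single overall grade. A minimal sketch of how such a scorecard might be modeled is below; the class, field names, 1–5 scale, and pass threshold are all illustrative assumptions, not anything the guidelines actually define:

```python
from dataclasses import dataclass


@dataclass
class VoiceAnswerRating:
    """One rater's evaluation of a single spoken answer (illustrative only).

    Each field maps to one of the four dimensions from Google's blog post,
    scored here on an assumed 1-5 scale.
    """
    information_satisfaction: int  # does the content meet the user's need?
    length: int                    # is the amount of information helpful?
    formulation: int               # is the answer grammatically well formed?
    elocution: int                 # pronunciation and prosody of the audio

    def is_satisfactory(self, threshold: int = 3) -> bool:
        # A spoken answer has to hold up on every dimension, not just on
        # average: a perfect fact read with broken prosody still fails.
        scores = (
            self.information_satisfaction,
            self.length,
            self.formulation,
            self.elocution,
        )
        return all(score >= threshold for score in scores)


# Example: strong content, but the text-to-speech delivery is poor.
rating = VoiceAnswerRating(
    information_satisfaction=5, length=4, formulation=4, elocution=2
)
print(rating.is_satisfactory())  # prints False: weak elocution sinks it
```

Requiring every dimension to clear the bar (rather than averaging) mirrors the spirit of the list above: length, formulation, and elocution each matter on their own precisely because a listener cannot skim past a flaw the way a reader can.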

The guidelines, only seven pages long, can be downloaded as a PDF here.