Perhaps Google should change the wording of its "Guess it for me" button from "I'm feeling lucky" to "I'm feeling psychic". The search giant has unveiled a new feature that will predict which page a user will choose from a list of search results and begin fetching its data – even before they have clicked on the link.

Designed to shave between 2 and 5 seconds off the overall search query time, "Instant Pages" builds on Google Instant, introduced in September 2010, which produces a page of search results before the user has finished typing their query or pressed the enter key.

Google fellow and search scientist Amit Singhal introduced the new features at Google's Inside Search event in Mountain View, California.

Instant Pages uses the user's Google search history, the relevance of each search result and around 200 other algorithmic factors to determine which link they are most likely to click on. As soon as the search results appear, it begins contacting the server to load that page in the background.
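The decision Instant Pages makes can be sketched roughly as follows: score each result, and prefetch the top one only when it is a clear enough favourite to justify the extra bandwidth. The function name, the `(url, probability)` input shape and the 0.6 threshold below are all illustrative assumptions, not Google's actual implementation.

```python
def choose_prefetch(results, threshold=0.6):
    """Return the URL to prefetch in the background, or None.

    `results` is a list of (url, click_probability) pairs; in Google's
    case the probability would be derived from relevance, search history
    and the other algorithmic signals mentioned above.
    """
    if not results:
        return None
    url, prob = max(results, key=lambda r: r[1])
    # Only prefetch when the model is confident the user will pick this link.
    return url if prob >= threshold else None

# Example: one dominant result, so it is worth fetching before the click.
results = [
    ("https://example.com/best-match", 0.82),
    ("https://example.com/runner-up", 0.10),
    ("https://example.com/long-shot", 0.05),
]
print(choose_prefetch(results))  # https://example.com/best-match
```

When no single result dominates, the sketch returns `None` and nothing is prefetched, which is the trade-off any such system has to make between saved seconds and wasted requests.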

Singhal said that, on average, users spend 9 seconds entering a query and 15 seconds sifting through results, while Google's processing between the two takes only 1 second.

"We're obsessed with speed," said Singhal. "We call speed the killer app. None of us have enough time, and last year's Google Instant was one of the biggest improvements we've made in getting information to users quicker."

Singhal also announced that Google's voice search, which currently allows mobile phone users to speak their search query, will be extended to desktop searches. The feature will be accessible for Chrome and Firefox users later this week, via a small microphone symbol beside the Google search box. Initially the feature will be in English only, but other languages will be added later.

A new image search feature will also invite users to upload an image, drag and drop one, paste a URL or use a browser extension to provide a reference image. Google will then identify the image by comparing it to its database and return the subject, location and other details.

Singhal said, however, that facial recognition was not part of the image search process, and that all images shared during searches, including those that are uploaded, are anonymised, kept out of public view and treated in the same way as text search queries.

"If you uploaded a picture of John Lennon, it might find other similar pictures of John Lennon, but it wouldn't know it was John Lennon," he said. "It's an engine for analysing colour, lines and texture, not for facial recognition."