Chris Sherman points to a new study showing that the big four generic web search engines have even less overlap in their results than previous studies found, and that wasn't much overlap to begin with. You can find his posting here.
Just how unique are the results on each engine? On average:
73.9% of Ask Jeeves first page results were unique to Ask Jeeves
71.2% of Yahoo first page results were unique to Yahoo
70.8% of MSN search first page results were unique to MSN search
66.4% of Google first page results were unique to Google
Hmmmm.
The study looked at listings for more than 485,000 first-page search results. First-page results have two qualities that make them important. If I remember my old studies, something like 98% of ‘ordinary’ searchers do not go past the third page of results and 95% don’t get past the first. Also, the first page is the pot of gold at the end of the rainbow for search engine optimization (SEO) consultants – those folks who try to ensure that their clients’ pages (not just their ads or sponsored links) show up on the first page of hits, using a wide range of techniques and strategies.
The study also found that:
84.9% of total results are unique to one engine
11.4% of total results were shared by any two engines
2.6% of total results were shared by any three engines
1.1% of total results were shared by any four engines
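Breakdowns like the one above are straightforward to compute once you treat each engine's first page as a set of URLs: count how many engines return each URL, then bucket the totals. A minimal sketch in Python, using made-up result sets rather than the study's actual data:

```python
from collections import Counter

def overlap_breakdown(results_by_engine):
    """For each distinct URL, count how many engines returned it on their
    first page, then report what fraction of all URLs fall into each
    bucket (seen by exactly 1 engine, exactly 2, and so on)."""
    engine_counts = Counter()
    for urls in results_by_engine.values():
        for url in set(urls):          # dedupe within a single engine
            engine_counts[url] += 1
    total = len(engine_counts)
    buckets = Counter(engine_counts.values())
    return {k: buckets[k] / total for k in sorted(buckets)}

# Hypothetical first-page results for four engines (illustrative only).
results = {
    "Google":     ["a", "b", "c", "d"],
    "Yahoo":      ["a", "e", "f", "g"],
    "MSN":        ["b", "h", "i", "j"],
    "Ask Jeeves": ["a", "k", "l", "m"],
}

breakdown = overlap_breakdown(results)
# breakdown[1] is the share of URLs unique to one engine,
# breakdown[2] the share appearing on exactly two engines, etc.
```

In this toy example, 11 of 13 distinct URLs (about 84.6%) appear on only one engine's first page; whether the study counted overlap this way, or against each engine's full result set, is exactly the ambiguity raised in the comments below.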
It’s worth a quick read and the questions I would ask about our library strategies would be:
1. We offer many databases for searching inside the library’s walls and many for virtual access through our websites. I think we can safely assume that the ‘quality’ information in our licensed resources has even less overlap with the public web content accessible through searches.
2. I think we can also assume that few hits in our licensed resources are being manipulated extensively by marketers and SEO experts.
3. Many of our library websites choose to offer our users a link to one or more of the popular search engines. With so little overlap in the search results (which could be driven by the ranking or search algorithms, by differences in web harvesting, or even by the timing of the crawls that build each index), should we prefer a metasearch engine like Dogpile, or build our own using federated searching technologies and OpenURL resolvers?
4. Can we get better service delivered to our users by combining OPAC results seamlessly into web searches? Our experience at SirsiDynix is that OPAC use goes up dramatically when users ‘trip’ over the results in a federated search instead of having to ‘remember’ to use the rich OPAC, usually a library’s most valuable asset when measured by investment over time.
There are a lot of questions here, and the answers may be quite different for different types of libraries and communities. It’s interesting to ponder, though. You can review material about Sirsi SingleSearch or Sirsi Resolver on our website.
Stephen

The numbers as reported do not pass the smell test. For example, how can we explain that Google, which we would expect to have the largest corpus, has *fewer* unique results than Jeeves, which one would expect to be smaller and hence mostly a subset of Google?
Also, it is unclear what the numbers mean. For example, they claim that “84.9% of total [first page] results are unique to one engine”. If what they mean is that these results do not appear anywhere in the other engines’ entire result sets, then this does not seem possible unless the coverage of Jeeves is below 10% of the web.
I wager that the numbers are being misinterpreted in the summary report. Is there a link to the actual study from the University of Pittsburgh?

I have to agree with Alex; I wonder about these numbers. Remember that nifty little search page at jux2.com? It did a great job of showing search engine overlap, or lack thereof, and was great for illustrating just what overlaps and what doesn’t. Search Engine Watch has said the site was taken down and recommends some alternative engine overlap evaluation tools: http://blog.searchenginewatch.com/blog/050531-113318

You’re both right, perhaps. However, the earlier studies you mention are not measuring anything close to what this one attempts to measure. It’s not discussing the larger issue of search engine overlap as previous studies have done – it’s measuring the overlap on just the first page. Since that’s where the vast majority of people stop, it seems to me a better indicator of the differences among the search engines, especially where optimization is in play. Measuring overlap of hits that are never seen seems a bit like a tree falling in the forest, eh?


About The Author

Stephen Abram is a librarian and principal with Lighthouse Consulting Inc., and executive director of the Federation of Ontario Public Libraries. He blogs on library strategies for direction, marketing, technology and user alignment.