Recently, the American Customer Satisfaction Index (ACSI) reported that the perceived quality of Google and other search platforms is on a downward trajectory. In particular, Google’s satisfaction score has declined two percentage points to 82 out of a possible 100.

Related to this trend, search advertising ROI is also declining. According to a report published recently by Analytic Partners, the return on investment from paid search dropped by more than 25% between 2010 and 2016.

In all likelihood, a falling ROI can be linked to lower satisfaction with search results. But let’s look at things a little more closely.

First of all, Google’s customer satisfaction score of 82 is actually better than the 77 score it received as recently as 2015. In any case, a score of 82 out of 100 isn’t too shabby in customer satisfaction surveys.

Moreover, Google has been in the business of search for a solid two decades now – an eternity in the world of the Internet. Google has always maintained a laser focus on optimizing the quality of its search results, given that search is by far the company’s biggest revenue-generating product – its golden goose.

Obviously, Google hasn’t been out there with a static product. Far from it: Google’s search algorithms have been steadily evolving, to the degree that search results stand head-and-shoulders above where they were even five years ago. Back then, search queries typically returned generic results that weren’t nearly as well-matched to the actual intent of the searcher.

That sort of improvement is no accident.

But one thing has changed pretty dramatically – the types of devices consumers are using to conduct their searches. Just a few years back, chances are someone would be using a desktop or laptop computer, where viewing SERPs containing 20 results was perfectly acceptable – and even desirable for quick comparison purposes.

Today, a user is far more likely to be initiating a search query from a smartphone. In that environment, searchers don’t want 20 plausible results — they want one really good one.

You could say that “back then” it was a browsing environment, whereas today it’s a task environment, which creates a different mental framework within which people receive and view the results.

So, what we really have is a product – search – that has become increasingly better over the years, but the ground has shifted in terms of customer expectations.

Simply put, people are increasingly intolerant of results that are even a little off-base from the contextual intent of their search. And then it becomes easy to “blame the messenger” for coming up short – even if that messenger is actually doing a much better job than in the past.

It’s like so much else in one’s life and career: The reward for success is … a bar that’s set even higher.

Suddenly, the conflict between Google and the European Union regarding the censoring of search results has taken on worldwide proportions.

This past week, the courts upheld an order from CNIL, the French government’s data protection authority, requiring Google to broaden the “right to be forgotten” by censoring search results worldwide – not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.) But in its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would mean that the right could be circumvented too easily.

It’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine, such as google.fr and google.co.uk. But identical searches can easily be performed using google.com, meaning that anyone trying to find “forgotten” information on an individual can do so, irrespective of the European standard.

As I blogged back in May, the European Court of Justice’s 2014 ruling means that Google is required to allow residents of EU countries to request the deletion of links to certain harmful or embarrassing information that may appear about them in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompassed links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure. Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, with their status as yet unresolved.

Obviously, for this activity to spread from covering just European search engines to include potentially the entire world isn’t what Google has in mind at all. (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results. Today, I don’t think we’re anywhere closer to knowing.

In the Western world, humans have been viewing and processing information in the same basic ways for hundreds of years. It’s a subconscious process that entails making the most judicious use of time and effort while foraging for information.

Because of how we Westerners read and write – left-to-right and top-to-bottom – the way we’ve evolved searching for information mirrors the same sort of behavior.

And today we have eye-scanning research and mouse click studies that prove the point.

In conducting online searches, the zone where our eyes land first is known variously as the “area of greatest promise,” the “golden triangle,” or the “F-scan” pattern.

However you wish to name it, on a Google search engine results page (SERP) it generally takes the shape of a rough triangle anchored at the top-left of the page.

It’s easy to see how the “area of greatest promise” exists. We generally look for information by scanning down the beginning of the first listings on a page, and then continue viewing to the right if something seems to be a good match for our information needs, ultimately resulting in a clickthrough if our suspicions are correct.

Heat maps also show that quick judgments of information relevance on a SERP occur within this same “F-scan” zone; if we determine nothing is particularly relevant, we’re off to do a different keyword search.

This is why it’s so important for websites to appear within the top 5-7 organic listings on a SERP – or within the top 1-3 paid search results in the right-hand column of Google’s SERP.

In recent years, Google and other search engines have been offering enhancements to the traditional SERP, ranging from showing images across the top of the page to presenting geographic information, including maps.

To what degree is this changing the “conditioning” of people who are seeking out information today compared to before?

What new eye-tracking and heat-map studies are showing is that we’re evolving toward looking first at “chunks” of information for clues as to the most promising areas of the results. But then, within those areas, we revert to the same “F-scan” behaviors.

And there’s more: The same eye-tracking and heat map studies are showing that this two-step process is actually more time-efficient than before.

We’re covering more of the page (at least on the first scan), but are also able to zero in on the most promising information bits on the page. Once we find them, we’re quicker to click on the result, too.

So while Google and other search engines may be “conditioning” us to change time-honored information-viewing habits, it’s just as true that we’re “conditioning” Google to organize its SERPs in the ways that are easiest and most beneficial to us as we seek out and find relevant information.

Bottom line, it’s continuity amid the chaos. And it proves yet again that the same “prime positioning” on the page favored for decades by advertisers and users alike – above the fold and to the left – still holds true today.

When it comes to the evolution of online search, as one wag put it, “If you drop your pencil, you miss a week.”

It does seem that significant new developments in search crop up almost monthly – each with the potential to upend the tactics and techniques that harried companies put in place to keep up with the latest methods of targeting and influencing customers. It’s simply not possible to bury your head in the sand, even if you wanted to.

Two of the newest developments in search are the introduction of a beta version of the new Blekko search engine, with its built-in focus on SEO analytics – I’ll save that topic for a future blog post – and a joint press conference held last week by Facebook and Microsoft announcing new functionality for the Bing search engine. More specifically, Bing will now display search results based on the experiences and preferences of people’s Facebook friends.

What makes the Bing/Facebook development particularly intriguing is that it adds a dimension to search that is genuinely new and different. Until now, every consumer had his or her “search engine of choice” based on any number of reasons or preferences – but generally speaking, that preference wasn’t based on the content of the search results. That’s because it has been very difficult for search engines to deliver truly unique results, since they’ve all been built on essentially the same kinds of search algorithms.

[To prove the point, run the same search term on Yahoo and Google, and you’ll likely see that the natural search results are pretty similar to one another. There might be a different mix of image and video results, but generally speaking, the results are based on the same “crawling” capabilities of search bots.]
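One rough way to quantify “pretty similar” is to measure the overlap between two engines’ ranked result lists. Here’s a minimal sketch using a simple set-based similarity measure; the engine names and URLs below are hypothetical placeholders, not real query output:

```python
def result_overlap(results_a, results_b):
    """Jaccard similarity of two lists of result URLs.

    Returns 0.0 for completely disjoint lists, 1.0 for identical sets.
    """
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical top-5 organic results for the same query on two engines
google_top5 = ["site1.com", "site2.com", "site3.com", "site4.com", "site5.com"]
yahoo_top5 = ["site2.com", "site1.com", "site3.com", "site6.com", "site4.com"]

# 4 URLs shared out of 6 distinct URLs total -> 4/6
print(result_overlap(google_top5, yahoo_top5))
```

Note that a set-based measure ignores ranking order; two engines can share nearly all of their results while ordering them quite differently.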

The Bing/Facebook deal changes the paradigm in that new information heretofore residing behind Facebook’s wall will now be visible to selected searchers.

The implications of this are pretty interesting to contemplate. It’s one thing for people to read reviews or ratings written by total strangers about a restaurant or store on a site like Yelp. But ratings, comments and “likes” from one’s own Facebook friends will presumably carry more weight. And with this new fount of information, the number of products, brands and services that people rate will surely rise as time goes on.

The implications are potentially enormous. Brands like Zappos have grown in popularity – and in consumer loyalty – because of their “authenticity.” The new Bing/Facebook module will provide ways for smaller brands to engender similarly fierce loyalty on a smaller scale … without having to make the same huge brand-building commitment.

Of course, there’s a flip side to this. A company’s product had better be good … or else all of those hoped-for positive ratings and reviews could turn out to be the exact opposite!

More than 460 million searches are performed every day on the Internet by U.S. consumers. A new report titled 2010 SERP Insights Study from Performics, an arm of Publicis Groupe, gives us interesting clues as to what’s happening in the world of web search these days.

The survey, fielded by Lancaster, PA-based ROI Research, queried 500 U.S. consumers who use a search engine at least once per week. It found that people who search the Internet regularly are a persistent lot.

Nine out of ten respondents reported that they will modify their search and try again if they aren’t successful in their quest. Nearly as many will try an alternate search engine if they don’t succeed.

As for search engine preference, despite earnest efforts recently to knock Google down a notch or two, it remains fully ensconced on the top perch; three-fourths of the respondents in this survey identify Google as their primary search engine. Moreover, Google users are less likely to stray from their primary search engine and try elsewhere.

But interestingly, Google is the “search engine of choice” for seasoned searchers more than it is for newbies. The Performics study found that Google is the leading search engine for only ~57% of novice users, whereas Yahoo does much better among novices than regular users (~36% versus ~18% overall).

What about Bing? It’s continuing to look pretty weak across the board, with only ~7% preferring Bing.

The Performics 2010 study gives us a clear indication as to what searchers are typically seeking when they use search engines:

 Find a specific manufacturer or product web site: ~83%
 Gather information before making a purchase online: ~80%
 Find the best price for a product or service: ~78%
 Learn more about a product or service after seeing an ad elsewhere: ~78%
 Gather information before purchasing in-store or via a catalog: ~76%
 Find a location for purchasing a product offline: ~74%
 Find coupons, specials, or sales: ~63%

As for what types of listings are more likely to attract clickthroughs, brand visibility on the search engine results page turns out to be more important than you might think. Here’s how respondents rated the likelihood to click on a search result:

 … If it includes the exact words searched for: ~88%
 … If it includes an image: ~53%
 … If the brand appears multiple times on the SERP: ~48%
 … If it includes a video: ~26%

The takeaway message here: Spend more energy on achieving multiple high SERP rankings than in creating catchy video content!

And what about paid or sponsored links – the program that’s contributing so much to Google’s sky-high stock price? As more searchers come to understand the difference between paid and “natural” search rankings … fewer are drawn to the paid listings. While over 90% of the respondents in this study reported that they have clicked on paid sponsored listings at least once, only about one in five do so on a frequent basis.

Most of the people I know who are eager consumers of news tend to spend far more time on the Internet than they do offline with their nose in the newspaper.

So I was surprised to read the results of a new study published by Gather, Inc., a Boston-based online media company, which found that self-described “news junkies” are more likely to rely on traditional media sources like television, newspapers and radio than online ones.

In fact, the survey, which was fielded in March 2010 and queried the news consumption habits of some 1,450 respondents representing a cross-section of age and income demographics, found that more than half of the “news hounds” cited newspapers as their primary source of news.

By comparison, younger respondents (below age 25) are far more likely to utilize the Internet for reading news (~70% do so).

Another interesting finding in the Gather study – though not terribly surprising – is that younger respondents describe themselves as “interest-based,” meaning that apart from breaking news, they focus only on stories of interest to them. This pick-and-choose “cafeteria-style” approach to news consumption may partially explain the great gaps in knowledge that the “over 40” population segment perceives in the younger generations (those observations being reported with accompanying grunts of displeasure, no doubt).

As for sharing news online, there are distinct differences in the behavior of older versus younger respondents. Two findings are telling:

 More than two-thirds of respondents age 45 and older share news items with others primarily through e-mail communiqués.

 ~55% of respondents under age 45 share news primarily through social networking.

Also, more than 80% of the respondents in Gather’s study revealed that they have personally posted online comments about news stories. This suggests that people have now become more “active” in the news by weighing in with their own opinions, rather than just passively reading the stories. This is an interesting development that may be rendering the 90-9-1 principle moot.

[For those who are unfamiliar with the 90-9-1 rule, it contends that for every 100 people interacting with online content, one creates the content … nine edit, modify or comment on that content … and the remaining 90 passively read/review the content without undertaking any further action. It’s long been a tenet in discussions about online behavior.]
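As a toy illustration of the rule’s arithmetic (the audience size here is made up, not from the Gather study):

```python
def split_90_9_1(audience):
    """Apply the 90-9-1 rule of thumb to an audience size."""
    creators = round(audience * 0.01)    # create original content
    commenters = round(audience * 0.09)  # edit, modify or comment
    lurkers = round(audience * 0.90)     # passively read/review
    return creators, commenters, lurkers

# Hypothetical online community of 50,000 readers
creators, commenters, lurkers = split_90_9_1(50_000)
print(creators, commenters, lurkers)  # 500 4500 45000
```

If 80% of readers really are posting comments, as the Gather study suggests, the middle tier has grown far beyond the 9% this rule of thumb predicts.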

What types of news stories are most likely to generate reader comments? Well, politics and world events are right up there, but local news stories are also a pretty important source of comments.

And what about the propensity for news seekers to use search engines to find multiple perspectives on a news story? More than one-third of respondents report that they “click on multiple [search engine] results to get a variety of perspectives,” while less than half of that number click on just the first one or two search result entries.

And why wouldn’t people hunt around more? In today’s world, it’s possible to find all sorts of perspectives and “slants” on a news story, whereas just a few years ago, you’d have to be content with the same AP or UPI wire story that you’d find republished in dozens of papers — often word-for-word.

In the B-to-B world, marketers are sometimes disappointed with the open rates for the e-newsletters they deploy to their customers and prospects. While some are opened by a large proportion of recipients, it’s common experience for e-newsletter open rates to hover around 20%-25%.
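For reference, the open rate is simply unique opens divided by messages delivered. A minimal sketch with made-up send numbers:

```python
def open_rate(unique_opens, delivered):
    """E-newsletter open rate, as a percentage of delivered messages."""
    if delivered == 0:
        return 0.0
    return 100.0 * unique_opens / delivered

# Hypothetical deployment: 10,000 messages delivered, 2,200 unique opens
print(f"{open_rate(2_200, 10_000):.1f}%")  # 22.0%
```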

Does this mean that e-newsletters are a poor substitute for B-to-B print media? Unfortunately, it’s difficult to know how these results compare. After all, just because trade magazines are delivered to recipients doesn’t mean that they’re ever read.

It would be nice to compare B-to-B reader dynamics between print and online media, but with quantifiable statistics available for only one side of the equation, that’s pretty difficult.

However, GlobalSpec, the technology services company that operates a vertical search engine of engineering and industrial products, is able to provide us with a few additional clues. It has just published the results of its 2010 Economic Outlook Survey, which queried more than 2,000 U.S. technical, engineering, manufacturing and industrial professionals on a variety of business topics.

As part of the GlobalSpec survey, respondents were asked about their e-newsletter reading habits. And it turns out that more than half of the respondents (~55%) reported that they read work-related e-newsletters daily or several times a week.

Another 30% of respondents reported that they read e-newsletters once a week or several times per month. That leaves only 15% reporting that they rarely or never read e-newsletters.

What’s more, the readership of e-newsletters appears to be increasing. In GlobalSpec’s 2009 survey, only ~40% of respondents reported reading e-newsletters daily or several times per week. So the increase in activity over just the past year is substantial.

The takeaway news is that more people in the B-to-B segment are “engaged” with e-newsletters than ever before. Whether you’re achieving above or below the 20%-25% open-rate threshold is likely a function of the quality of your content … along with how well you’re doing at targeting the right names in your database.