There are a growing number of reasons why more marketers these days are referring to the largest social media platform as “Fakebook.”

Last year, it came to light that Facebook’s video view volumes were being significantly overstated – and the outcry was big enough that the famously tightly controlled social platform finally agreed to submit its metrics reporting to outside oversight.

To be sure, that decision was “helped along” by certain big brands threatening to significantly cut back their Facebook advertising or cease it altogether.

Now comes another interesting wrinkle. According to its own advertising statistics, Facebook claims it can reach millions of Americans across several key age demographics, as follows:

18-24 year-olds: ~41 million people

25-34 year-olds: ~60 million people

35-49 year-olds: ~61 million people

There’s one slight problem with these stats: U.S. Census Bureau data indicates that the total number of U.S. residents aged 18 to 49 is 137 million.

That’s 25 million fewer people than the 162 million Facebook claims to reach – which means Facebook’s figure overshoots the Census count by roughly 18%.
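For anyone who wants to check the math, here’s the back-of-the-envelope arithmetic as a quick Python sketch (the inputs are the approximate figures quoted above):

```python
# Compare Facebook's claimed U.S. ad reach with Census population data.
# All figures are in millions and are the approximate numbers quoted above.
facebook_reach = {"18-24": 41, "25-34": 60, "35-49": 61}

claimed_total = sum(facebook_reach.values())  # 162 million
census_total = 137                            # U.S. residents aged 18-49

overcount = claimed_total - census_total      # 25 million
print(f"Facebook claims: {claimed_total}M; Census counts: {census_total}M")
print(f"Overcount: {overcount}M ({overcount / census_total:.0%} above the Census figure)")
```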

What could be the reason(s) for the overcount? As reported by Business Insider journalist Alex Heath, a Facebook spokesperson has attributed the “over-counting” to foreign tourists engaging with Facebook’s platform while they’re in the United States.

That seems like a pretty lame explanation – particularly since tourism runs in both directions: Americans traveling abroad presumably drop off the U.S. counts in roughly the same numbers that foreign visitors are added.

There’s also the fact that some people maintain multiple Facebook accounts. But it stretches credulity to think that duplicate accounts explain more than a small portion of the differential.

Facebook rightly points out that its audience reach stats are designed to estimate how many people in a given geographic area are eligible to see an ad that a business might choose to run, and that this projected reach has no bearing on the actual delivery and billing of ads in a campaign.

In other words, the advertising would be reaching “real” people in any case.

Still, such discrepancies aren’t good to have in an environment where many marketers already believe that social media advertising promises more than it actually delivers. After all, “reality check” information like this is just a click away in cyberspace …

Google is trying to keep its local search initiative from devolving into charges and counter-charges of “fake news” à la the most recent U.S. presidential election campaign – but is it trying hard enough?

It’s becoming harder for the reviews that show up on Google’s local search function to be considered anything other than “suspect.”

The latest salvo comes from search expert and author Mike Blumenthal, whose recent blog posts on the subject question Google’s willingness to level with its customers.

Mr. Blumenthal could be considered one of the premier experts on local search, and he’s been studying the phenomenon of fake information online for nearly a decade.

The gist of Blumenthal’s argument is that Google isn’t taking sufficient action to clean up fake reviews (and related service industry and affiliate spam) that appear on Google Maps search results, which is one of the most important utilities for local businesses and their customers.

Not only that, but Blumenthal also contends that Google is publishing reports which represent “weak research” that “misleads the public” about the extent of the fake reviews problem.

Google contends that the problem isn’t a large one. Blumenthal feels differently – in fact, he claims the problem is growing worse, not better.

From his own digging into the listings, Blumenthal sees a pattern of fake reviews being written for overlapping businesses – telltale signs that have somehow been missed by Google’s algorithms.

A case in point: three “reviewers” — “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” — have all “reviewed” three very different businesses in completely different areas of the United States: Bedoy Brothers Lawn & Maintenance (Nevada), Texas Car Mechanics (Texas), and The Joint Chiropractic (Arizona, California, Colorado, Florida, Minnesota, North Carolina).

They’re all 5-star reviews, of course.

It doesn’t take a genius to figure out that “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” won’t be found in the local telephone directories where these businesses are located. That’s because they’re figments of some spammer-for-hire’s imagination.
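To see just how mechanical the detection could be, here’s a minimal sketch of that kind of cross-referencing – toy data drawn from the example above, with a data structure and flagging rule that are my own illustrative assumptions, not a description of Google’s (or Blumenthal’s) actual tooling:

```python
from collections import defaultdict

# Each review is (reviewer, business, state) -- toy data mirroring the example above.
reviews = [
    ("Charlz Alexon", "Bedoy Brothers Lawn & Maintenance", "NV"),
    ("Charlz Alexon", "Texas Car Mechanics", "TX"),
    ("Charlz Alexon", "The Joint Chiropractic", "AZ"),
    ("Ginger Karime", "Bedoy Brothers Lawn & Maintenance", "NV"),
    ("Ginger Karime", "Texas Car Mechanics", "TX"),
    ("Jen Mathieu", "Texas Car Mechanics", "TX"),
    ("Jen Mathieu", "The Joint Chiropractic", "AZ"),
]

states = defaultdict(set)      # reviewer -> states where they have "reviewed"
businesses = defaultdict(set)  # reviewer -> businesses they have reviewed
for reviewer, business, state in reviews:
    states[reviewer].add(state)
    businesses[reviewer].add(business)

# Flag reviewers whose activity spans unrelated markets, then surface pairs of
# flagged reviewers covering the same businesses -- the overlap Blumenthal spotted.
flagged = {r for r in states if len(states[r]) >= 2}
for r in sorted(flagged):
    partners = sorted(
        other for other in flagged
        if other != r and businesses[r] & businesses[other]
    )
    if partners:
        print(f"{r}: active in {sorted(states[r])}, overlaps with {partners}")
```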

The question is, why doesn’t Google develop procedures to figure out the same obvious answers Blumenthal can see plain as day?

And the follow-up question: How soon will Google get serious about banning reviewers who post fake reviews on local search results? (And not just targeting the “usual suspect” types of businesses, but also professional categories such as physicians and attorneys.)

“If their advanced verification [technology] is what it takes to solve the problem, then stop testing it and start using it,” Blumenthal concludes.

To my mind, it would be in Google’s own interest to get to the bottom of these nefarious practices. If the general public comes to view reviews as “fake, faux and phony,” it’s just one step away from ignoring local search results altogether – which would hurt Google in the pocketbook.

Microsoft Bing has just released stats chronicling its efforts to keep the Internet a safe space. Its 2015 statistics are nothing short of breathtaking.

Bing did its part by rejecting a total of 250 million ad impressions … banning ~150,000 advertisements … and blocking around 50,000 websites outright.

It didn’t stop there. Bing also reports that it blocked more than 3 million pages and 30 million ads due to spam and misleading content.

What were some of the reasons behind the blocking? Bing offers only a few clues as to where its efforts were strongest (and I don’t doubt there are others the company is keeping close to the vest so as not to raise any alarms).

Bing doesn’t say exactly how it identifies such a ginormous amount of fraudulent or otherwise nefarious advertising, except to report that it has improved its detection using signals ranging from toll-free number analysis to dead-link analysis.

According to Neha Garg, a program manager of ad quality at Bing:

“There have even been times our machine learning algorithms have flagged accounts that look innocent at first glance … but on close examination we find malicious intent. The back-end machinery runs 24/7 and used hundreds of attributes to look for patterns which help spot suspicious ads among billions of genuine ones.”
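Bing doesn’t disclose the model itself, but the description – hundreds of attributes, scanned around the clock for suspicious patterns – suggests a weighted-attribute scoring approach at its core. Here’s a purely hypothetical sketch; the attribute names, weights and threshold are all invented for illustration and are not Bing’s:

```python
# Hypothetical weighted-attribute scoring in the spirit of Garg's description.
# The attribute names, weights and threshold are illustrative assumptions only.
WEIGHTS = {
    "tollfree_number_reused": 2.5,  # same toll-free number across many advertisers
    "dead_landing_link": 3.0,       # the ad clicks through to a dead page
    "brand_name_in_copy": 1.5,      # unauthorized use of a well-known brand name
    "brand_new_account": 1.0,       # advertiser account created very recently
}
THRESHOLD = 4.0  # scores at or above this get routed to closer examination

def suspicion_score(ad_attributes: dict) -> float:
    """Sum the weights of whichever risk attributes an ad exhibits."""
    return sum(w for name, w in WEIGHTS.items() if ad_attributes.get(name))

ad = {"tollfree_number_reused": True, "dead_landing_link": True}
score = suspicion_score(ad)
print(f"score={score:.1f} -> {'flag for review' if score >= THRESHOLD else 'pass'}")
```

In a real system the weights would presumably come out of the machine-learning pipeline Garg mentions rather than being hand-tuned – but the basic shape, many small signals summed into one suspicion score, is the same.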

We’re thankful to Bing and Google for all that they do to control the incidence of advertising that carries malware capable of causing problems well beyond the mere “irritation factor.”

It had to happen: After years of publications uploading native advertising content that’s barely labeled as such, the Federal Trade Commission has handed down new guidelines that leave very little wiggle room in what constitutes proper labeling of paid advertising material.

What it boils down to is the stipulation that any sponsored content must be clearly labeled as advertising – using wording that the vast majority of readers will understand instantly.

Here’s how the FTC guidelines describe it:

“Terms likely to be understood include ‘Ad,’ ‘Advertisement,’ ‘Paid Advertisement,’ ‘Sponsored Advertising Content,’ or some variation thereof. Advertisers should not use terms such as ‘Promoted’ or ‘Promoted Stories,’ which in this context are, at best, ambiguous and potentially could mislead consumers that advertising content is endorsed by a publisher site.”

Another key provision warns against advertising content that mimics the look and feel of surrounding editorial content – layout characteristics, headline design treatment, fonts and photography.

And here’s another kicker: the FTC lumps the people who create the materials into the same pile as the offending advertisers; its policy statement doesn’t apply just to advertisers. So ad agencies, MarComm companies and graphic designers, beware.

Quoting again from the FTC document:

“In appropriate circumstances the FTC has taken action against other parties who helped create deceptive advertising content – for example, ad agencies and operators of affiliate advertising networks. Everyone who participates directly or indirectly in creating or presenting native ads should make sure that ads don’t mislead consumers about their commercial nature.

“Marketers who use native advertising have a particular interest in ensuring that anyone participating in the promotion of their products is familiar with the basic truth-in-advertising principle that an ad should be identifiable as an ad to consumers.”

Of course, these new guidelines are only going to make it harder for advertisers – and publishers – to utilize advertising techniques that have, up to now, been far more effective than online display advertising.

Predictably, we’re hearing mealy-mouthed statements from the industry in response. A spokesperson for the Interactive Advertising Bureau had this to say:

“While guidance serves great benefit to the industry, it must also be technically feasible, creatively relevant, and not stifle innovation. To that end, we have reservations about some elements of the Commission’s guidance.”

What bothers the Interactive Advertising Bureau in particular are the “plain language” provisions in the FTC’s guidelines, which the IAB considers “overly prescriptive.”

Translation: there’s concern that publishers can no longer label advertising using such euphemisms as “partner content” or “promoted post.”

Others seem less concerned, however. Sites such as Mashable and Huffington Post appear to be on board with the new guidelines.

Besides, as one spokesperson said, “When the FTC issues guidelines, you’re better off when you follow them than when you don’t.”

Too many business-to-business websites remain the “poor stepchildren” of the online world even after all these years.

So much attention is devoted to all the great ways retailers and other companies in consumer markets are delighting their customers online.

And it stands to reason: Those sites are often intrinsically more interesting to focus on and talk about.

Plus, the companies that run those sites go the extra mile to attract and engage their viewers. After all, consumers can easily click away to another online resource that offers a more compelling and satisfying experience.

By comparison, buyers in the B-to-B sphere often have to slog through some pretty awful website navigation and content to find what they’re seeking. But because their mission is bigger than merely viewing a website for the fun of it, they’ll put up with the substandard online experience anyway.

But this isn’t to say that people are particularly happy about it.

Through my company’s longstanding involvement with the B-to-B marketing world, I’ve encountered plenty of the “deficiencies” that keep business sites from connecting with their audiences in a more fulfilling way.

Sometimes the problems we see are unique to a particular site … but more often, it’s the “SOS” we see across many of them (if you’ll pardon the scatological acronym).

Broadly speaking, issues of website deficiency fall into five categories:

They run too slowly.

They look like something from the web world’s Neanderthal era.

They make it too difficult for people to locate what they’re seeking on the site.

Worse yet, they actually lack the information visitors need.

They look horrible when viewed on a mobile device — and navigation is no better.

Fortunately, each of these problems can be addressed – often without having to do a total teardown and rebuild.

But corporate inertia can (and often does) get in the way.

Sometimes big changes like Google’s recent “Mobilegeddon” mobile-friendly directives come along and nudge companies into action. Times like those are often when other needed adjustments and improvements get dealt with as well.

But then things can easily revert to near-stasis mode until the next big external pressure point comes down the pike and stares people in the face.

Some of this pattern of behavior is a consequence of the commonly held (if erroneous) view that B-to-B websites aren’t ones that need continual attention and updating.

I’d love for more people to reject that notion — if for SEO relevance issues alone. But after nearly three decades of working with B-to-B clients, I’m pretty much resigned to the fact that there’ll always be some of that dynamic at work. It just comes with the territory.

For the past (nearly) 20 years, the biggest thing that’s kept the Internet free for users is advertising – banner display advertising in particular.

Bloggers and other online publishers large and small rely on revenue from web banner ads to fund their activities. That’s because the vast majority of them don’t have pay walls … nor do they sell much in the way of products and services.

Because of this, the temptation is for publishers to serve up as many display ads as possible on each page.

It’s not unusual to see web pages that tile ten or more ads in the right-hand column. Usually the content of these ads has no relevance to readers, and the overall appearance isn’t conducive at all to reader engagement, either.

And that’s the problem.

Because of conditioning, people don’t even “see” these ads anymore. The advertising space has become one big blur – as easy to gloss over as if the ads weren’t even there to begin with. (When’s the last time you clicked on a banner ad?)

Attempts to come up with other display advertising types – pop-ups and pop-unders, animations and other rich media, skyscrapers and so forth – haven’t done much to change the picture. Indeed, they’re so ubiquitous – and so predictable – we don’t even consider the ads to be annoying anymore; they’re just part of the “décor.”

I’ve blogged before about how clickthrough rates on banner advertising are bouncing along in the basement, making banners less and less valuable for advertisers to consider placing. And ads priced on a pay-per-click basis can’t be giving publishers much in the way of revenue either, since relevance and engagement rates are so abysmally low.

The bottom line is that we now have a “lose-lose-lose” situation in online advertising:

Advertisers lose, because hardly anyone notices – much less clicks on – their ads.

Publishers lose, because rates and revenues sink along with engagement.

And readers lose, because they must wade through pages cluttered with irrelevant ads.

Meanwhile, ICANN (the Internet Corporation for Assigned Names and Numbers) has approved a major expansion of generic top-level domains – suffixes beyond the familiar .com and .org. The idea is that famous brands could begin using their well-known monikers to further distinguish their activities on the Internet. ICANN’s spokespeople are on record claiming that the new program will “usher in a new Internet age.”

Well … not so fast. The more people have been looking into this scheme, the less they like it. One of the biggest issues is the “pay to play” aspect. Unlike the days when people could purchase a domain name for just a few dollars … then squat on it until someone was willing to pay hundreds of thousands to use it, the cost to secure a new domain suffix like .pepsi or .hyundai will start at ~$185,000 … and go up from there.

That’s not chump change. But here’s the thing: For a famous brand name, a top-level domain still represents a dandy opportunity for someone with funding (or a group of investors) to nab the “best brands” early on … then hold out to resell the name for a tidy sum far greater than what they paid.

Which puts the onus back on the large companies who will feel compelled to pay the $185,000+ right off the bat – even if they have no intention of using the top-level domain name now or ever.

So it’s a very nice revenue stream to ICANN, ponied up by major international companies who don’t want the risk of having their names “hijacked” by someone bent on extortion – or worse, nefarious brand doings.

In a letter to ICANN, the Association of National Advertisers (ANA) states that the scheme is likely to cause “irreparable harm and damage” to marketers, even as it “contravenes the legal rights of brand owners” and “jeopardizes the safety of consumers.”

Bob Liodice, the ANA’s president, has gone further in his criticism of the ICANN proposal. “The decision to go forward with the program also violates sound public policy and contravenes ICANN’s Code of Conduct and its undertaking with the United States Department of Commerce,” he emphasizes.

Liodice contends that if the ICANN plan moves forward, it would create an ugly free-for-all environment in which many brand marketers would need to divert legal, financial and technical resources to applying for, managing and protecting their top-level domains … or risk the consequences.

“They are essentially being forced to buy their own brands from ICANN at an initial price of $185,000,” Liodice points out.

The sharp criticism of the plan makes clear that these issues aren’t anywhere close to being resolved – and it probably puts ICANN’s anticipated January program launch date in question.

Stay tuned … ’cause it’s going to be a wild ride over the next few months!