The issue has been nearly beaten to death in the United States. In fact, the FTC just concluded a long investigation, involving hundreds of thousands of pieces of potential evidence, into allegations that Yelp pressures businesses to pay for listings in exchange for suppressing negative reviews, but it failed to find a smoking gun.

It’s easy enough to sympathize with what business owners are going through, of course, because it feels real to them. No one likes negative reviews. I often feel like some of the negative reviews on TripAdvisor or Yelp are unfair. One reviewer bizarrely calls out a B&B proprietor for his “heavy drinking,” when hundreds of other guests of the lodgings and restaurant (including us) saw him working hard, day after day after day, with no sign of a partying problem. “We are open, warm people, we gather around the fireplace with some of our guests, and I drink one beer because that’s genuinely how I live, and this is the thanks I get, a public shaming?” It’s no fun being a hardworking small business owner some days.

But more to the point, the narrative of Yelp harassment feels quite real, too, even though it isn’t.

The illusion of some sort of conspiracy is easy enough to explain away with basic statistics, and the explanation breaks down into two simple points.

First, the law of small numbers. I think many business owners would simmer down and accept the pattern of their reviews and ratings if they had large numbers of reviews. The distributions would revert to the mean and the pattern would seem very steady. It’s a basic fact that results from small sample sizes can be much more volatile. You’re more likely to be taken aback by the numbers when you don’t have enough numbers to work with yet. The business owners who have two glowing 5-star reviews and no others certainly aren’t complaining. Those who have two terrible reviews and two good reviews believe something is fishy, because the result is extreme. Yeah, but that result would even out given enough time. It’s the law of small numbers, pure and simple.
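The volatility that small samples produce is easy to demonstrate. Here’s a minimal simulation, with entirely invented figures: a hypothetical business whose “true” review mix is 80% five-star raves and 20% one-star complaints (true average rating: 4.2).

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical business: 80% of reviewers leave 5 stars, 20% leave 1 star.
TRUE_POSITIVE_RATE = 0.80

def average_rating(n_reviews):
    """Draw n_reviews at random and return the average star rating."""
    stars = [5 if random.random() < TRUE_POSITIVE_RATE else 1
             for _ in range(n_reviews)]
    return sum(stars) / len(stars)

# With only 4 reviews, averages lurch wildly from sample to sample...
small = [round(average_rating(4), 2) for _ in range(5)]

# ...with 400 reviews, every sample lands close to the true mean of 4.2.
large = [round(average_rating(400), 2) for _ in range(5)]

print("4-review averages:  ", small)
print("400-review averages:", large)
```

With four reviews, one or two unlucky draws swing the average between extremes; with four hundred, every sample settles near 4.2, and nobody suspects a conspiracy.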

I’m not sure what principle of statistics this illustrates, but let’s call it “Yelp calls everyone.” Yes, Yelp does call on business owners to attempt to sell them upgraded listings. This isn’t selling the ability to suppress negative reviews. Some business owners have said “they imply it.” How do you know? If Yelp calls everyone, then eventually they’re going to call on someone when they’re going through a crisis of “oh no, I got a couple of really horrible reviews,” based on the law of small numbers (applying to them, possibly randomly).

Statistics are a bewildering thing sometimes. There are small numbers at play, and very large numbers, and lots of potential randomness. Out of that, we see coincidences, because we’re wired to create narratives to explain events. But perhaps it’s time to put down the tinfoil hat in this case.

With the holiday rush over, many advertisers will be resting up from — and taking stock of — the past six weeks of frenzied bidding, higher sales, and hopefully strong returns. But we also have to roll strongly into Q1 and do the many things necessary to succeed in a more stable seasonal environment. Those to-do lists could be long; I don’t intend to cover the whole field in a blog post.

Here today, I’d like to focus on a single underestimated weapon in the PPC advertiser’s arsenal: bidding lower.

Raising and lowering bids is such a commonplace activity that it’s all too easy to forget that more than one thing happens when you do so. When it comes to lowering bids, you accomplish not one thing, but four (!). Let’s assume you lower a few — half-dozen or so — bids in a single ad group on keywords with decent volume. What happens? (Extrapolate that more broadly if you do it in many parts of your PPC account.)

(1) Quite obviously, you immediately lower the cost-per-click (CPC) on that bucket of keywords; you can assume that conversion rates to sales will be roughly unchanged. Your CPA (cost per acquisition) is thus immediately lowered, and ideally, brought in line with targets.

(2) Because there is generally a smooth curve that leads to lower ad positions and possibly reduced impression share as you lower bids, you receive fewer clicks, and thus (while, to be sure, making fewer sales) spend less in that bucket (ad group). Such adjustments have the effect of reducing the proportion of budget you allocate to under-performing buckets within the account. Presto, then: this simple act leads to better budget allocation.

(3) If you do enough of this, your PPC budget declines as a proportion of your total marketing and IT budget. If that’s occurring because you’re taking steps to improve the profitability of your PPC campaigns during slower seasons or in softer portions of your PPC accounts, you’re improving the reputation of PPC as a performance channel, and ensuring that it has the respect of management when it comes to opening up more budget in stronger seasons or for high-performing keyword opportunities.

(4) As soon as you stop overbidding and chasing PPC auctions to a place outside of your profitability targets, your lower ad positions leave some other advertisers in positions higher than yours. They may pay a few cents more per click on those keywords, and suddenly find themselves getting served more clicks on broader matches and in underperforming sectors of their accounts. Ideally, they’ll panic and overreact (develop a negative attitude towards PPC in general, pause campaigns, etc.). But even if they act rationally, they’re likely to drop their bids to adjust in much the same way you did, which drops their ad position, reduces their impression share, and eases cost pressure in the auction. That means you’re on track toward hitting CPA targets while maintaining good click volume.
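The arithmetic behind point (1) is worth making explicit. A minimal sketch, with invented numbers, of how a bid cut flows straight through to CPA when the conversion rate holds steady:

```python
# A back-of-envelope for point (1), with invented numbers: lowering the
# bid cuts CPC immediately, and if the conversion rate holds roughly
# steady, CPA falls in the same proportion.

def cpa(cpc, conversion_rate):
    """Cost per acquisition = cost per click / conversions per click."""
    return cpc / conversion_rate

CONV_RATE = 0.02  # assume 2% of clicks convert, before and after

before = cpa(cpc=1.50, conversion_rate=CONV_RATE)
after = cpa(cpc=1.05, conversion_rate=CONV_RATE)

print(f"CPA at a $1.50 CPC: ${before:.2f}")  # $75.00
print(f"CPA at a $1.05 CPC: ${after:.2f}")   # $52.50
```

A 30% bid cut buys a 30% CPA improvement before any of the second-order effects (budget reallocation, competitor reactions) even kick in.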

After any frenzied period of bidding, it’s worth remembering that “what goes up, must come down.” Sure, we all want to grow. But for sustainable growth, don’t forget the power of a lower bid. Be patient! Other advertisers just may follow suit.

P.S. I know, I know. This isn’t what our friends at the search engines want to hear. But I hope they’ve got enough of our coins in their jeans by now, from their record profits in 2014.

One common misconception that crops up, and you see it often in accounts of today’s online review sites, is the assumption that you should see about the same number of negative reviews as positive reviews. Seriously? Even with a low-barrier-to-entry pursuit like writing a short novella or opening a smoothie stand, you rarely see an even distribution. Those who take the risk to devote their time (or lives) to an enterprise, unsurprisingly, typically get a healthy number of 5- and 4-star reviews mixed in with a small number of complaints.

Review sites — at least most of the successful, far-reaching ones — aren’t simply places we as consumers go to wallow in negativity and complaints. Take TripAdvisor, a website that I suspect a healthy majority of us use and support, despite the inevitable warts. Since they created the annual Certificate of Excellence, a designation determined by the volume of reviews above a certain threshold of positivity, you see many proud businesses displaying their certificates. Surely this helps them win customers. All this means is that TripAdvisor is pro-business and pro-consumer, possibly in just the right measure.

Some people are determined to smell a rat in all this. But I can assure you that HomeStars, TripAdvisor, and Yelp do everything in their power to sniff out fake reviews, and to develop richer and richer content so that consumers can find appropriate fits with businesses that suit their needs and tastes.

Think about what kind of world we would actually be living in if websites like HomeStars were strewn with a huge number of negative reviews, instead of the current situation that calls out a “few bad apples”. That would indicate that homeowners were getting ripped off right and left. We’d be living in a completely lawless and unaccountable world. Let’s remember, amid all the scary yet entertaining horror stories we see on reality TV (Holmes Makes It Right, etc.), that there are tons of reputable companies out there that do depend on maintaining their reputations long term. If anything, many of the contractors I’ve met and worked with don’t get enough praise, online or elsewhere. And yes, of course, this makes it vital that we set up systems to call out the bad apples.

Turning to my favorite subject, food, again, when you think about writing food reviews, isn’t it mostly to reward entrepreneurs and chefs who have taken the trouble to create a great experience for you? If Yelp (and life) were all about every second meal deserving a one-star rating, we’d all be walking around with food poisoning half the time. Our life expectancy would be significantly shortened. Mostly, we want to trade positive restaurant reviews. Sometimes, we want to say a place is overrated. A small percentage of the time, we’ll want to give thumbs-down. Personally, though, unless a business owner has really wronged me, I’ll simply reward them with silence. As a community, we can also weed out the bad guys by diverting plenty of positive attention to the good guys. So the bad guys might only get business from, well, folks who never read reviews.

I have to admit, when I read restaurant reviews I’m sometimes baffled by the negative ones. I’ve been to the Loving Hut in Toronto many times. The food and the service are fantastic. For some reason, though, one reviewer believes the food at this location just doesn’t taste as good in Toronto as it does in San Francisco. For enthusiasts of this location, does it even matter? The Loving Hut originated in France. I can guarantee you that the Paris location will taste more romantic and more expensive than the Toronto version does, too. However, for a place I can walk to from the office and get pretty much any vegan dish imaginable, it gets my five stars. The negative review here is simply misleading, possibly downright crazy.

The trustworthiness of review content is paramount, of course. Review sites do no one any favors if they’re biased, or “on the take.” So why is it that we assume that a positive review is a biased review? As a customer, I’m often biased in favor of a trusted vendor. That means I’m pro-business. There’s nothing fake about that.

Earlier this year, we were sad to wind down a relationship with an eight-year client. It wasn’t unexpected. Few projects take eight years to complete, and despite the need for close and meticulous ongoing campaign management, sometimes when you’re in bed together for eight years, it’s perceived that you’ve run out of ideas. We were instrumental in taking the company from below $500,000 in annual revenues to above $50 million. We’re proud of that. We did our part. We were also frequently blown away by the client’s drive, guts, and innovation.

In addition to inviting us to come by for a barbecue at a future, unspecified date, the client reflected: “I don’t know if you outgrew us, or we outgrew you, or what.” It’s probably neither. They grew 10,000% on our watch, to be sure. But then again, we work for much larger companies than they are. Size isn’t the issue per se, but very rapid growth — compounded — can leave you with a “floating in space / uncharted territory” feeling. That intimate one-to-one casualness that works for any very small organization is, on one hand, calming to the nerves; on the other hand, it can be inappropriate to the wider universe you’ve now found yourself in.

That’s nothing, though.

When I reflect on scenarios of particularly rapid growth from a little cult brand into a major publicly-traded enterprise (or becoming part of a conglomerate), and those entities, in turn, working with a partner, employee, etc., I think of companies like Lululemon or Ben & Jerry’s, and who represents them digitally. One day, tiny and friendly and not corporate. Seemingly overnight, very corporate. Ben & Jerry’s has 236,000 followers on Twitter. Twitter informs me that I “might want to follow these similar accounts: Walgreens and Taco Bell.” We’re not in Vermont anymore.

No commentary intended on those particular Twitter accounts, but the youth of social media — and in particular, the youth of Twitter combined with its recent mega-growth and its IPO (suddenly, it’s the ultimate corporate publishing entity, like Google and Facebook, beholden to Wall Street) — means that community managers comfortable with witty, intimate repartee with a handful of friends are now being watched by much of the world.

In eight years, Twitter went from a “little startup” to a $32 billion company. The potential of the platform is still so well-regarded that it has recently been able to raise a large amount of debt on favorable terms. It’s rumored that the company may then go on an acquisition spree, solidifying its hold on digital media attention.

Lululemon went from a cult yoga-pants-with-slogans vendor to a global $6.5 billion brand. Working in tandem with their partners, these now-large entities, floating somewhere in vast economic ground-control-to-major-Tom space, are still often employing the skills of people who treat the whole thing like a casual romp. That’s understandable, and maybe desirable in a lot of ways. Think about how a major TV or radio personality would have to prepare themselves, pretending to speak to just a handful of people, or addressing a studio audience of 200 people.

It’s disorienting to be in a new place so quickly. Many community managers and the like would probably choke if they thought about what was really now going on: “I’m at the epicenter of public relations for a $7 billion company. I publish on a network that is so valued by users that they’re willing to endure advertising that is so valued by ad buyers they’re willing to pay rates high enough to make Twitter worth $32 billion.”

But that’s the reality. No one outgrew anyone else, but growing together can be a bewildering experience. Cue Wile E. Coyote out past the ledge, standing on nothing, holding up the little sign pleading “Help!”

In this regard, Twitter executives seem to play a hybrid role. Maybe it’s just the lower headcount, but advocates like Stewart have to walk the fine line of explaining the ad units to potential buyers, while urging companies to properly ‘get’ social media and to (yes) focus more on their followers and their unpaid content than on their ads. It’s as if a Twitter exec has to be at once Matt Cutts, extolling the value of quality content, *and* an ad sales exec, explaining how ad units work and how they sort of blend in natively with the user experience. Matt Cutts *never* does the latter.

Twitter’s stance is smart. It needs companies on board, and it needs them to have confidence that their non-paid content is valued. It also needs these companies to earn followers, and the only way you earn followers is by adding value to people’s lives, not by spamming them with low-quality ads. Mitch Joel calls it utilitarian marketing (though I think I prefer the word ‘useful,’ since utilitarian has at least two other connotations… perhaps Godin’s Free Prize Inside is also worth a re-read in this context).

Yes, you could probably get by running a service with a lot of civilians posting content and companies focusing almost exclusively on advertising. But it turns out that companies are really good at producing engaging content — life’s not so different on Twitter than it is out there on corporate websites (without which Google Search would have to sift through lower quality and less content overall in its attempts to find nuggets of quality in the massive database of content it has indexed).

Here’s the same theme spelled out on Twitter’s ad sales page. Unremarkable? After all, third party services like Hubspot and Hootsuite provide similar advice. Well… can you imagine a page on Google’s ad sales side telling folks how to “write great content”?

Because Twitter is newer, and it isn’t the Web, it needs to continue to build more “there, there,” or there will be nothing compelling to serve ads against. Companies are being asked to build Twitter’s platform *and* pay for the privilege of advertising on it. Fair? Unfair? Today’s Tom Sawyer he gets by on you? Well, companies will do both if it’s to their advantage.

What’s also interesting is the diversity of approaches companies may take. Some might choose minimal engagement and using Twitter ad formats primarily… to build brand awareness or even as a performance marketing vehicle. Others will be good social media citizens and tweet just the right amount, more often than not connecting at some deep level with users… like my friends at Fiesta Farms, a grocery store in the Seaton Village neighborhood of Toronto. I love them to death.

Political leaders, nonprofits, and government organizations also seem more suited to the Twitter platform than to some other means of getting the word out online. They can tweet events or blog posts or little nuggets of inspiration to a growing list of followers, and also natively post promoted content to accelerate their user base. Compare that to the awkwardness of some political leader trying to bid on keywords in AdWords to accelerate their outreach.

Financially and in terms of volume of content, Twitter may succeed against some of the naysayers. It’s remarkable how much they’ll be relying on companies (and leaders, and organizations) to build both their content base and their revenue base.

What are the chances of success? Reasonably good, given the emergence of legions of professional community managers and digital advertising practitioners looking for engaging assignments, combined with generalized disaffection with behemoth Facebook and the faux feel of working with Google+. These professionals’ execution will be pivotal to Twitter’s future; certainly, Twitter needs only a tiny fraction of corporate advertising budgets to flow toward its platform in order to enjoy substantial revenue growth. And never say no to those nonprofit and gov’t dollars, either.

I’m certainly not qualified to work through the constitutionality of any proposed legislation, or legal judgment, that might seek to uphold privacy rights to the extreme of making it a publisher’s job to hide publicly-available information at an individual’s request, even in North America, let alone Europe.

But a little common sense, please. Search engines aren’t perfect, but if we search diligently using the right queries, they remain an oasis of transparency in a world of manipulated communications. This is precisely why you have to buy advertising if you want to direct search engine users to a commercial message they might be interested in, for example. If you didn’t have that separation between church and state, search engines would become a blizzard of commercial messages, and would become, for all intents and purposes, useless to anyone seeking anything other than spam (which is precisely no one).

And now we’re supposed to throw into the hopper shiny happy re-spun personas for convicted pedophiles and various rabble trying to shake off vehicular manslaughter charges?

You don’t even have to go to extreme examples like these to get a sense of how useful search engines can be to us in protecting ourselves against bad partners and bad decisions.

Here’s one search I did: Three years ago, we nearly leased office space in Liberty Village. It was half of a unique, standalone building. It was a lot of space for the money, with some interesting features like a rooftop terrace. Then I Googled the landlord. Not really a professional property management company. OK, but that means the individual should at least be reliable. Turns out that wasn’t the case. He’d had one of the messiest divorces in human history, it seemed; the court wrangling over dividing the assets took something like twelve years. After the court decision, of course, he refused to pay much of what was owing. So he was issued a court order to pay. Ignoring a court order, after a certain length of time, can land you in jail. A second and third court order were issued. At the third hearing, the judge was fairly yelling at this man and his legal counsel that a third court order is not a matter you can debate, that they were two years and two court orders past the point of any appeal or even semantic argument, and that he could have been jailed two court orders ago. Overall, the record painted the picture of a vindictive, petty jerk, and certainly one who could never be trusted to honor any legal document.

So we lucked out there. We found different office space. All thanks to the search engine.

The status quo is that a search engine and publicly-available information became this man’s problem. Imagine flipping that so that this individual, and hundreds of thousands like him, become Google’s problem. Hell, even Google’s largest advertisers don’t get that kind of red carpet treatment.

Search engines aren’t perfect, but they provide a vital counterbalance to spun information and shady dealings. In that regard, they play a role similar to the role that has always been occupied by the (also flawed, but vital) free press.

On the strength of these results, the company will shift 25% of its print ad budget to digital spending in the coming year.

Aha! Targeted digital ads work better than paper flyers very few people read.

But one suspects there are still serious problems in the measurement methodology. Was this 12% sales increase over the prior period, or year-over-year? Was it same-store sales or part of a national measurement that increased the overall square footage of the company? Did any other variables affect the result? Was anything done to attempt to directly attribute sales back to the Facebook campaign? How much did the campaign cost? What is the estimated ROI?

Certainly, the shift of those offline flyer dollars over to digital is long overdue.

So… what about online sales? Sport Chek has a website, too. And after visiting that website, I saw a remarketing ad, so they’re obviously doing what they can to drive traffic there. So why not talk about advertising spend that can be easily and directly attributed to its exact source? When is someone in retail going to come out and admit that all those retail stores, with their constantly changing layouts, staffing, real estate, and taxes, are very costly overhead? Is it the CMO’s job to embrace low-margin sales in agnostic fashion when there is also a full web presence waiting to make the company higher profits? Is it the CEO’s job to look into shifting the proportion of online sales? In financial reporting after quarterly earnings results by Canadian retailers like Sport Chek and Lululemon, why is it so rare to hear anyone ask about growth in that proportion of online sales? (Fortunately, it’s becoming more common. In this Bloomberg report from 2012, Lululemon’s CEO indicated that online sales drove 14.3% of the company’s revenue — not too shabby. That metric should be front and centre in far more conversations about digital ad campaigns.)

Some companies, while shifting their “dollars to digital,” are still measuring their results like they used to measure their results from TV ads — or flyers. The lower-hanging fruit awaits a fuller commitment to the power of digital, accountable ad spending.

Last year, while many other advertisers were holding protests and doing the Chicken Little dance, I decided early on to say something different: Enhanced Campaigns were going to make things… yes… better.

A year later, and we’re still living with the pain of something that was taken away: bid control over the tablet channel.

What we gained was granular control over mobile bids at the ad group level if desired, flexible and quick geobidding factors, and improved dayparting.

I sincerely doubt many advertisers, after having had time to reflect on all this, would go back to having three times the number of campaigns just so they could control bids by device type, and three, twenty, or fifty times that many again so they could nimbly and accurately bid by geo segment.

Sure, it’s not all great now. Costs have risen, device switching is rampant, and competition is tougher than ever. Google’s Quality Score remains opaque and often punitive, inaccurate, capricious, or simply profit-driven.

Google still allows low-quality advertisers and arbitragey “partner search engines” to pollute the paid search results, despite claiming crackdowns. In some countries, fake comparison engines and the like still pollute paid search results, allowing certain advertisers to de facto double-serve. The playing field should feel more level for conscientious players. Enhancements like ad extensions are welcome, except that advertisers want to game them, and like a planet where Mariah Carey’s fake hair is the norm, you look bad if you don’t play along. Just more busywork on the PPC treadmill. It’s [nearly] enough to drive a man to content marketing, or “what used to be called SEO.”

But if user behaviors in a multi-screen world are only adding to our woes and ROI challenges, isn’t that really an opportunity — if we got back to having more direct knowledge of user conversion patterns? We’ve got to be given the tools to do our jobs better in this emerging environment. And in a multiple device world, do tablets or smartphones really cause, or not cause, the majority of conversions on their own?

What if we knew, instead, that a particular user converted? It’s the user that matters, not that “tablets” or “phones” cost us x amount of money and “do or don’t convert.”

Ever looked at one of those attribution reports (Search Funnels and the like) that are supposed to help you attribute better from the first click through other influences and impressions, right through to the last click before conversion? They’re often pretty useless, especially for long sales cycle or collaborative purchase decisions. For the most part, all we’re still getting are the really obvious purchase paths from people who did all their thinking on the same device in a relatively short period of time. So we’re left with the impression that last-click attribution is still “pretty accurate” and as for complex paths to purchase justifying a more nuanced approach to your spend, “don’t get your hopes up.”

What if we could, at least, cross the chasm among devices (and yes, even between work and home), over an extended period of time? What if logged-in users supplemented the attribution value we get from cookieing users anonymously?

So that’s Fervent Wish #1: “lost” user patterns are united across devices over a longer period of time, and we’re given the option to easily incorporate our preferred attribution models into a scoring system or the like, to assist with bidding.
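As a sketch of what such a scoring system might look like, here’s a simple position-based (40/20/40) attribution model, one common alternative to last-click. The function, the click path, and the revenue figure are all hypothetical illustrations, not any actual AdWords feature:

```python
# A sketch of the kind of "scoring system" Fervent Wish #1 asks for:
# once clicks are united across devices, split conversion credit across
# the whole path with your preferred model. This position-based
# (40/20/40) split is one common choice; keywords and revenue invented.

def position_based_credit(path, revenue, first=0.4, last=0.4):
    """Split revenue credit: 40% to the first click, 40% to the last,
    the remaining 20% spread over the middle (50/50 for 2-click paths)."""
    if len(path) == 1:
        return {path[0]: revenue}
    credit = {kw: 0.0 for kw in path}
    if len(path) == 2:
        credit[path[0]] += revenue / 2
        credit[path[-1]] += revenue / 2
        return credit
    credit[path[0]] += first * revenue
    credit[path[-1]] += last * revenue
    middle = path[1:-1]
    for kw in middle:
        credit[kw] += (1 - first - last) * revenue / len(middle)
    return credit

# A hypothetical three-click path to a $170 sale:
path = ["organic poultry", "poultry recipes", "brand name + poultry"]
print(position_based_credit(path, revenue=170.0))
```

Under last-click, the broad first keyword gets zero credit; under this model it earns 40%, which changes the bidding conversation entirely.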

The second trend is accurate (not extrapolated or guesstimated) demographic information. We know it’s about to explode, because it’s sitting right there in the stats for Display campaigns, and it’s already somewhat available in Bing Ads. Google isn’t really telling us exactly how it can get gender and age information for so many users today, as compared with the weak numbers it used to give us a few years ago. But I’ve seen some accounts where about 76% of these demographics are deemed certain by Google, and only 24% are unknown. “That’s great,” you say, because it might make us a little less wary of advertising on the Display Network, “but I can’t apply those factors for Search.”

So that’s Fervent Wish #2: that Google will give us the ability to apply demographic bid factors to Search campaigns, should we so desire. Sound like a small thing? I’ve seen innocuous, “unisex”-looking products that convert twice as well for men as for women. Twice as well! That’s not a small factor. But is that because, maybe, one sex is more impulsive than the other, and we’re just missing information on device switching, longer consideration cycles, etc.? No problem! That ought to be covered by Fervent Wish #1, which amounts to better cross-device, home-vs.-work attribution (though it wouldn’t cover lengthy familial discussions across multiple user accounts).

Many of today’s campaigns face challenges because of the competitive environment. Sometimes success is much closer than we think. Don’t throw in the towel… persevere!

Many struggling accounts may be in better shape than they look, because they’re just not quite getting the credit they deserve. First and second and other high-funnel clicks, in particular, need to be given more weight. Display campaigns — and display impressions — need to have better attribution models.

I’m inclined to believe this because of some glimmers of hard evidence I’ve seen. For example, we experiment with Remarketing Lists for Search Ads (RLSA). We’re able to coax very high conversion rates out of some really broad terms about lifestyle, product categories, etc. Let’s say it’s the word “poultry” or the word “organic”. A terrible keyword to bid on for just about anyone. But poultry is converting right and left for our remarketing audiences. And there’s no way any of those users went anywhere near the poultry section of the website on their own, so the reason they knew about it in the first place was our regular PPC bids on high-funnel poultry words. Which, by the numbers (last-click attribution), convert terribly. So all it takes for the poultry crowd to convert is two or three clicks instead of one (our later bids much higher, because we’re now bidding on a much smaller, more qualified audience). Instead of a total of $42, let’s say, for the clicks that caused the $170 in sales, we actually paid a total of $75, which is still on target.
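A quick back-of-envelope on those figures shows why last-click attribution flatters the final RLSA click and buries the broad terms:

```python
# The poultry math, using the figures from the example: $42 is what
# last-click attribution credits for the $170 sale; $75 is what the
# whole click path actually cost.

sale_value = 170.00
last_click_cost = 42.00  # the final, high-bid RLSA clicks only
full_path_cost = 75.00   # including the earlier broad "poultry" clicks

last_click_cos = last_click_cost / sale_value  # apparent cost of sale
true_cos = full_path_cost / sale_value         # real cost of sale

print(f"Apparent cost of sale (last click): {last_click_cos:.0%}")  # 25%
print(f"Real cost of sale (whole path):     {true_cos:.0%}")        # 44%
```

A 44% true cost of sale can still be on target for the right margins; the danger is that last-click reporting tempts you to pause the broad keywords that make the profitable final click possible.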

The point is we never have enough hard evidence that those initial low-performing poultry words really are working. We’d love to know.

On top of the attribution issue, then, as mentioned, many advertisers can benefit from layering additional detail such as demographics on top of geo, time of day, and other factors they’re already adding to accounts.

The majority of advertisers, I suspect, still come to play with the overconfident notion that they can fiddle around with a few keywords and win business. Here’s to you, the obsessed ones, who will take advantage of the newly available features and squeeze out surprising ROI in a difficult, competitive online auction.

If neither of the two fervent wishes is granted, of course, you can disregard everything I’ve said here (for now).

And P.S. — if Google provides a bold step forward in crossing the device chasm, it changes the conversation about how effective the “channels” are, so it changes the whole context for the advertiser desire for “tablet control”. And it also allows us to remarket more sensibly, given that remarketing “tablet to tablet only” or “smartphone to smartphone only” is oddly siloed thinking in a multi-screen world. Do I expect Google to restore “tablet only” bidding capability? Hmm… I suspect they’re working more long term than that. Where is that puck really going? When only 30% of conversions or less are coming from desktops and laptops, when only half of those are working with even reasonably certain attribution models, won’t we need appropriate tools and models to allocate our spend better across all online channels and segments?

For those of us who were around then, it would be convenient to say we remember the day it happened, or the year it happened, and we warmly embraced that hypertext world he created from Day One. For most of us, though, it was a little more complicated. For most of us there might have been a delayed reaction, dismissing “whatever that is” and carrying on using whatever tools we had at our disposal.

Even before the great advance of the Web and its amazing hyperlinked, standardized architecture, a relatively small elite relied on Internet access. Most such individuals were connected with universities and research centers — true “cyber-geeks” who used various tools to chat, connect, and send files.

The Internet, and the Web that came along on top of it, began as an earnest, elite medium. Its geeks were rare, not hanging around in every cafe. And it wasn’t all self-referential and juvenile: scientists used it to send each other research papers, to comment on them, and so on.

When did the Web move to being a truly mass medium? When Webcrawler made it easier to search? When Yahoo came along? When Amazon reached some interesting sales benchmarks? When a good version of Netscape and a faster modem finally made the thing more fun and interactive? When Google took over as the search leader and was able to maintain a sustained age of dominance due to its harder-to-game search algorithm and its uniquely subtle ad ranking formula that didn’t bother search engine users nearly as much as some might have feared? When Google acquired, subsidized, and nurtured YouTube? When Facebook — more like an old AOL or bulletin board walled garden than most things we associate with ‘The Web’ — became ubiquitous?

Yes to all of the above, and much more besides. Much online activity now transcends the Web architecture. It’s heavily interactive – sometimes decentralized, sometimes not.

In the end, it’s a fiction that a standardized architecture and “no power center” lead to a nirvana of openness and freedom. Power centers tend to crop up anyway. Anarchy’s rules shouldn’t be taken at face value.

Likewise, a robust infrastructure of interstates and train tracks, and a nation built on a tradition of small farms and hard work, didn’t mean that it turned out to be a fair fight between “Big Food” and the average person’s waistline, or (call it) the Slow Food Movement. Take a good look at Michael Moss’s Salt Sugar Fat for an interesting account of how one “side” — huge companies with everything to gain — literally “takes aim” at the mass market, trying to get it hooked on something (convenience in general, but most notably sugar, which profoundly affects health) that flies in the face of common sense. Today in a Walgreens, near the checkout, I saw probably the biggest selection of candy I’ve ever seen in a drugstore. The share of such items relative to everything else in the store has grown inexorably over the years. One item, Swedish Fish, is billed as “A Fat-Free Food” (!). There was also one of the largest selections of cigarettes I’d seen in a while behind the counter — many of them different types of Marlboros. (Great business model, right? Profit from the causes of disease, then profit from the cures.) Irritatingly, they’ve discontinued beer and wine sales at that location. As I checked out, the clerk signed off with the company’s “signature greeting”: ‘Be Well.’

Back to the free and open Web. Personally, I’m nostalgic for the good old days when I got to send and receive little emails and files through a hard-to-use medium on a slow connection. It felt quiet, sane, and sheltered from the urges of commerce. Today, I work in the unsheltered version of that, making a living from it. Somehow, we aren’t all able to parlay our higher education into quiet lives as poets or professors. (The food science campuses, for their part, are crawling with clever Ph.D.’s.)

Over the years, a few privileged folks (read a bio of Steve Jobs and that will certainly be confirmed) have had the type of lifestyle where they can really “dig in” and embrace the ins and outs of nutrition, cuisine, etc. Those who have dined at the eateries of elite chefs like Alice Waters are about as rare as those who have box seats for a Knicks or Lakers game. In spite of all of today’s talk of the rise of a foodie nation, obesity has reached epidemic proportions.

In that industry, there were versions of “Don’t Be Evil” at one time, too. But as competition heated up, as conglomerates grew to hate one another, and as Wall Street financed growth through M&A, the gloves came off, and the concern for health also waned. Have a look at this lengthy excerpt from Moss’s book:

Knocking an hour or two off that (pudding-making) ordeal would give a competitor a decisive advantage, the General Foods executives realized. They asked Clausi to get there first by inventing an instant formula.

Some food creations happen in a flash. Most take months. This one took years. From 1947 to 1950, Clausi and his team cooked, ate, and breathed pudding. They tinkered with its chemical composition. They played with its physical structure. General Foods preferred using cornstarch as the base, but Clausi’s crew looked at potatoes and every other starch they could find, including the sago palm, which Clausi tracked down himself after traveling, via prop plane, to Indonesia. Nothing worked. The problem was that, at the time, General Foods was staunchly committed to pure ingredients. [Emphasis mine.] Food additives such as boric acid, a preservative, and artificial dyes were showing up in more and more items on the grocery shelf, but General Foods knew that consumers had deep trepidations about these ingredients, especially those that were synthetic. Clausi’s marching orders, then, had been quite strict: He was to create his instant pudding using only starch, sugar, and natural flavorings.

That all changed in the summer of 1949 when he returned from two weeks [sic] of fishing in the Catskills to find that all hell had broken loose. A competitor, National Brands, had filed for a patent on instant pudding by using not one synthetic but a blend of synthetics, including an orthophosphate that was usually added to drinking water supplies to prevent corrosion… a pyrophosphate, which thickens foods; and water-soluble salts like calcium acetate, which extend shelf life. On his desk the first day back was an envelope marked ‘Open Immediately’. Inside was National’s patent application. And when he went to see his boss, the section head of desserts, Clausi was told that the rules had changed, public fears be damned. “He said, ‘Marketing wants us to outdo the competition,’” Clausi told me. “That it was urgent. And when I asked if it still had to be 100 per cent starch, he said, ‘That’s all out the window. Just come up with an instant pudding that can be made in thirty minutes.’ Overnight, the constraints were removed.”

As Seth Godin often argues — from his (and my) perspective — “elitist” isn’t a dirty word. He recently decried the “fabled Oreo tweet” and “the now-legendary Ellen selfie” as further dragging thinking people into a morass of trivia.

But mass markets are massive profit centers for somebody. That gives somebody (many somebodies) a great deal of incentive to sit in their glass-and-steel campus bunker coming up with ways to ‘optimize’ (that’s what the food scientists call it, believe it or not) products to make them addictive to consumers — to find their ‘bliss point.’ Food scientists began using multivariate testing methods as early as the 1950s, at General Foods. They’ve gotten very good at it indeed.

It’s hard not to see a parallel with today’s digital world. While I (and Mr. Berners-Lee) may have the wherewithal to dine more frequently than the average person on the digital and informational equivalent of raw broccoli, hummus, and spicy cashew nuts, that’s not, seemingly, where a lot of this is headed. Giants in our industry, just like those giants in the food industry, make more money if they remove choice, and if openness is just a slogan.

It’s not all bad. For starters, much like Jell-O Instant Pudding, our digital life is now incredibly convenient. Didn’t it used to suck when we had to struggle with finding a street address, or just take a chance on a restaurant without checking Yelp? Indeed. We’ve reached our bliss point. It’s a brave new world.

There’s more to say on the issue — on the part about it not being all bad. Part 2 of this post will be along shortly.

Everyone in PPC (and related display advertising) today knows that ad rank is determined in part by your bid, and in part by a multifaceted relevancy measure called Quality Score.

What many still don’t realize is that Quality Score doesn’t merely determine rank on the page (of the ad listings), but also impression share or the frequency of delivery (Google currently refers to this as “ad auction eligibility”). With lower Quality Scores, ads may simply be shown less often, rather than just dropping down in position. Not only does that enforce relevancy standards, it provides a handy lever that Google can use to tweak its profitability. Change the algorithm a little bit, and some advertisers are forced to ramp up bids if they want fuller delivery of their ads. Google’s auction regime includes handy little “framing” features, like the estimated “first page bid” notation that implies you won’t even make it onto the first page of ads if you don’t up your bid.
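
The interplay of bid, Quality Score, and eligibility can be made concrete with a toy model. This is a sketch only — Google’s actual formula is undisclosed and more complex — but the classic public approximation is ad rank = max CPC bid × Quality Score, with some threshold below which an ad simply isn’t served. The threshold value and the numbers here are hypothetical.

```python
# Toy model of the auction: ad_rank = bid * quality_score, with a
# hypothetical eligibility threshold that suppresses low-quality ads
# entirely (shown less often or not at all, rather than just lower).

ELIGIBILITY_THRESHOLD = 4.0  # hypothetical minimum ad rank to enter the auction

def rank_ads(advertisers):
    """advertisers: list of (name, max_cpc_bid, quality_score) tuples."""
    eligible = [
        (name, bid * qs)
        for name, bid, qs in advertisers
        if bid * qs >= ELIGIBILITY_THRESHOLD  # low QS can mean no impression at all
    ]
    # Higher ad rank wins a higher position on the page
    return sorted(eligible, key=lambda ad: ad[1], reverse=True)

auction = [
    ("A", 2.00, 7),   # modest bid, strong Quality Score
    ("B", 5.00, 2),   # high bid, weak Quality Score
    ("C", 1.00, 3),   # falls below the eligibility threshold
]
print(rank_ads(auction))  # A (rank 14.0) beats B (rank 10.0); C is filtered out
```

Note how advertiser B, despite bidding more than twice as much, ranks below A — and how raising the threshold a notch would force B to ramp up bids for fuller delivery, which is exactly the profitability lever described above.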

There continue to be many other nuances of Quality Score that are really only taken into account by a tiny minority of advertisers:

Quality Score is calculated on the fly for every query, for all eligible advertisers in a given keyword auction. What we see in our accounts next to the keywords (if you Customize Columns to view this) are reported averages.
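
To make that distinction concrete, here is a minimal sketch (with made-up numbers) of how a reported keyword-level score could be an impression-weighted average over many per-auction scores. The real aggregation method isn’t disclosed; this just illustrates why the column in your account is a summary, not the live value used in any given auction.

```python
# Hypothetical per-auction Quality Scores for one keyword across queries.
# The number displayed next to the keyword would be an aggregate like this
# impression-weighted average, not the score computed on the fly per query.

auction_scores = [
    (7, 120),  # (quality score in that auction, impressions at that score)
    (5, 300),
    (9, 80),
]

total_impressions = sum(n for _, n in auction_scores)
weighted_avg = sum(qs * n for qs, n in auction_scores) / total_impressions
print(round(weighted_avg, 2))  # 6.12 — no single auction actually scored 6
```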

Quality Score is predictive until a keyword builds up a significant amount of history. Perfectly good keywords might come in at 3 or 4, and eventually go up.

Some kinds of keywords (inherently ambiguous ones) may always attract consumer-oriented information seekers, etc., not people looking for your highly specialized software. That’s why you might never crack 4 or 5 on your phrase match for “prevent phishing,” despite your perception that the keyword is relevant. But your longer phrases that include “software” in the phrase (etc.), might clock in with a 7, 8, or 10. In B2B, it’s not all about the Quality Score. It’s about doing your best to find customers at the best possible CPA. Sometimes you’ll get cues from Quality Score, but that’s about it.
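
The “CPA over Quality Score” point can be shown with hypothetical numbers: a keyword with a mediocre score can still be the better buy once you divide spend by conversions.

```python
# Illustrative only: ranking keywords by cost per acquisition (CPA)
# rather than by Quality Score. All figures are invented.

keywords = [
    {"keyword": "prevent phishing", "qs": 3, "cost": 500.0, "conversions": 10},
    {"keyword": "anti-phishing software", "qs": 8, "cost": 900.0, "conversions": 12},
]

for kw in keywords:
    kw["cpa"] = kw["cost"] / kw["conversions"]  # CPA = spend / conversions

# Lower CPA first; the low-QS keyword can still win on the metric that matters
for kw in sorted(keywords, key=lambda k: k["cpa"]):
    print(kw["keyword"], "QS", kw["qs"], "CPA", round(kw["cpa"], 2))
```

Here the Quality Score 3 keyword acquires customers at $50 apiece versus $75 for the “better-scoring” one — a cue that pausing it purely over its score would be a mistake.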

Quality Score history is important, but we don’t know how far back that goes, how much of it is needed for optimal results, or how or when it degrades.

There is an available “three-factor” breakdown of “components” of Quality Score by keyword, but it is not terribly informative, since it is not clearly actionable. The factors are Expected CTR, Ad Relevance, and Landing Page Experience.

We don’t really know what the “ad relevance” component of Quality Score is, though Google refers to the keywords in an ad group not being “specific enough” to the ads. This factor may seem unnecessary, given that data is already collected on CTR and user behavior, but trust Google to meddle further in relevancy matters: they’re a search engine, after all!

Landing page experience is important, but this component isn’t very actionable — it certainly isn’t typical that an advertiser runs an A/B landing page test and gets usable information back about how it affected Quality Score. Indeed, the only case studies I’ve seen have cited just the opposite: a lengthy test period with inconclusive or confounding results. I do believe that a big step up in the user experience via a site redesign, testing the appropriate level of granularity for landing pages, and so on, will score you a win on this Quality Score component, which might give you a slight boost in Ad Rank. But we’re talking about a full redesign or upgrade in UX — not to be taken lightly, and something you should probably do for all the right reasons anyway. (Regardless, there are some guidelines all advertisers should be aware of. In particular, avoid practices that decrease trust with users or annoy them.)

Edit an ad, lose the old ad’s Quality Score history (i.e. something resets). How harmful this is to the score for your “keyword and matched ad” isn’t known, but it’s important to understand that all testing in AdWords comes at a cost, as does a blanket change in display and/or destination URL.

Negative keywords are always a good idea, if they make their case on their own merit. Google won’t confirm how important they are as an aid to keyword Quality Score, though. As with many other factors, the story from Google is subject to change.

In case you missed the point I was trying to make: regular keyword Quality Score is nearly as mysterious as the versions that aren’t even reported, so it isn’t very actionable.

There are some obvious best practices that will probably get you better Quality Scores:

A well-organized campaign — because ads & landing pages will be more relevant to queries that the related keywords in your ad groups trigger ads on.

Avoiding keywords based on self-indulgent theories — terms that simply mean something different to the consumer than what you’re trying to put in front of them. “Weight loss ideas” as a keyword, when you’re selling jump ropes: hey, it could work, but if it persists with a Quality Score of 2, pause it. It’s doing more harm than good.

The Quality Score history component at the “URL level” is an interesting and murky way Google can reward brands, but it might also be good for you if you aren’t a big brand. It might be a way of ensuring that established, trusted advertisers get a slight boost over upstarts and tinkerers.

Ad testing: go with ROI or conversion rate related metrics when you can. But in the case of a tie, consider letting the higher CTR ad win. Some advertisers might want to go all in for CTR, if volume is much more important than profit.
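
That tie-break rule can be sketched in a few lines. The tie margin and the sample stats below are illustrative assumptions, and a real test would also check statistical significance before declaring a winner.

```python
# Sketch of the tie-break above: prefer conversion rate, and when two ads
# are effectively tied on it, let the higher-CTR ad win. The tie_margin
# value is an arbitrary illustration, not a recommended threshold.

def pick_winner(ad_a, ad_b, tie_margin=0.001):
    """Each ad is a dict with impressions, clicks, conversions."""
    def conv_rate(ad):
        return ad["conversions"] / ad["clicks"] if ad["clicks"] else 0.0

    def ctr(ad):
        return ad["clicks"] / ad["impressions"] if ad["impressions"] else 0.0

    diff = conv_rate(ad_a) - conv_rate(ad_b)
    if abs(diff) <= tie_margin:  # effectively tied on conversion rate
        return ad_a if ctr(ad_a) >= ctr(ad_b) else ad_b
    return ad_a if diff > 0 else ad_b

a = {"impressions": 10000, "clicks": 400, "conversions": 20}  # CTR 4%, CR 5%
b = {"impressions": 10000, "clicks": 600, "conversions": 30}  # CTR 6%, CR 5%
print(pick_winner(a, b))  # tied on conversion rate, so the higher-CTR ad b wins
```

An advertiser who values volume over profit would simply invert the priority and compare CTR first.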

Here’s a big one for me. Google explicitly states that Quality Score history at the account-wide level is a factor. We don’t know how big a factor, but it affects every auction for every keyword in the account to some degree. That makes a strong case for professional campaign management. Sloppy, messy, irresponsible, lazy, irrelevant, etc. campaigns pay some penalty. Best practices (campaign organization, meticulous testing) pay off over time. You can’t prove it with an A/B test, but the benefit is there. Google says so!

Some “advanced tactics pushers” will try to convince you that there are certain more esoteric Quality Score voodoo tactics that can give you a magical lift. They have rarely if ever proven any of these claims.

Is Quality Score super-important? Yes. It’s very important to understand how it works. But it is not actionable or testable in the way that many advocates claim.