There are some interesting new trends we’re now seeing in programmatic ad buying. For years, purchasing online ads programmatically instead of directly with specific publishers or media companies has been on a steady increase. No more.

MediaRadar has just released its latest Consumer Advertising Report covering ad spending, formats and buying patterns. The new report states that programmatic ad buying declined ~12% when comparing the first quarter of 2017 to the same period in 2016.

More specifically, whereas ~45,000 advertisers purchased advertising programmatically in Q1 2016, that figure dropped to ~39,500 for the same quarter this year.

This change in fortunes may come as a surprise to some. The market has generally been bullish on programmatic ad buying because it is far less labor-intensive to administer those types of programs than direct advertising programs.

There have been ongoing concerns about the potential for fraud, the lack of transparency in ad pricing, and the lack of control over where advertisers’ placements actually appear, but until now, these concerns weren’t strong enough to reverse the steady migration to programmatic buying.

Todd Krizelman, CEO of MediaRadar, had this to say about the new findings:

“For many years, the transition of dollars from direct ad buying to programmatic seemed inevitable, and impossible to roll back. But the near-constant drumbeat of concern over brand safety and fraud in the first six months of 2017 has slowed the tide. There’s more buying of direct advertising, especially sponsored editorial, and programmatically there is a ‘flight to quality’.”

Krizelman touches on another major new finding from the MediaRadar report: how much better native advertising performs than traditional ad units. Audiences tend to look at advertorials more frequently than display ads, and clickthrough rates on mobile native advertising, in particular, are running four times higher than those of mobile display ads.

Not surprisingly, the top market categories for native advertising are ones which lend themselves well to short, pithy stories. Travel, entertainment, home, food and apparel categories score well, as do financial and real estate stories.

The MediaRadar report is based on some pretty exhaustive statistics, with data analyzed from more than 265,000 advertisers covering the buying of digital, native, mobile, video, e-mail and print advertising. For more detailed findings, follow this link.

Ad spending continues to grow at a quite-healthy clip, forecast to increase by about 10% in 2017 according to a study released this month by the Association of National Advertisers.

At the same time, there’s similarly positive news from digital advertising security firm White Ops on the ad fraud front. Its Bot Baseline Report, which analyzes the digital advertising activities of ANA members, is forecasting that economic losses due to bot fraud will decline by approximately 10% this year.

And yet … even with the expected decline, bot fraud is still expected to amount to a whopping $6.5 billion in economic losses.
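Working backward from those two figures gives a sense of the scale: a ~10% decline that still lands at $6.5 billion implies prior-year losses of roughly $7.2 billion. This is a rough inference from the numbers above, not a figure stated in the report:

```python
# Implied prior-year bot-fraud losses, given a ~10% decline landing at $6.5 billion.
losses_2017 = 6.5e9            # White Ops forecast for this year's losses
decline = 0.10                 # forecast year-over-year decline
implied_2016 = losses_2017 / (1 - decline)   # roughly $7.2 billion last year
```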

The White Ops report found that traffic sourcing — that is, purchasing traffic from inorganic sources — remains the single biggest risk factor for fraud.

On the other hand, mobile fraud was considerably lower than expected. Moreover, fraud in programmatic media buys is no longer significantly riskier than in general market buys, thanks to improved filtration controls and procedures at media agencies.

Meanwhile, a new study conducted by Fraudlogix, a fraud detection company that monitors ad traffic for sell-side companies, finds that the majority of ad fraud is concentrated within a very small percentage of sources in the real-time bidding programmatic market.

The Fraudlogix study analyzed ~1.3 billion impressions from nearly 60,000 sources over a month-long period earlier this year. Interestingly, sites with more than 90% fraudulent impressions represented only about 1% of publishers, even while they contributed ~11% of the market’s impressions.

While Fraudlogix found nearly 19% of all impressions to be “fake,” that fraudulent behavior is not spread evenly across the industry. According to its analysis, just 3% of sources are causing more than two-thirds of the ad fraud. [Fraudlogix defines a fake impression as one that generates ad traffic through means such as bots, scripts, click-farms or hijacked devices.]
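Taken together, the study’s figures make the concentration easy to see in a quick back-of-the-envelope calculation. This sketch uses only the numbers reported above; the variable names are mine:

```python
# Quick check of the concentration figures from the Fraudlogix study.
impressions = 1_300_000_000          # impressions analyzed over the month
sources = 60_000                     # ad-traffic sources monitored
fake_share = 0.19                    # share of impressions found to be "fake"

fake_impressions = impressions * fake_share        # about 247 million fake impressions
fraud_heavy_sources = round(sources * 0.03)        # the 3% of sources behind most fraud
fraud_from_those = fake_impressions * (2 / 3)      # more than two-thirds of the fraud
ghost_impressions = impressions * 0.11             # ~11% of impressions from the worst 1% of publishers
```

In other words, on the order of 165 million of the roughly 247 million fake impressions in the sample trace back to only about 1,800 sources.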

As Fraudlogix CEO Hagai Schechter has remarked, “Our industry has a 3% fraud problem, and if we can clamp down on that, everyone but the criminals will be much better for it.”

That’s probably easier said than done, however. Many of the culprits are “ghost” newsfeed sites. These sites are often used for nefarious purposes because they’re programmed to update automatically, making the sites seem “content-fresh” without publishers having to maintain them via human labor.

Characteristics of these “ghost sites” include cookie-cutter design templates … private domain registrations … and Alexa rankings way down in the doldrums. And yet they generate millions of impressions each day.

The bottom line is that the fraud problem remains huge. Three percent of sources may sound like a small figure, but 3% of the ~60,000 sources in the Fraudlogix sample alone works out to roughly 1,800 sources; scaled to the full market, that means thousands of sources causing a ton of ad fraud.

One interesting idea would be to have traffic providers submit to periodic random tests to determine the authenticity of their traffic. Such testing could then establish ratings – some sort of real/faux ranking.

And just like in the old print publications world, traffic providers that won’t consent to be audited would immediately become suspect in the eyes of those paying for the advertising. Wouldn’t that development be a nice one …

Study after study shows that despite the best efforts of marketing specialists to engage their target audiences with interesting, memorable e-mailed content, often those efforts fall on blind eyes or deaf ears.

A just-released study, conducted by presentation software developer Prezi in concert with cognitive neuroscientist Carmen Simon, confirms this dynamic once again. It found that four in five consumers forget most of what they read in e-mails within three days.

Even worse, approximately half of them cannot recall even one single thing about what they’ve read.

The Prezi study went further, attempting to find out why the content is forgotten. Here’s what’s behind all the forgetfulness:

Irrelevant content: ~55%

No motivation to remember the content: ~36%

There’s too much information to retain: ~30%

Distractions: ~18%

Stress: ~9%

The takeaway from this is that people aren’t forgetting content for “existential” reasons, but rather due to the nature of the content itself.

… Which brings us back to the challenge marketers face to make their content interesting and worthwhile enough to engage audiences.

It’s pretty well-accepted that the most compelling content possesses one or more of the following “VEEU” characteristics:

Counterbalancing these terms are words that actually depress interest and engagement. Interestingly, some of the biggest “killer” terms are ones that conjure up images of the classroom: few people want to feel like they’re being lectured to, evidently.

According to Jay Schwedelson, CEO of marketing performance metrics firm Worldata, which conducted research based on more than 5 billion e-mails transmitted during 2017, the word “training” had a negative impact (a response depressant) of ~8% when used in e-mail subject lines.

The word “learn” had a similar dampening effect of ~7% when used as part of the subject line.

For the record, here is a list of some oft-used terms that turn out to be “engagement dampeners” – at least to some degree:

Remember (~11% dampening effect)

Chat (~11%)

Meeting (~10%)

Training (~8%)

Learn (~7%)

Featured (~6%)

Popular belief has it that “question-type” subject lines aren’t a very good idea either, because they introduce a sense of “low energy softness” and project a lack of purposeful action. But the Worldata analysis shows a different result, determining that e-mail subject lines presented in the form of a question tend to drive higher open rates (by approximately 10%).

There are several factors coming together that make life particularly difficult for the sports network. One big problem is the commitment ESPN has made to pay top-dollar for the right to air professional sports events, particularly NFL and NBA games.

These financial commitments are set in stone well into the future, which means that ESPN is locked into high long-term fixed costs (broadcast rights) in exchange for what’s turning out to be declining variable revenues (viewer subscription fees and advertising).

This isn’t a very good financial model at all.

Which brings us to the second big factor: declining subscribers.

Since 2011, the network has lost ~15 million subscribers. So far in 2017, it has been losing an average of ~10,000 subscribers per day.

The financial impact of these losses is significant: all of those lost subscribers amount to more than $1.3 billion per year that is no longer hitting ESPN’s books.
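For a sense of where a figure like $1.3 billion comes from, the arithmetic is straightforward if you assume a per-subscriber affiliate fee in the neighborhood of $7.25 per month. That fee is my assumption for illustration; the article doesn’t state what ESPN charges distributors:

```python
subscribers_lost = 15_000_000   # subscribers lost since 2011
monthly_fee = 7.25              # ASSUMED per-subscriber monthly affiliate fee (not from the article)
annual_revenue_lost = subscribers_lost * monthly_fee * 12   # just over $1.3 billion per year

daily_loss = 10_000             # the 2017 pace of subscriber losses
annualized_2017 = daily_loss * 365   # about 3.65 million subscribers per year
```

Note that the annualized 2017 pace (~3.65 million) is well above the 2011–2016 average of roughly 2.5 million per year, which is why the losses look to be accelerating rather than leveling off.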

Sports journalist Clay Travis predicts that if the current trajectory of subscriber losses continues, ESPN will begin losing money in 2021. (And that’s assuming the subscriber base losses don’t accelerate, an assumption that might be a little too rosy.)

The fundamental question is why so many people are no longer subscribing to ESPN. The predictable answer is because services like Hulu, Netflix and Amazon, with their on-demand services, are squeezing cable/satellite TV and its subscription business model.

One way Disney (ESPN’s parent company) has attempted to maximize viewer subscription revenues over the years has been by bundling the network with other, less lucrative channels in the Disney orbit, such as the History Channel, Vice, Disney Junior and the Lifetime Movie Network. In the Disney constellation of channels, ESPN has been the acknowledged “driver” of subscription revenues all along, with die-hard sports fans being willing to subsidize the other channels – often never watched by these subscribers – as the price of access.

But something else is happening now: ESPN itself has begun to lose viewers as well.

According to television industry publication Broadcasting & Cable, ESPN’s viewership rating has declined ~7% so far this year. ESPN2’s rating is down even further – an eye-popping ~34%.

Percentages like those aren’t driven by “sidebar” incidental factors. Instead, they cut to the core of the programming and the content that’s being offered.

If there’s one programming factor that has tracked closely with ESPN’s viewership declines, it’s the explosion in “sports-talk” programming versus actual “sports game” programming at the network. As Townhall opinion journalist Sean Davis has written:

“If you talk to sports fans and to people who have watched ESPN religiously for most of their lives, they’ll tell you that the problem is the lack of sports and a surplus of shows featuring people screaming at each other. The near-universal sentiment … is that the content provider sidelined actual sports in favor of carnival barkers.”

Davis points out the flaw in ESPN’s shift in colorful terms:

“ESPN went from the worldwide leader in sports to yet another expensive network of dumb people yelling dumb things at other dumb people, all the while forgetting that the most popular entertainment of people yelling about sports stuff for several hours a day – sports talk radio – is free.”

There’s an additional factor in the mix that’s a likely culprit in ESPN’s tribulations – the mixing of sports and politics. That programming decision has turned out to be a great big lightning rod for the network – with more downside than upside consequences.

The question is, why did ESPN even go in that direction?

Most likely, ESPN execs saw the tough numbers on higher costs, subscriber losses and lower ratings, and decided that it needed a larger content pie to attract more consumers.

The reasoning goes, if some people like sports and others like politics, why not combine the two to attract a larger audience, thereby changing the trajectory of the figures?

But that reasoning flies in the face of how people consume political news. In the era of Obama and now Trump, political diehards gravitate to outlets that reinforce their own worldviews: conservatives want news from conservatives; liberals want news from liberals.

MSNBC and the Fox News Channel have figured this out big-time.

But if you’re starting with a cross-partisan mass media audience for sports, as the original ESPN audience most certainly was, trying to combine that with politics means running the risk of losing one-half of your audience.

That’s what’s been happening with ESPN. Intertwining sports with coverage about bathrooms in North Carolina, transgender sports stars, gun control laws and proper national anthem etiquette only gets your numbers going in one direction (down).

The question for ESPN is how it plans to recalibrate and refocus its programming to truly defend its position as the worldwide leader in sports broadcasting. However it decides to position itself in terms of the delivery of its content – television, online, subscription, pay-per-view or other methods – it should refocus on covering live sports events.

Not sports talk … not debate … not politics or sociology, but the sports themselves.

At one time, not so long ago, sports were a safe refuge from politics and the news. ESPN would do itself – and its viewers – a favor if it sought to recapture that spirit.

In the world of business-to-business marketing, all that really matters is producing a constant flow of quality sales leads. According to Clickback CEO Kyle Tkachuk, three-fourths of B-to-B marketers cite lead generation as their most significant objective. Pretty much everything else pales in significance.

This is why content marketing is such an important aspect of commercial marketing campaigns. Customers in the commercial world are always on the lookout for information and insights to help them solve the variety of challenges they face on the manufacturing line, in their product development, quality assurance, customer service and any number of other critical functions.

Suppliers and brands that offer a steady diet of valuable and actionable information are often the ones that end up on a customer’s “short-list” of suppliers when the need to make a purchase finally rolls around.

Thus, the role of content marketers continues to grow – along with the pressures on them to deliver high-quality, targeted leads to their sales forces.

The problem is … a large number of content marketers aren’t all that confident about the effectiveness of their campaigns.

It’s a key takeaway finding from a survey conducted for content marketing software provider SnapApp by research firm Demand Gen. The survey was conducted during the summer and fall of 2016 and published recently in SnapApp’s Campaign Confidence Gap report.

The survey revealed that more than 80% of the content marketers queried reported being just “somewhat” or “not very” confident regarding the effectiveness of their campaigns.

Among the concerns voiced by these content marketers is that the B-to-B audience is becoming less enamored of white papers and other static, lead-gated PDF documents to generate leads.

And yet, those are precisely the vehicles still used most often to deliver informational content.

According to the survey respondents, B-to-B customers not only expect to be given content that is relevant, they’re also less tolerant of resources that fail to speak to their specific areas of interest.

For this reason, one-third of the content managers surveyed reported that they are struggling to come up with effective calls-to-action that capture attention, interest and action instead of being just “noise.”

The inevitable conclusion is that traditional B-to-B marketing strategies and similar “seller-centric” tactics have become stale for buyers.

Some content marketers are attempting to move beyond these conventional approaches and embrace more “content-enabled” campaigns that can address interest points based on a customer’s specific need and facilitate engagement accordingly.

Where such tactics have been attempted, content marketers report somewhat improved results, including higher open rates and an increase in clickthrough rates.

However, the degree of improvement doesn’t appear to be all that impressive. Only about half of the survey respondents reported experiencing improved open rates. Also, two-thirds reported experiencing an increase in clickthrough rates – but only by 5% or less.

Those aren’t exactly eye-popping improvements.

But here’s the thing: Engagement levels with traditional “static” content marketing vehicles are likely to actually decline … so if content-enabled campaigns can arrest the drop-off and even notch improvements in audience engagement, that’s at least something.

Among the tactics content marketers are considering for creating more robust content-enabled campaigns are:

Video

Surveys

Interactive infographics

ROI calculators

Assessments/audits

The hope is that these and other tools will increase customer engagement, allow customers to “self-qualify,” and generate better-quality leads that are a few steps closer to an actual sale.

If all goes well, these content-enabled campaigns will also collect data that helps sales personnel accelerate the entire process.

In the wake of recent election campaigns and referenda in places like the United States, the United Kingdom, France, Austria and the Philippines, it seems that everyone’s talking about “fake news” these days.

People all across the political and socio-economic spectrum are questioning whether the publishing and sharing of “faux” news items is having a deleterious impact on public opinion and actually changing the outcome of consequential events.

The exact definition of the term is difficult to discern, as some people are inclined to level the “fake news” charge against anyone with whom they disagree.

Beyond this, I’ve noticed that some people assign nefarious motives – political or otherwise – to the dissemination of all such news stories. Often the motive is different, however, as over-hyped headlines – many of them having nothing to do with politics or public policy but instead focusing on celebrities or “freak” news events – serve as catnip-like clickbait for viewers who can’t resist their curiosity to find out more.

And to underscore how many people are using Facebook versus more traditional news outlets as a “major” source for their news, this BuzzFeed chart showing the Top 15 information sources says it all:

CNN: ~27% of respondents use as a “major source” of news

Fox News: ~27%

Facebook: ~23%

New York Times: ~18%

Google News: ~17%

Yahoo News: ~16%

Washington Post: ~12%

Huffington Post: ~11%

Twitter: ~10%

BuzzFeed News: ~8%

Business Insider: ~7%

Snapchat: ~6%

Drudge Report: ~5%

Vice: ~5%

Vox: ~4%

Facebook’s algorithm change in 2016 to emphasize friends’ posts over publishers’ has turned that social platform into a pretty big hotbed of fake news activity, as people can’t resist sharing even the most outlandish stories to their network of friends.

Never mind Facebook’s recent steps to change the dynamics by sponsoring fact-checking initiatives and banning fraudulent websites from its ad network; by the accounts I’ve read, those steps haven’t done all that much to curb the orgy of misinformation.

“One popular method … is tapping the competitive market for native ad widgets. Taboola, Revcontent, Adblade and Content.ad are prominently displayed on sites identified with fake news, while there are a few retargeted and programmatic ads sprinkled in. Publishers install these native ad widgets with a simple snippet of code — typically after an approval process — and when readers click on paid links in the widget, the host publisher makes money. The ads are made to appear like related-content suggestions and often promote sensational headlines and direct-marketing offers.”

So attempting to solve the “fake news” problem is a lot more complicated than some people might realize – and it certainly isn’t going to improve because of any sort of “political” change of heart. Forrester market analyst Susan Bidel sums it up thus:

“While steps taken by … entities to curb fake news are admirable, as long as fake news generators can make money from their efforts, the problem won’t go away.”

So there we are. Bottom line: fake news is going to be with us for the duration, whether people like it or not.

What about you? Do you think you can spot every fake news story? Or do you think at least a few of them come in below the radar?

Most people in business know at least one or two people who publish a blog. Chances are, they know people who blog on non-business topics as well.

Have you ever wondered what are the common practices followed by these bloggers? Speaking as someone who has published blog posts since 2009, I certainly have.

Now the “wondering” is over, because Chicago-based web design firm Orbit Media Studios has just published its 2016 Blogger Research Study, which presents the results of surveying ~1,050 bloggers about how they go about their blogging business.

Here are some of the most interesting highlights from the study:

Where do bloggers write their articles?

According to Orbit’s findings, the vast majority of bloggers are creating their content at home or at their home office:

At home/home office: ~81% of respondents cited

At the office: ~32%

Coffee shops or other foodservice establishments: ~19%

Co-working spaces: ~4%

Other locations: ~7% (primarily on trains or planes, or at a library)

What is the length of a typical blog post?

From the Orbit research findings, it’s pretty clear that the most popular blog post length is 500 to 1,000 words. (This one is, for instance.) Anything longer than that quickly migrates into the “feature story” mode:

Less than 500 words: ~21% of respondents cited

500 – 1,000 words: ~61%

1,000 – 1,500 words: ~13%

1,500 – 2,000 words: ~4%

More than 2,000 words: ~1%

Do bloggers use editors, or act as their own editor?

There’s little differentiation in behaviors here: the vast majority of bloggers edit their own work, and fully ~91% of respondents either self-edit or rely on an informal, ad hoc review process. Bottom line: most blog posts are never seen by anyone other than the author before going live:

Edit own work: ~73% of respondents

Show it to one or two people: ~30%

Use a formal editor: ~12%

Use more than one editor: ~3%

How long does it take to write the typical blog post?

The responses ranged widely, but the most common length of time is between one and two hours:

Less than 1 hour: ~17% of respondents cited

1-2 hours: ~37%

2-3 hours: ~20%

3-4 hours: ~13%

More than 4 hours: ~13%

Are bloggers writing for other people besides themselves?

Generally speaking, bloggers are writing for their own publication, but there are many instances where bloggers are writing for clients as well.

75% – 100% of a blogger’s posts written for clients: ~9% of respondents cited

50% – 75%: ~6%

25% – 50%: ~9%

5% – 25%: ~13%

1% – 5%: ~18%

0%: ~47%

How are bloggers driving traffic to their posts?

Two words: social media. Direct e-mail marketing is also a common technique, as is search engine optimization:

Social media marketing: ~94% of respondents cited

Search engine optimization: ~51%

E-mail marketing: ~35%

Influencer outreach: ~15%

Paid services (SEM/social media advertising): ~5%

The high SEO figure is hardly surprising, considering that bloggers are, by definition, focused on writing inherently interesting, newsworthy content.