MEDIA, TECH, BUSINESS MODELS

From valuations to management cultures, the gap between legacy media companies and digital natives seems to be widening. The chart below maps the issues and shows where efforts should focus.

At conferences and workshops in Estonia, Spain, and the US, most of the discussions I've had recently ended up zeroing in on the cultural divide between legacy media and internet natives. About fifteen years into the digital wave, the tectonic plates seem to be drifting farther apart than ever. On one side, most media brands — the surviving ones — are still struggling with an endless transition. On the other, digital native companies, all with deeply embedded technology, expand at an incredible pace. Hence the central question: can legacy media catch up? What are the most critical levers to pull in order to accelerate change?

Once again, this is not a matter of a caricatural opposition between fossilized media brands and agile, creative media startups. The reality is far more complex. I come from a world in which information had a price and a cost; facts were verified; seasoned editors called the shots; readers were demanding and loyal — and journalists occasionally insular. I come from the culture of great stories, intense competition (now gone), and the certitude that great journalism plays an important role in society.

That said, I simply had the luck to be in the right place at the right time to embrace the new culture: small companies starting from a blank slate, with the unbreakable faith and systemic understanding that combine into a vision of growth and success, all wrapped up in the virtues of risk-taking. I always wanted to believe that the two cultures could be compatible — in fact, I hoped the old world would be able to morph swiftly and efficiently enough to catch the wave, to deal with new kinds of readers, a wider set of technologies, and a protean competition. I still want to believe this.

In the following chart, I list the most critical issues and pinpoint the areas of transformation that are both the most urgent and the easiest to address.

[Footnotes]

1. Funding: The main reason newcomers are able to quickly leave incumbents in the dust. When venture firms compete to provide $160m to Flipboard, $61m to Vox Media, or $96m to BuzzFeed, the consequences are not just staggering valuations. Abundant funds translate into the ability to hire more, and better qualified, people. Just one example: Netflix's recommendation system — critical to both viewer engagement and retention — can count on a $150m yearly budget, far more than the entire revenue of many mid-sized media companies. Fact is: old media companies in transition will never be able to attract such levels of funding, due to inherent scalability limitations (it is extremely rare to see a legacy media corporation suddenly jump out of its ancestral business).

2. Resource allocation. Typically, the management team of a legacy media company will assign just enough resources to launch a product or service and hope for the best. This deliberate scarcity has several consequences. From the start, the project team will be in survival mode, fighting internally against other projects and "historical" operations. And in the (likely) case of a failure, it will be difficult to find the cause: Was the product or service inherently flawed? Or did it fail to achieve "ignition" because the approach was too cautious? The half-baked, half-supported legacy product might stagnate forever, making neither enough money to be seen as a success nor losses significant enough to justify a termination. By contrast, a digital native corporation will go at full throttle from day one, with scores of managers, engineers, and marketers, and sufficient development time for tests, market research, promotion, etc. The idea is to succeed — or to fail, but fast and clearly.

3. Approach to timing. The tragedy for the vast majority of legacy media is that they no longer have the luxury of long-term thinking. Shareholder pressure and weak P&Ls impose quick results. By contrast, most digital companies are built for the long term: Their management is asked to grow, conquer, secure market positions, and then monetize. It can take years, as seen in many instances, from Flipboard to Amazon (which might have pushed the envelope a bit too far).

4. Scalability vs. sustainability. Many reasons — readership structure, structurally constrained markets — explain the difficulty legacy media have in scaling up. At the polar opposite, disrupters like Uber or Airbnb, and super-optimizers such as BuzzFeed or The Huffington Post, are designed and built to scale — globally.

5. Customer relations. On this front, the digital world has reset the standard. All of a sudden, legacy media companies appeared outdated when it comes to customer satisfaction, from poor subscription handling to the virtuous circle of customer acquisition, engagement, and retention.

In the chart above, my allocation of purple dots (feasibility) illustrates the height of the hurdles facing large, established media brands. Many components remain extremely hard to move — I personally experience this on a daily basis. But there is no excuse not to take better care of customers, reward the risk-taking of committed staffers, assign resources decisively, or instill a better sense of competition.

Every generation has its high tech storytellers, pundits who ‘understand’ why products and companies succeed and why they fail. And each next generation tosses out the stories of their elders. Perhaps it’s time to dispense with “Disruption”.

“I’m never wrong.”

Thus spake an East Coast academic, who, in the mid- to late-eighties, parlayed his position into a consulting money pump. He advised — terrorized, actually — big company CEOs with vivid descriptions of their impending failure, and then offered them salvation if they followed his advice. His fee was about $200K per year, per company; he saw no ethical problem in consulting for competing organizations.

The guru and I got into a heated argument while walking around the pool at one of Apple’s regular off-sites. When I disagreed with one of his wild fantasies, his retort never varied: I’m never wrong.

Had I been back in France, I would have told him, in unambiguous and colorful words, what I really thought, but I had acclimated myself to the polite, passive-aggressive California culture and used therapy-speak to “share my feelings of discomfort and puzzlement” at his Never Wrong posture. “I’ve always been proved right… sometimes it simply takes longer than expected”, was his comeback. The integrity of his vision wasn’t to be questioned, even if reality occasionally missed its deadline.

When I had entered the tech business a decade and a half earlier, I marveled at the prophets who could part the sea of facts and reveal the True Way. Then came my brief adventures with the BCG-advised diversification of Exxon into the computer industry.

Preying on the fear of The End of Oil in the late seventies, consultants from the prestigious Boston company hypnotized company executives with their chant: Information Is The Oil of The 21st Century. Four billion dollars later (a lot of money at the time), Exxon finally recognized the cultural mismatch of the venture and returned to the well-oiled habits of its heart and mind.

It was simply a matter of time, but the BCG was ultimately proved right — we now have our new Robber Barons of zeroes and ones. But they were wrong about something even more fundamental but slippery, something they couldn’t divine from their acetate foils: culture.

A little later, we had In Search of Excellence, the 1982 best-seller that turned into a cult. Tom Peters, the more exuberant of the book’s two authors, was a constant on pledge-drive public TV. As I watched him one Sunday morning with the sound off, his sweaty fervor and cutting gestures reminded me of the Bible-thumping preacher, Jimmy “I Sinned Against You” Swaggart. (These were my early days in California; I flipped through a lot of TV channels before Sunday breakfast, dazzled by the excess.)

Within a couple of years, several of the book’s exemplary companies — NCR, Wang, Xerox — weren’t doing so well. Peters’ visibility led to noisy accusations and equally loud denials of faking the data, or at least of carefully picking particulars.

These false prophets commit abuses under the color of authority. They want us to respect their craft as a form of science, when what they’re really doing is what Neil Postman, one of my favorite curmudgeons, views as simple storytelling: They felicitously arrange the facts in order to soothe anxiety in the face of a confusing if not revolting reality. (Two enjoyable and enlightening Postman books: Conscientious Objections, a series of accessible essays, and Amusing Ourselves To Death, heavier, very serious fare.)

Christensen’s body of work is (mostly) complex, sober, and nuanced storytelling that’s ill-served by the overly-simple and bellicose Disruption! battle cry. Nonetheless, I’ll do my share and provide my own tech world simplification: The incumbency of your established company is forever threatened by lower cost versions of the products and services you provide. To avoid impending doom, you must enrich your offering and engorge your price tag. As you abandon the low end, the interloper gains business, muscles up, and chases you farther up the price ladder. Some day — and it’s simply a matter of time — the disruptor will displace you.

According to Christensen, real examples abound. The archetypes, in the tech world, are the evolution of the disk drive, and the disruptive ascension from mainframe to minicomputer to PC – and today’s SDN (Software Defined Networking) entrants.

But recently, skeptical voices have disrupted the Disruption business.

Ben Thompson (@monkbent) wrote a learned paper that explains What Clayton Christensen Got Wrong. In essence, Ben says, disruption theory is an elegant explanation of situations where the customer is a business that’s focused on cost. If the customer is a consumer, price is often trumped by the ineffable values (ease-of-use, primarily) that can only be experienced, that can’t be described in a dry bullet list of features.

More broadly, Christensen came under attack by Jill Lepore, the New Yorker staff writer who, like Christensen, is a Harvard academic. In a piece titled The Disruption Machine: What the gospel of innovation gets wrong, Lepore asserts her credentials as a techie and then proceeds to point out numerous examples where Christensen's vaunted storytelling is at odds with facts [emphasis and edits mine]:

"In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, 'more than all of its U.S. competitors combined,' according to an industry report. In 1997, the year Christensen published 'The Innovator's Dilemma,' Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars. Last year, Seagate shipped its two-billionth disk drive. Most of the entrant firms celebrated by Christensen as triumphant disrupters, on the other hand, no longer exist…

Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the ‘Herculean managerial effort’ of its C.E.O., Stuart Mahon. But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed.

MiniScribe, founded in 1980, started out selling 5.25-inch drives and saw quick success. ‘That was MiniScribe’s hour of glory,’ the company’s founder later said. ‘We had our hour of infamy shortly after that.’ In 1989, MiniScribe was investigated for fraud and soon collapsed; a report charged that the company’s practices included fabricated financial reports and ‘shipping bricks and scrap parts disguised as disk drives.’”

Echoes of the companies that Tom Peters celebrated when he went searching for excellence.

Christensen is admired for his towering intellect and also for his courage facing health challenges — one of my children has witnessed both and can vouch for the scholar’s inspiring presence. Unfortunately, his reaction to Lepore’s criticism was less admirable. In a BusinessWeek interview Christensen sounds miffed and entitled:

“I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.”

At Harvard, of all places. Hmmm…

In another attempt to disprove Jill Lepore's disproof, a San Francisco-based investment banker wrote a scholarly rearrangement of Disruption epicycles. In his TechCrunch post, the gentleman glows with confidence in his use of the theory to predict venture investment successes and failures:

“Adding all survival and failure predictions together, the total gross accuracy was 84 percent.”

and…

“In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.”

Why the venture industry hasn't embraced the model, and why the individual hasn't become richer than Warren Buffett as a result of this unflinching accuracy, remain stories yet to be told.

Back to the Disruption sage: he didn't help his case when, as soon as the iPhone came out, he predicted that Apple's new device was vulnerable to disruption:

“The iPhone is a sustaining technology relative to Nokia. In other words, Apple is leaping ahead on the sustaining curve [by building a better phone]. But the prediction of the theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.”

Not truly disruptive? Five years later, in 2012, Christensen had an opportunity to let “disruptive facts” enter his thinking. But no, he stuck to his contention that Modularity always defeats integration:

“I worry that modularity will do its work on Apple.”

In 2013, Ben Thompson, in his already quoted piece, called Christensen out for sticking to his theory:

“[…] the theory of low-end disruption is fundamentally flawed. And Christensen is going to go 0 for 3.”

Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity or by a swift, regrettable decision. But such predictions are useless; they're storytelling — and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple's failure, complete with a calendar of the main steps towards the preordained ending. No more Wrong on the Timing excuses.

A more interesting turn for a man of Christensen's intellect and reach inside academia would be to become his own Devil's Advocate. Good lawyers pride themselves on researching their cases so well that they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment's profits. He could then draw comparisons to other premium goods that consumers happily choose, from cars to clothes and…watches.

It's still too early to tell if Apple Pay will square the circle and emerge as a payment system that's more secure, more convenient, and widely accepted. MCX, a competing solution that faces even more challenges than Apple Pay, helps shed light on the problem.

Apple Pay was announced on September 9th with the new iPhone 6, and rolled out on October 20th.

Where it works, it works well. The roster of banks and merchants that accept Apple’s new payment system is impressive, with big names such as Visa, American Express, Bank of America, Macy’s, Walgreens, and Whole Foods.

But it doesn’t work everywhere.

At launch, Apple Pay covered just a corner of the territory blanketed by today’s debit and credit cards. Then we had a real surprise. Within 24 hours of the roll-out, a handful of merchants, notably CVS, Rite-Aid, Target, and Wal-Mart, pulled the plug on Apple Pay. Apparently, these retailers suddenly remembered they had signed an exclusive agreement with Merchant Customer Exchange (MCX), a consortium of merchants that’s developing a competing payment system and mobile app called CurrentC. How a company as well-managed as CVS could have “forgotten” about its contract with MCX, and what the threatened consequences were for this lapse of memory aren’t known…yet.

We could wade through the professions of good faith and sworn allegiance (“We are committed to offering convenient, reliable, and secure payment methods that meet the needs of our customers”, says Rite Aid PR flack Ashley Flower), but perhaps we’re better off just listing MCX’s Friends and Foes.

“At last year’s BAI Retail Delivery conference…I asked Mr. Scott [Lee Scott, former Wal-Mart CEO] why, in the face of so many failed consortia before it, would MCX succeed? He said: ‘I don’t know that it will, and I don’t care. As long as Visa suffers.’”

For Wal-Mart and other big merchants, the 1.51% "donation" that Visa skims from every transaction cuts too close to the bone, which is why they banded together to form the MCX consortium.

So we know who MCX’s Foes are…but does it have any Friends?

Not really. Counting the MCX merchants themselves as Friends is a bit of a circular argument — no sin there, it’s business — but it doesn’t build a compelling case for the platform.

What about consumers?

On paper, the MCX idea is simple: You download the CurrentC app onto your mobile phone and connect it to a bank account (ABA routing and account number). When it comes time to pay for a purchase, CurrentC displays a QR code that you present to the cashier. The code is scanned, there’s a bit of network chatter, and money is pumped directly out of your bank account.
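The flow described above can be sketched as a toy simulation. Everything here is my own illustration (the class names, the UUID-as-QR-payload shorthand); MCX has not published its actual protocol:

```python
import uuid

# Toy model of the CurrentC-style flow described above.
# All names and structures are illustrative, not MCX's real design.

class CurrentCApp:
    def __init__(self, routing_number, account_number):
        # The app holds full bank details -- the crux of the trust problem.
        self.bank_account = (routing_number, account_number)

    def generate_qr_payload(self):
        # A one-time code the cashier scans in place of a card swipe.
        return str(uuid.uuid4())

class MCXBackend:
    def __init__(self):
        self.pending = {}  # payload -> app awaiting settlement

    def register(self, payload, app):
        self.pending[payload] = app

    def settle(self, payload, amount):
        # Money is pulled directly from the linked bank account
        # (ACH-style), bypassing the card networks and their fees.
        app = self.pending.pop(payload)
        return {"account": app.bank_account, "debited": amount}

app = CurrentCApp("021000021", "123456789")
backend = MCXBackend()
qr = app.generate_qr_payload()
backend.register(qr, app)
receipt = backend.settle(qr, 42.00)
```

Even in this simplified form, the structural point survives: the app must hold full bank credentials, and the debit bypasses the card networks entirely, which is where the consortium's savings come from.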

Set-up details are still a bit sketchy. For example, the CurrentC trial run required the customer's Social Security and driver's license numbers in addition to the bank info. MCX says it doesn't "expect" to have these additional requirements when CurrentC launches in early 2015, but I'm not sure that it matters. The requirement that the customer supply full banking details and then watch as money is siphoned off without delay is essentially no different from a debit card — but with a middleman inserted into the process. And while debit card use surpassed credit cards as far back as 2007, US shoppers are loath to leave the warm embrace of their credit cards when it comes to big-ticket purchases (average debit card charge in 2012: $37; credit card: $97; see here for yet more esoterica).

What do MCX and CurrentC offer that would entice consumers to abandon their credit and debit cards and give merchants direct access to their bank accounts? The consortium can't offer much in the way of financial incentives, not when the whole point is to avoid Visa's 1.51% processing fee.

Now let’s look at Apple Pay; first, consumers.

Apple has recognized the strong bond between consumers and their credit cards: The average wallet contains 3.7 cards, with a balance of $7.3K outstanding. Apple Pay doesn’t replace credit cards so much as it makes the relationship more secure and convenient.

Set-up is surprisingly error-free — and I'm always expecting bugs (more on that in a future note). The credit card that's connected to your iTunes account is used by default; all you have to do is launch Passbook and re-enter the CVV number on the back. If you want to use a different credit card account, you take a picture of the card and Passbook verifies it with the issuer. Debit cards also work, although you have to call the bank…as in an actual telephone call. In my case, the bank had a dedicated 877 number. Less than 30 seconds later, a confirmation appeared on my device.

Paying is simple: Gently tap the phone on a compatible, NFC-enabled point-of-sale terminal and place a registered finger on the TouchID button; the phone logs the transaction in Passbook and then vibrates pleasantly to confirm.

On the security side, Apple Pay stores your credit card number neither on your phone nor on Apple's servers. Instead, the card is represented by an encrypted token; the most you can ever see are the last four digits of the card — even on an unlocked phone, even when you're deleting a card from your Passbook.

Simplifying a bit (or a lot), during a transaction this encrypted token is sent through the NFC terminal back to your bank where it’s decrypted. Not even the merchant can see the card.
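That token dance can be sketched in a few lines. This is a toy model with invented names, not the real EMV tokenization scheme, which adds per-transaction cryptograms and hardware protection (the Secure Element):

```python
import secrets

# Toy sketch of payment tokenization: the mapping from token to card
# number is held only by the issuing bank; the phone and the merchant
# only ever see the token.

class IssuingBank:
    def __init__(self):
        self.vault = {}  # token -> card number, kept only at the bank

    def provision(self, card_number):
        # Issue a random stand-in for the card (a "device account number").
        token = secrets.token_hex(8)
        self.vault[token] = card_number
        # The phone keeps the token plus the last four digits for display.
        return token, card_number[-4:]

    def authorize(self, token, amount):
        # Only the bank can map the token back to the real card.
        return self.vault.get(token) is not None

bank = IssuingBank()
token, last4 = bank.provision("4111111111111111")

# The merchant's terminal relays the token, never the card number.
approved = bank.authorize(token, 25.00)
```

The point of the sketch: since the vault lives only at the issuer, a breach of the phone or of the merchant's terminal yields nothing directly chargeable.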

We can also count the banks and credit card companies as Friends of Apple Pay. For them, nothing much changes. A small fee goes to Apple (0.15%, about $1 for every $700). Apple Pay isn't meant to make money in itself; its goal is to make iDevices more pleasant and more secure.
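For scale, here is a back-of-the-envelope comparison of the two fee levels quoted in these notes (the 1.51% figure is the merchant-side processing fee mentioned earlier; the 0.15% is Apple's cut, paid by the bank):

```python
# Back-of-the-envelope comparison using the article's figures.
interchange_rate = 0.0151  # Visa's ~1.51% fee, paid by the merchant
apple_rate = 0.0015        # Apple's 0.15% cut, paid by the bank

purchase = 700.00
merchant_cost = purchase * interchange_rate  # what irks Wal-Mart & co.
apple_cut = purchase * apple_rate            # roughly the "$1 per $700"

print(f"Merchant pays the card network: ${merchant_cost:.2f}")
print(f"Bank pays Apple:               ${apple_cut:.2f}")
```

At these rates, Apple's cut is an order of magnitude smaller than the network fee the merchants are rebelling against, which helps explain why the banks, not the merchants, foot Apple's bill.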

Banks also like the potential for cutting down on fraud. In 2013, payment card fraud was pegged at $14B globally with half of that in the US. How deeply Apple Pay will cut into this number isn’t known, but the breadth and warmth of Apple Pay adoption by financial institutions speaks for their expectations. Wells Fargo, for example, put up a large billboard over the 101 freeway and promoted the service on social media:

What about merchants? This is a mixed bag; some seem to be fully on board, although, as ever, we mustn’t judge by what they say for the flackery on the left is just as disingenuous as the flackery on the right. Regard the declaration from pro-Apple Pay Walgreens: “Incorporating the latest mobile technology into our business is another way we are offering ultimate convenience for our customers.” Sound familiar?

Others, such as Wal-Mart, are resolute Foes. As for the fence-sitters, time will tell whether they'll jump into the Apple Pay camp or desert it. It's still very early.

Questions remain regarding “loyalty” programs, a cynical word if there ever was one when considering the roach motels of frequent flyer miles. A quick look at in-app payments provides a possible answer.

One such example, no surprise, is Apple’s own App Store app where you can pay with Apple Pay after scanning an accessory’s barcode. The app triggers a confirmation email that shows that the merchant, Apple, is aware of the transaction. Other merchants can, and will, build their own apps, but there’s still the question of how a loyalty program will work for point-of-sale transactions where merchants can’t see your data.

In a clumsily worded comparison, MCX CEO Dekkers Davidson tries to imply that his company's exclusivity requirement is much like AT&T's arrangement with Apple in the early days of the iPhone, an arrangement that wasn't permanent and that worked out well for both parties. In the meantime, one can visualize Apple engaging in an encircling action, patiently adding partners and features quarter after quarter.

A recent Bain & Co survey paints Europe’s digital future as squeezed between the explosive demand of emerging countries and the dominance of US-based internet giants.

The “Next Billion”, a phrase coined and propagated by the Quartz team, refers to the explosive internet growth in emerging countries — almost entirely fueled by mobile usage. The new phrase got its own conference (last week in NYC, next May 19th in London) and a dedicated section on the Quartz site.

For Europe, the Next Billion will be hard: Last week, the global consulting firm Bain & Co published a survey that exposes what's at stake. The report, titled Generation #hashtag (pdf here), was commissioned by the Forum d'Avignon, a yearly gathering of intellectuals and business people that explores cultural change; the survey was conducted by Bain's staff in Paris and Los Angeles among 7,000 respondents in 10 countries at varying stages of internet development.

The next period is going to be dominated by digital natives, i.e. audiences that will never have known other media vectors for video, communication, news, or entertainment. In emerging countries, powerful forces are now in motion (emphasis mine):

Across the BRICS countries, the percentage of consumers 25 and younger—who are, on average, 40% more prevalent than in developed economies, according to Euromonitor—suggests the rapid rise of Generation #hashtag in emerging markets. (…) Over the past year, smartphone ownership rose significantly in emerging markets: from 45% to 50% in China, 14% to 21% in India, 45% to 54% in Brazil and 45% to 63% in Russia.

The chart below shows two important elements. One is the sheer size of Generation #hashtag: the digital population comprising digital "migrants" and "natives". The second, and even more interesting, is the demographic distribution which, for emerging countries, clearly shows the potential:

These demographics reveal two major gaps: disposable income and networking infrastructure. Taking the long view, the first could be overcome by strong growth in key areas. As for the second, in many countries of Asia and Sub-Saharan Africa, landline equipment is staying flat — and sometimes decreasing — while cellular networks are growing like weeds. A couple of weeks ago, Benedict Evans, of the Andreessen Horowitz venture firm, released his Mobile is Eating the World slide deck, from which I extracted these two charts:

These figures might prove to be conservative, as heated competition for the Next Billion has already started among the tech giants. Google's Project Loon and Facebook's future drone network, both aimed at delivering broadband internet in developing countries, are soon to be joined by Elon Musk's plan to deploy 700 micro-satellites, at a cost of $1bn, to serve the same purpose. If we factor in Mark Zuckerberg's idea of a 100x improvement in internet delivery (a 10x reduction in the cost of serving data multiplied by a 10x improvement in compression and caching technologies), we can project the internet's global availability challenge as solved within the next five years or so.

In this picture, Europe faces a huge industrial problem. The players who will benefit from the exploding demand in emerging markets are everywhere but in Europe, except perhaps for Nokia Networks (not the handset division, now owned by Microsoft, but the remaining independent infrastructure business) and the Sweden-based telecom equipment maker Ericsson. The others are mostly Chinese, Taiwanese, and Korean (Samsung and many more), and they cover the entire field, from networking infrastructure to mobile terminals. Symmetrically, most of the engineering brainpower and very large investment capabilities are concentrated in the United States, with immensely rich companies such as Google, Facebook, and Musk's SpaceX focused on capturing this Next Billion.

Europe's options are limited. With no common language, no political leadership, encumbered by a gigantic bureaucracy, and with scattered and uneven access to capital (compared to the keiretsu-like American venture capital system), Europe could choke, squeezed between Asian hardware makers and US-based software giants. Unfortunately, instead of organizing itself to favor the emergence of tech leaders — through decisive education programs and smarter immigration policies, for instance — Europe's main contribution to technology has been the creation of tax havens. A textbook example of a missed opportunity.

Payment systems and user behaviors have evolved over the past three decades. In this first of a two-part Monday Note, I offer a look at the obstacles and developments that preceded the Apple Pay launch.

When I landed in Cupertino in 1985, I was shocked, shocked to find that so much gambling was going on in here. But it wasn’t the Rick’s Café Américain kind of gambling, it was the just-as-chancy use of plastic: Colleagues would heedlessly offer their credit card numbers to merchants over the phone; serious, disciplined executives would hand their AmEx Platinums to their assistants without a second thought.

This insouciant way of doing business was unheard of in my Gallic homeland. The French (and most Europeans) think that trust is something that must be earned, that it has a value that is debased when it’s handed out too freely. They think an American’s trusting optimism is naïve, even infantile.

After I got over my shock, I came to see that my new countrymates weren’t such greenhorns. They understood that if you want to lubricate the wheels of commerce, you have to risk an occasional loss, that the rare, easily-remedied abuses are more than compensated for by a vibrant business. It wasn’t long before I, too, was asking my assistant to run to the store with my Visa to make last-minute purchases before a trip.

The respective attitudes towards trust point to a profound cultural difference between my two countries. But I also noticed other differences that made my new environment feel a little antiquated.

For example, direct deposit and direct deduction weren’t nearly as prevalent in America as in France. In Cupertino, I received a direct deposit paycheck, but checks to cover expenses were still “cut”, and I had to write checks for utilities and taxes and drop them in the mailbox.

Back in Paris, everything had been directly wired into and out of my bank account. Utilities were automatically deducted ten days after the bill was sent, as mandated by law (the delay allowed for protests and stop-payments if warranted). Paying taxes was ingeniously simple: Every month through October, a tenth of last year’s total tax was deducted from your bank account. In November and December, you got a reprieve for Holiday spending fun (or, if your income had gone up, additional tax payments to Uncle François — Mitterrand at the time, not Hollande).

Like a true Frenchman, I once mocked these "primitive" American ways in a conversation with a Bank of America exec in California. A true Californian, she smiled, treated me to a well-rehearsed Feel-Felt-Found comeback, and then, dropping the professional mask, told me that the distrust of electronic commerce that so astonished me here in Silicon Valley (of all places) was nothing compared to Florida, where it's common for retirees to cash their Social Security checks at the bank, count the physical banknotes and coins, and then deposit the money into their accounts.

Perhaps this was the heart of the “Trust Gap” between Europe and the US: Europeans have no problem trusting electronic commerce as long as it doesn’t involve people; Americans trust people, not machines.

My fascination with electronic payment modes preceded my new life in Silicon Valley. In 1981, shortly after starting Apple France, I met Roland Moreno, the colorful Apple ][ hardware and software developer who invented the carte à puce (literally “chip card”, but better known as a “smart card”) that’s found in a growing number of credit cards, and in mobile phones where it’s used as a Subscriber Identity Module (SIM).

The key to Moreno’s device was that it could securely store a small amount of information, hence its applicability to payment cards and mobile phones.

I carried memories of my conversations with Moreno with me to Cupertino. In 1986, we briefly considered adding a smart card reader to the new ADB Mac keyboard, but nothing came of it. A decade later, Apple made a feeble effort to promote the smart card for medical applications such as a patient ID, but nothing came of that, either.

The results of the credit card industry's foray into smart card technology were just as tepid. In 2002, American Express introduced its Blue smart card in the US, with little success:

“But even if you have Blue (and Blue accounts for nearly 10% of AmEx’s 50 million cards), you may still have a question: What the hell does that chip (and smart cards in general) do?

The answer: Mostly, nothing. So few stores have smart-card readers that Blue relies on its magnetic strip for routine charges.”

In the meantime, the secure smart chip found its way into a number of payment cards in Europe, thus broadening the Trust Gap between the Old and New Worlds, and heightening Roland’s virtuous and vehement indignation.

(Moreno, who passed away in 2012, was a true polymath; he was an author, gourmand, inventor of curious musical instruments, and, I add without judgment, an ardent connoisseur of a wide range of earthly delights).

Next came the “Chip and PIN” model. Despite its better security — the customer had to enter a PIN after the smart card was recognized — Chip and PIN never made it to the US, not only because there were no terminals into which the customers could type their PINs (let alone that could read the smart cards in the first place), but, just as important, because there was a reluctance on the part of the credit card companies to disturb ingrained customer behavior.

It appeared that smart cards in the US were destined to butt up against these two insurmountable obstacles: The need for a new infrastructure of payment terminals and a skepticism that American customers would change their ingrained behavior to accept them.

In 2003, I made a bad investment in the payment systems field on behalf of the venture company I had just joined. The entrepreneur who came to us had extensive “domain knowledge” and proposed an elegant way to leap over both the infrastructure and customer-behavior obstacles by forgoing the smart card altogether. Instead, he would secure the credit card’s magnetic stripe.

With its idea of creating “iTunes for the press”, Blendle rattles the news industry’s cage. In spite of blessings from The New York Times and Axel Springer, the shiny new thing might just be a mirage.

Last week, two young Dutch people came up with a string of magic words: “iTunes for the press”, “New York Times”, and “Axel Springer”. The founders of Blendle, Alexander Klöpping and Marten Blankesteijn, were promising a miracle cure to a sick industry: a global system for the distribution of editorial products (the iTunes reference), backed by the gold standard of digital journalism (The New York Times), and also supported by the European leader of the rebellion against Google (Axel Springer). Great casting, great promises. Like handing out Zmapp doses in an Ebola ward.

Blendle’s principle is to unbundle publications and sell stories by the slice, for €0.10 to €0.30 ($0.13 to $0.38) each. (Actually, on Blendle.nl, some articles shoot up to €0.89 or $1.11, publisher’s choice.) Basically, you register and get a €2.50 credit, browse a well-designed kiosk (or an equally good app), and cherry-pick what you want. Blendle added unique features such as the possibility of a refund for a story you don’t like; its founders say it’s a mandatory feature for any e-commerce business (“returns” account for around 4% of transactions). Launched in April on the Dutch market, the service is a success: 135,000 subscribers so far, with 20,000 to 30,000 added each month according to the founders. Not bad for a country of 16 million people with an internet penetration of 94%.

I see many reasons to doubt Blendle’s sustainability as a global business, and I see no benefit for digital media. The idea of unbundling news content is an old one. I recall a 1995 conversation with Nicholas Negroponte, at the time head of MIT’s Media Lab. Back then, he envisioned exactly what Klöpping and Blankesteijn are trying to implement now (both were 8 years old at the time).

Negroponte’s vision never materialized and there are many reasons for this.

The first one is the hyper-abundance of free content, especially in English, a notion completely overlooked by Blendle’s advocates. Years ago, I used to tell my colleagues at Schibsted ASA in Norway that their country was so small (4.5m inhabitants) and their market position so dominant, with the huge traffic machines of their large print and digital publications, that if they put out online text in Pashto, it would still draw serious audience numbers. (Schibsted became a $2.2bn global player thanks to a strong diversification strategy served by remarkable execution.) In Blendle’s case, the Dutch language serves as a cordon sanitaire, a kind of firewall shielding publishers from the interference of free content. In other words, it makes relative sense for De Volkskrant, NRC, or De Telegraaf to join Blendle since they are already well-positioned in a small market.

This cannot work for the English language and its 1.2 billion speakers spread across the world, including 350 million native speakers. Pick any subject in the news cycle — say, Blendle precisely. In a few clicks, I can get an 800-word story from The Economist, a 900-word one from The Guardian, another 700-word article from BusinessWeek and a 1,600-word piece from TechCrunch. And I’m not mentioning the… 24,400 other “Blendle” references that pop up in Google News. In this list, only The Economist intends to join the Dutch service. Hence my question: Would you pay even 20 cents for the Economist story when a profusion of good coverage is available just one click away for free? Me neither.

Second reason for discounting the Blendle model: News media have always built their business on a “cross-subsidy” system. Quite often, high-audience stories that don’t cost much to produce (sports, for example) support low-audience but costly reporting such as foreign coverage or “enterprise journalism” (when editors decide to assign large resources to a worthwhile subject; needless to say, the concept has become an endangered species). Granted, a media powerhouse such as The New York Times still produces unique content that justifies paying for it (on the recent NYT economics, read Ken Doctor’s piece on NiemanLab). But I doubt that buy-by-the-slice Blendle revenue will contribute more than a fraction of a percentage point to the $200m-a-year cost of operating the Grey Lady’s 1,300-staff newsroom.

Third reason: Lack of serendipity. A well-edited publication — print or digital — is a clever assemblage of diversified subjects aimed at triggering readers’ curiosity for topics outside their usual range of interests. That’s not likely to work in Blendle’s model, which relies on three entry points — Trending, Realtime and Staff Picks — that transfer the classic reader-driven serendipity to the editors of the service. I doubt many media are willing to give up the opportunity to capture readers’ attention across the widest possible spectrum by leaving the reins in Blendle’s hands.

Fourth reason: Advertising loss. While digital advertising has mostly been a failure for the news industry, separating ads from content sounds like a weird idea. Today, publishers are working hard to build more granular profiles of their audiences in order to serve them more relevant content, tailored ads, and ancillary products. Content dissemination won’t help this process.

Why then do the NYT and Springer, both strongly attached to the value of their editorial production, jump aboard this boat? For the Times, it might have to do with diversifying revenue streams in every possible way by extracting more dollars from its vast supply of occasional readers. Axel Springer’s motive is different. The German giant is literally obsessed with undermining Google’s de facto dominance of the news sector. Hence the bets it places here and there, buying the French search engine Qwant or taking over the babbling Open Internet Project. Neither looks like a high-potential, scalable move.

Publishers who are tempted by the Blendle model also choose to ignore the damage suffered by the music industry. Once users were given the opportunity to buy each song separately (for a dollar, not 20 cents), ARPU quickly collapsed, and there was no turning back. Also, at the time, paid-for music wasn’t competing against free content (piracy aside) the way today’s paid content must face a profusion of free, sometimes excellent, editorial.

And finally, let’s not forget that the original “iTunes model” is not as shiny as it used to be. Apple’s iTunes ARPU went from $4.3 per user in Q1 2012 to $1.9 per user in Q1 2014, a 56% drop. The reason: Users are massively switching to the flat-fee, no-ownership model of music streaming (hence Apple’s bet on Beats).
Even before it reached news media, the iconic iTunes system was already seriously damaged.
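The arithmetic behind that ARPU decline is simple enough to sketch, using the figures cited above (Python here purely for illustration):

```python
# Back-of-the-envelope check of the iTunes ARPU decline cited above:
# $4.3 per user in Q1 2012, down to $1.9 per user in Q1 2014.

def percent_drop(old: float, new: float) -> float:
    """Decline from old to new, expressed as a percentage of the old value."""
    return (old - new) / old * 100

drop = percent_drop(4.3, 1.9)
print(f"iTunes ARPU drop: {drop:.0f}%")  # prints "iTunes ARPU drop: 56%"
```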

Trading one’s privacy for the benefit of others isn’t an easy decision. Tim Cook just made such a swap, and the reverberations are beginning to be heard.

I’m happy and relieved that Tim Cook decided to “come out”, to renounce his cherished privacy and speak of his sexual orientation in plain terms rather than veiled, contorted misdirections. The unsaid is toxic.

If you haven’t done so already, please take the time to read Tim’s I’m Proud to Be Gay Businessweek editorial. Soberly written and discreetly moving, the piece concludes with:

It’s an admirable cause…but why should I care? Why does this 70-year-old French-born American, a happily married-up father of three adult and inexplicably civilized children, care that Cook’s sexuality is now part of the public record?

First, I like and respect Cook for what he does, how he does it, and the way he handles his critics. For the past three years he’s been bombarded by questions about Apple’s slowing growth and the absent Next Big Thing, he’s been criticized for both hastening and impeding the inevitable commoditization of All Things Apple, he’s been called a liar by the NYT. Above all, he’s had to suffer the hidden — and occasionally blatant — accusation: You’re no Steve Jobs.

Throughout it all, Cook has displayed a preternatural calm in refusing to take the bait. In a previous Monday Note, I attributed his ability to deflect the cruel jibes to his having grown up “different” in Alabama. In his editorial, Cook confirms as much:

“It’s been tough and uncomfortable at times… [but] it’s also given me the skin of a rhinoceros, which comes in handy when you’re the CEO of Apple.”

Second, I’ve seen the ravages of homophobia at close range. A salient and personal example is the young gay architect of our first Palo Alto house. He quickly sensed he could be open with us, and would tease my wife Brigitte by showing her pictures of a glorious group of young bucks on vacation in Greece, adding, “What a loss for females”. But he also told us of his shame when he became aware of his desires in his adolescence, that he kneeled down every night to pray that his god would have mercy and make him “normal”. His parents rejected him and refused to keep in touch, even after the HIV virus made him perilously sick.

One morning when we were driving to his place in San Francisco to deliver a painting Brigitte had made for him, his partner called and told us not to come. Our friend had just passed away, still unaccepted by his parents.

Another personal example. A local therapist, a gay Buddhist, told me he couldn’t work as an M.D. in his native Caracas because the oppressive culture wouldn’t allow a gay man to so much as touch another man — even as a doctor. When he decided to tell his parents he was gay, he had to take them to a California mountain and mellow them with a certain herb before they would hear him out, and even then they didn’t entirely embrace his “choice” of sexuality.

Years of conversation with the fellow — who’s exactly my age — in a setting that facilitates honesty have brought empathy and insights that aren’t prevalent or even encouraged in the Parisian culture I come from, even in the supposedly liberated Left Bank that has been the home of lionized gay men such as Yves Saint-Laurent and Karl Lagerfeld. (I recommend Alicia Drake’s The Beautiful Fall: Lagerfeld, Saint Laurent, and Glorious Excess in 1970s Paris, a well-documented and beautifully written parallel life history.)

This leads me to my third point, brought up by my wife. Gays have always been accepted in creative milieus. In many fields — fashion, certainly, but even in high tech — it’s almost expected that a “designer” is homosexual. Despite counterexamples such as Christian Lacroix, or our own Sir Jony, the stereotype endures.

According to the stereotype, it’s okay for “artistes” (I’ve learned the proper dismissive pronunciation, an elongated ‘eee’ after the first ’t’) to be unconventional, but serious business people must be straight. When I landed in Cupertino in 1985, I became acquainted with the creative <=> gay knee jerk. True-blue business people who didn’t like Apple took to calling us “fags” because of our “creative excesses” and disregard of the establishment.

What Brigitte likes most about Cook’s coming out is that it portends a liberation of the Creative Ghetto. Cook isn’t just outing himself as a gay executive; he’s declaring that being gay — or “creatively excessive”, or unconventional — is fully appropriate at the very top of American business. It helps, she concludes, that Apple’s CEO has made his statement from a position of strength, at a time when the company’s fortunes have reached a new peak and his leadership is more fully recognized than ever.

The ripples are now starting. Perhaps they’ll bring retroactive comfort to execs such as former BP CEO John Browne, who in 2007 left his job fearing a revelation about his lifestyle – and an affirmation to the myriads of “different” people at the bottom of the pyramid.

Tim Cook brings hope of a more accepting world – both inside and outside of business. For this he must be happy, and so am I.

Plummeting iPad sales rekindle fantasies of a hybrid device, a version that adopts PC attributes, something like a better execution of the Microsoft Surface Pro concept. Or not.

For a company that has gained a well-deserved reputation for its genre-shifting — even genre-creating — devices, it might seem odd that these devices evolve relatively slowly, almost reluctantly, after they’ve been introduced.

It took five years for the iPhone screen to evolve: from the original 3.5” display in 2007, to a doubled 326 ppi on the same 3.5” screen for the June 2010 iPhone 4, to a 4” screen for the 2012 iPhone 5.

In the meantime, Samsung’s 5.3” Galaxy Note, released in 2011, was quickly followed by a 5.5” phablet version. Not to be outdone, Sony’s 2013 Xperia Z Ultra reached 6.4” (160 mm). And nothing could match the growth spurt of the long-forgotten (and discontinued) Dell Streak: from 5” in 2010 to 7” a year later.

Moreover, Apple’s leadership has a reputation — again, well-deserved — of being dismissive of the notion that their inspired creations need to evolve. While dealing with the iPhone 4 antenna fracas at a specially convened press event in 2010, a feisty Steve Jobs took the opportunity to ridicule Apple’s Brobdingnagian smartphone rivals, calling them “Hummers” and predicting that no one would buy a phone so big “you can’t get your hand around it”.

A smaller iPad? Nah, you’d have to shave your fingertips. Quoting the Grand Master in October 2010 [emphasis mine]:

“While one could increase the resolution to make up some of the difference, it is meaningless unless your tablet also includes sandpaper, so that the user can sand down their fingers to around one-quarter of their present size. Apple has done expensive user testing on touch interfaces over many years, and we really understand this stuff.

There are clear limits of how close you can place physical elements on a touch screen, before users cannot reliably tap, flick or pinch them. This is one of the key reasons we think the 10-inch screen size is the minimum size required to create great tablet apps.”

For his part, Tim Cook has repeatedly used the “toaster-fridge” metaphor to dismiss the idea that the iPad needs a keyboard… and to diss hybrid tablet-PC devices such as Microsoft’s Surface Pro, starting with an April 2012 Earnings Call [emphasis and stitching mine]:

“You can converge a toaster and a refrigerator, but those aren’t going to be pleasing to the user. […] We are not going to that party, but others might from a defensive point of view.”

Recently, however, Apple management has adopted a more nuanced position. In a May 2013 AllThings D interview, Tim Cook cautiously danced around the iPhone screen size topic — although he didn’t waste the opportunity to throw a barb at Samsung [insert and emphasis mine]:

“We haven’t [done a bigger screen] so far, that doesn’t shut off the future. It takes a lot of really detailed work to do a phone right when you do the hardware, the software and services around it. We’ve chosen to put our energy in getting those right and have made the choices in order to do that and we haven’t become defocused working multiple lines.”

In the same interview, Cook also paid tribute to Jobs’ willingness to reverse course:

“[Jobs] would flip on something so fast that you would forget that he was the one taking the 180 degree polar [opposite] position the day before. I saw it daily. This is a gift, because things do change, and it takes courage to change. It takes courage to say, ‘I was wrong.’ I think he had that.”

That brings us to the future of the iPad. Back in 2012, Cook had expressed high hopes for Apple’s tablet:

“The tablet market is going to be huge… As the ecosystem gets better and better and we continue to double down on making great products, I think the limit here is nowhere in sight.”

Less than two years after the sky-is-the-limit pronouncement, iPad unit sales started to head south and have now fallen for three quarters in a row (-2.3%, -9% and -13% for the latest period). This isn’t to say that the iPad is losing ground to its competitors, unless you include $50 models. Microsoft just claimed $903M in Surface revenue for the quarter ended last September, which, at $1K per hybrid, would be 0.9M units, or double that number if the company only sold its year-old $499 model. For reference, 12.3M iPads were sold in the same period (I don’t know of any company, other than Apple, that discloses its tablet unit volume).
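That unit estimate can be bounded with simple arithmetic; a quick sketch (both price points are the assumptions stated in the text, not Microsoft’s actual, undisclosed mix):

```python
# Bounding Surface unit volume from Microsoft's $903M quarterly revenue,
# under the two price assumptions discussed above.

surface_revenue = 903e6  # dollars, quarter ended September

low = surface_revenue / 1000   # if every unit were a ~$1K Surface Pro
high = surface_revenue / 499   # if every unit were the $499 year-old model

print(f"Between {low / 1e6:.1f}M and {high / 1e6:.1f}M units")
# Either way, an order of magnitude below the 12.3M iPads sold that quarter.
```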

I find Federighi’s remark a bit facile. Yes, touching the screen makes much more ergonomic sense for a tablet than for a laptop, but in view of the turnabouts discussed above, I don’t quite know what to make of the honestly part.

Federighi may be entombed in the OS X and iOS software caves, but can he honestly ignore the beautiful Apple Wireless Keyboard offered as an iPad accessory, or the many Logitech, Incase, and Belkin keyboards in the company’s online store? (Amazon ranks such keyboards between #20 and #30 in its bestseller lists.) Is he suborning others to commit the crime of toaster-fridging?

In any case, the iPad + keyboard combo is an incomplete solution. It’s not that the device suffers from a lack of apps. Despite its poor curation, the App Store’s 675,000 iPad apps offer productivity, entertainment, education, graphic composition and editing, music creation, story-telling, and many other tools. As Father Horace (Dediu) likes to put it, the iPad can be “hired to do interesting jobs”.

No, what’s missing is that the iOS user interface building blocks are not keyboard-friendly. And when you start to list what needs to be done, such as adding a cursor, the iPad hybrid looks more and more like a Mac… but a Mac with smaller margins. The 128GB iPad plus an Apple Keyboard rings up at $131 less than an 11”, 128GB MacBook Air. (As an added benefit, perhaps the Apple toaster-fridge would come bundled with Gene Munster’s repeatedly predicted TV Set.)

On to better science fiction.

Let’s imagine what might happen next quarter when Intel finally ships the long-promised Broadwell processors. The new chips’ primary selling point is reduced power consumption. Broadwell probably won’t dislodge ARM SoCs from smartphones, but a reduced appetite for electricity could enable a smaller, slimmer, lighter MacBook Air 2, with or without a double (linear) density Retina display.

Now consider last quarter’s iPad and Mac numbers, compared to the previous year:

You’re in Apple’s driver seat: Do you try to make the iPad feel more like a Mac despite the risks on many levels (internal engineering, app developers, UI issues), or do you let nature take its course and let the segment of more demanding users gravitate to the Mac, cannibalizing iPad sales as a result? Put another way, are you willing to risk the satisfaction of users who enjoy “pure tablet” simplicity in order to win over customers who will naturally choose a nimbler Mac?

Google’s recent Search Box feature is but one example of the internet giant’s propensity to inflict damage upon itself with weird ideas. The episode sheds light on two serious dangers for Google: its growing disconnect from the real world and its communication shortcomings.

At first, the improved Google search box discreetly introduced on September 5 sounded like a terrific idea: you enter the name of a retailer — say Target or Amazon — and, within Google’s search result page, a second, dedicated search box appears that lets you search the retailer’s own inventory. Weirdly enough, this new feature was not announced in a press release, but merely in a casual Google Webmaster Central Blog post aimed at the tech in-crowd.

Evidently, it was also supposed to be a serious commercial enhancer for the search engine. Here is what it looked like as recently as yesterday:

Google wins on both ends: it keeps users on its own site (a good way to bypass the Amazon gravity well) while, in passing, cashing in on ad modules purchased, in this case, both by Amazon.fr itself, bidding for the keyword “perceuse” (drill) on Google.fr, and by Amazon’s competitors offering the same appliance (whose bids were lower).

In all fairness, the Google Webmaster Blog explains how to bypass the second stage and make a search land directly on the target site, Amazon.fr in our example. Many US e-commerce sites did so. Why Amazon didn’t remains unclear.

Needless to say, this new feature triggered outrage from many e-commerce sites, especially in Europe. (I captured these screenshots on Google.fr because no ads showed up for US retailers, most likely because I’m browsing from Paris.)

For Google’s opponents, it was welcome ammunition. The Open Internet Project immediately summoned a press conference (last Thursday, Oct. 23), inviting journalists seen as supportive of its cause. In a previous Monday Note (see Google and the European media: Back to the Ice Age), I told the story of this advocacy group, mostly controlled by the German publishing giant Axel Springer AG and the French media group Lagardère Active. The latter’s CEO, Denis Olivennes, is well-known for his deft political maneuvers, much less so for his business acumen, having missed scores of digital trains in his long career in retail (he headed the French retailer Fnac) and in the media business.

Realizing its mistake, Google quickly pulled back, removing the search box on several retailers’ sites, and announcing (though unofficially) that it was working on an opt-out system.

This incident is the perfect illustration of two major Google liabilities.

One: Google’s disconnect from the outside world keeps growing. More than ever, it looks like an insulated community, nurturing its own vision of the digital world, with less and less concern for its users who also happen to be its customers. It looks like Google lives in its own space-time (which is not completely a figure of speech since the company maintains its own set of atomic clocks to synchronize its data centers across the world independently from official time sources).

You can actually feel it when hanging around its vast campus, where large luxury buses coming from San Francisco pour out scores of young people, mostly male (70%) and mostly white (61%), produced by the same set of top universities (in order: Stanford, UC Berkeley, Carnegie Mellon, MIT, UCLA…). They are pampered in the best possible way, with free food, on-site dental care, etc. They see the world through the mirrored glass of their offices, their computer screens, and the reams of data that constitute their daily reality.

Google is a brainy but also messy company in which one hemisphere of the brain ignores what the other is doing. Since the right one (the engineers) is particularly creative and productive, the left brain suffers a lot. In this recent case, a group of techies in the huge search division (several thousand people) came up with the idea of an improved search box. Higher up, near the top, someone green-lighted it, and the feature went live in early September. Many people from the left hemisphere (communication, legal, public affairs) may have been kept in the dark, not willfully, but out of the engineering team’s natural cockiness (or naiveté). I do suspect, however, that the business side of the company was in the loop (“Google” and “candor” make a solid oxymoron).

Two: Google has a chronic communication problem. The digital ecosystem is known for quickly testing and learning (as opposed to legacy media that are more into staying and sinking). In practical terms, they fire first and reflect afterwards. And sometimes retract. In the search box incident, the right attitude would have been to put up a communiqué saying basically, “Our genuine priority was to improve the user experience [the mandatory BS], but we found out that many e-retailers strongly disliked this new feature. As a result, we took the following steps, blablabla.” Instead, Google did nothing of the sort, only getting its engineering staff to quietly remove the offending search box.

There is a pattern to Google’s inability to communicate properly. You almost discover by accident that these people are doing stunning things in many fields. When the company is questioned, it almost never responds with the solid data that would make its point, which is simply unbelievable from a company so obsessed with hard facts. Recall Google’s internal adoption of W. Edwards Deming’s motto: “In God we trust; all others must bring data.”

In parallel, the company practices access journalism, picking the writer of its choosing and giving him or her a heads-up on a specific subject in hopes of a good story. Here are two examples, from Wired and The Atlantic.

These long-read, “exclusive” and timely features were reported on location from New Zealand and Australia, respectively. They are actually great, balanced pieces, since both Wired’s Steven Levy and The Atlantic’s Alexis Madrigal are fine journalists.

While it never misses an opportunity to mention its vulnerability, Google is better than anyone else at nurturing it. As Mikhail Gorbachev used to say about the crumbling USSR: “The steering is not connected to the wheels.” We all know what happened.

The news media sector has become heavily dependent on traffic from Facebook and Google. A reliance now dangerously close to addiction. Maybe it’s time to refocus on direct access.

Digital publishers pride themselves on their ability to funnel traffic from search and social, namely Google and Facebook (we’ll see that Twitter, contrary to its large public image, is in fact a minuscule traffic source). In my business, we hunt for the best Search Engine Optimization specialists, social strategists, and community managers to expand the reach of our precious journalistic material; we train and retrain newsroom staff; we equip them with the best tools for analytics and A/B testing to see which headlines best fit the web’s volatile mood… And yet, when a competing story gets a better Google News score, the digital marketing staff gets a stern remark from the news floor. We also compare ourselves with the super-giants of the internet, whose traffic from social reaches double-digit percentages. In short, we do our best to tap into the social and search reservoir of readers.

Consequences vary. Many great news brands today see their direct traffic — that is, readers deliberately typing the site’s URL — fall well below 50%. And the younger the media company (pure players, high-performing click machines such as BuzzFeed), the lower the proportion of direct access, to the benefit of Facebook and Google for the most part. (As I write this, another window on my screen shows the internal report of a pure-player news site: in August, it collected only 11% of its traffic from direct access, vs. 19% from Google and 24% from Facebook — and I’m told it wants to beef up its Facebook pipeline.)

Fact is, the two internet giants now control most of the news traffic. Better still, they collect on both ends of the system.

Consider BuzzFeed. In this story from Marketing Land, BuzzFeed CEO Jonah Peretti claims to get 75% of its traffic from social and to no longer pay much attention to Google. According to last summer’s ComScore data, a typical BuzzFeed viewer reads on average 2.3 articles and spends slightly more than 3 minutes per visit. And when she leaves BuzzFeed, she goes back to the social nest (or to Google-controlled sites) in roughly the same proportion. As for direct access, it amounts to only 6%, and Twitter’s traffic is almost nonexistent (less than 1%). Twitter’s position as a significant traffic contributor is clearly vastly overstated: in real terms, it’s a tiny dot in the readers’ pool. None of this is accidental. BuzzFeed has built a tremendous social/traffic machine that is at the core of its business.

Whether it’s 75% of traffic coming from social for BuzzFeed, or 30% to 40% for Mashable and others of its kind, the growing reliance on social and search raises several questions.

The first concerns the intrinsic valuation of a media company so dependent on a single distribution provider. After all, Google has a proven record of altering its search algorithm without warning. (In all fairness, most modifications are aimed at content farms and others who try to game Google’s search mechanism.) As for Facebook, Mark Zuckerberg is unpredictable; he’s also known to do what he wants with his company, thanks to absolute control of its board of directors (read this Quartz story).

None of the above is especially encouraging. Which company in the world wouldn’t be seen as fragile when depending so much on a small set of uncontrollable distributors?

The second question lies in the value of the incoming traffic. Roughly speaking, for a value-added news media, the number of page views per visit by source goes like this:
Direct Access : 5 to 6 page views
Google Search: 2 to 3
Emailing: ~2
Google News: ~1
Social: ~1
These figures show how good you have to be at collecting readers from social sources to generate the same advertising ARPU as from a loyal reader who comes to your brand because she likes it. Actually, you have to be at least six times better. And the situation is much, much worse if your business model relies heavily on subscriptions (for which social doesn’t bring much conversion when compared, for instance, to highly targeted emails).
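To make the comparison concrete, here is a small sketch of the list above. The midpoint values are my own reading of the ranges, and the premise that ad value scales linearly with page views is a simplification:

```python
# Page views per visit by traffic source, per the figures listed above.
page_views = {
    "direct": 5.5,         # midpoint of the 5-6 range
    "google_search": 2.5,  # midpoint of the 2-3 range
    "email": 2.0,
    "google_news": 1.0,
    "social": 1.0,
}

# Assuming ad revenue scales with page views: how many visits from each
# source does it take to match the value of one direct visit?
for source, pv in page_views.items():
    ratio = page_views["direct"] / pv
    print(f"{source:>13}: {ratio:.1f} visits per direct visit")
# For social, the ratio is 5.5, i.e. roughly the "six times better"
# factor mentioned above.
```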

To be sure, I’m not advocating that we dump social media or search altogether. Both are essential to attract new readers, expand a news brand’s footprint, and build the personal brands of writers and contributors. But when it comes to the true value of a visit, it’s a completely different story. And if we consider that the value of a single reader must be spread over several types of products and services (see my previous column, Diversify or Die), then the direct reader’s value becomes even more critical.

Taken to the extreme, some media do quite well by relying solely on direct access. Netflix, for instance, built its audience entirely through its unique recommendation engine, whose size and scope are staggering. No fewer than 300 people are assigned to analyze, understand, and serve the preferences of the network’s 50 million subscribers (read Alexis Madrigal’s excellent piece published in January in The Atlantic). Netflix’s product chief Neil Hunt, in his keynote at the RecSys conference (go to time code 55:30), sums up his ambition by saying his challenge is “to create 50 million different channels”. To that end, he manages a $150m-a-year data unit. Hunt and his team concentrate their efforts on optimizing the 150 million choices Netflix offers its viewers every day. He says that if only 10% of those choices end up better than they would have been without the recommendation system, and if just 1% of those choices are good enough to prevent the cancellation of a subscription, the effort is worth $500m a year to the company (out of $4.3bn in revenue and $228m in operating income in 2013). While Netflix operates in a totally different area from news, such an achievement is worth meditating upon.
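Hunt’s claim reduces to a quick back-of-the-envelope calculation. A sketch on the figures cited above (the ratios are mine, not Netflix’s internal model):

```python
# Netflix recommendation economics, per the keynote figures cited above.
data_unit_budget = 150e6   # yearly spend on the recommendation/data unit
claimed_value = 500e6      # yearly value Hunt attributes to the effort
subscribers = 50e6

roi = claimed_value / data_unit_budget        # return on the data spend
per_subscriber = claimed_value / subscribers  # value per subscriber per year

print(f"Implied ROI: {roi:.1f}x, or {per_subscriber:.0f} per subscriber per year")
# Roughly a 3.3x return on the recommendation investment.
```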

Maybe it’s time to inject “direct” focus into the obligatory social obsession.