According to Airbus and CERA, although cumulative air traffic has grown roughly 45% since 2000, fuel consumed by the global fleet of aircraft is up less than 5% over the same period, as airlines have accelerated the parking and retirement of older airplane models and ordered newer, more efficient replacements at a record pace. Greater efficiency (e.g. higher load factors) and fleet renewal are at the heart of an airline’s competitiveness in a world where fuel is now an airline’s largest single operating cost; that became the case in the middle of the last decade for the first time since US deregulation in the late 1970s.

The chart of revenues below says it all. The beginning revenue of Blockbuster was $6 billion, while the ending revenue of Netflix is $2.2 billion. When the inefficiencies of having retail locations, moving physical inventory, and maintaining overhead/staff are cut out of the ecosystem, far less revenue is needed to support the whole business.

If data centers are the brains of an information company, then Google is one of the brainiest there is. Though always evolving, it is, fundamentally, in the business of knowing everything. Here are some of the ways it stays sharp.

For tackling massive amounts of data, the main weapon in Google’s arsenal is MapReduce, a system developed by the company itself. Whereas other frameworks require a thoroughly tagged and rigorously organized database, MapReduce breaks the process down into simple steps, allowing it to deal with any type of data, which it distributes across a legion of machines.

Looking at MapReduce in 2008, Wired imagined the task of determining word frequency in Google Books. As its name would suggest, the MapReduce magic comes from two main steps: mapping and reducing.

The first of these, the mapping, is where MapReduce is unique. A master computer evaluates the request and then divvies it up into smaller, more manageable “sub-problems,” which are assigned to other computers. These sub-problems, in turn, may be divided up even further, depending on the complexity of the data set. In our example, the entirety of Google Books would be split, say, by author (but more likely by the order in which they were scanned, or something like that) and distributed to the worker computers.

Then the data is saved. To maximize efficiency, it remains on the worker computers’ local hard drives, as opposed to being sent, the whole petabyte-scale mess of it, back to some central location. Then comes the second central step: reduction. Other worker machines are assigned specifically to the task of grabbing the data from the computers that crunched it and paring it down to a format suitable for solving the problem at hand. In the Google Books example, this second set of machines would reduce and compile the processed data into lists of individual words and the frequency with which they appeared across Google’s digital library.

The finished product of the MapReduce system is, as Wired says, a “data set about your data,” one that has been crafted specifically to answer the initial question. In this case, the new data set would let you query any word and see how often it appeared in Google Books.
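The two steps above can be sketched in a few lines of code. This is a minimal, single-machine toy, not Google's distributed implementation: the shards stand in for the chunks of Google Books handed to worker computers, `map_phase` is the mapping step, and `reduce_phase` is the reduction that compiles the "data set about your data."

```python
from collections import Counter

def map_phase(book_text):
    # Map: each worker emits a (word, 1) pair for every word in its shard.
    return [(word.lower(), 1) for word in book_text.split()]

def reduce_phase(mapped_pairs):
    # Reduce: a second set of workers sums the counts for each word
    # across all shards, yielding the final word-frequency data set.
    totals = Counter()
    for word, count in mapped_pairs:
        totals[word] += count
    return dict(totals)

# Toy "library" standing in for Google Books, split across workers.
shards = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for shard in shards for pair in map_phase(shard)]
word_counts = reduce_phase(mapped)
print(word_counts["the"])  # 3
```

The resulting `word_counts` dictionary is exactly the kind of derived data set described above: query any word and get its frequency across the whole collection.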

MapReduce is one way in which Google manipulates its massive amounts of data, sorting and resorting it into different sets that reveal new meanings and have unique uses. But another Herculean task Google faces is dealing with data that’s not already on its machines. It’s one of the most daunting data sets of all: the internet.

Last month, Wired got a rare look at the “algorithm that rules the web,” and the gist of it is that there is no single, set algorithm. Rather, Google rules the internet by constantly refining its search technologies, charting new territories like social media and refining the ones in which users tread most often with personalized searches.

But of course it’s not just about matching the terms people search for to the web sites that contain them. Amit Singhal, a Google Search guru, explains, “you are not matching words; you are actually trying to match meaning.”

Words are a finite data set. And you don’t need an entire data center to store them—a dictionary does just fine. But meaning is perhaps the most profound data set humanity has ever produced, and it’s one we’re charged with managing every day. Our own mental MapReduce probes for intent and scans for context, informing how we respond to the world around us.

In a sense, Google’s memory may be better than any one individual’s, and complex frameworks like MapReduce ensure that it will only continue to outpace us in that respect. But in terms of the capacity to process meaning, in all of its nuance, any one person could outperform all the machines in the Googleplex. For now, anyway. [Wired, Wikipedia, and Wired]

While RTB will make ad exchanges even more efficient, the real-time part may not be that necessary.

RTB depends on 3 things: 1) inventory, which depends on how many people hit a page to generate an impression; 2) clicks, which depend on people actually clicking something; and 3) bidders: the more niche the inventory, the fewer bidders there will be. Inventory does not change rapidly. Clicks take time to accumulate (to yield click rates, which are a necessary ingredient in the RTB calculation). And if there are too few bidders, the price of the auction “item” won’t appreciate or depreciate much, or rapidly. Because of these 3 things, making bidding real-time versus non-real-time (i.e. overnight batch) may not make it significantly better or move the needle much on efficiency and ROI.
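The click-rate ingredient mentioned above drives the core bid math. Here is a toy sketch of the kind of expected-value calculation a bidder might run; the function name, the 20% margin, and the CTR and value-per-click figures are all hypothetical, chosen only to illustrate why a slow-moving click rate makes real-time recalculation less valuable.

```python
def max_bid_cpm(click_rate, value_per_click, margin=0.20):
    # Expected revenue per impression = click rate * value of a click.
    # CPM prices 1,000 impressions; hold back a profit margin.
    expected_value_per_impression = click_rate * value_per_click
    return expected_value_per_impression * 1000 * (1 - margin)

# Hypothetical numbers: 0.1% CTR, $2.00 value per click.
bid = max_bid_cpm(click_rate=0.001, value_per_click=2.00)
print(round(bid, 2))  # 1.6
```

Since the click rate in this formula accumulates slowly, the maximum rational bid barely changes between an overnight batch recalculation and a real-time one.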

And RTB will still not save “display” ads. The golden age of display was in the mid-90s, when people tolerated ads while they read content. They are now trained to avoid looking at the top and right of web pages. So while RTB may increase the ROI of display ads by lifting click rates from a percentage with too many zeros to count to something slightly higher, display ads are still ignored by users and will still not generate measurable business impact for advertisers.

Great video of the evolution of the Internet: how its basic features changed the habits and expectations of users and therefore changed entire industries forever. It takes the viewer from web 1.0 to web 2.0 to web 3.0 and explains the transitions and the tipping points that, once passed, lead to irreversible changes. Industries trying to hang on to old business models and processes will die and be replaced by new industries operating at new plateaus of efficiency.

placeshifting – watching TV at whatever-the-hell-place they want

niche-busters – blockbusters but for smaller (niche) audiences

analog dollars for digital dimes – with the greater efficiency and measurability of advertising in digital mediums, for every dollar taken out of analog mediums, only dimes need to be put back into digital to achieve similar or greater effect

I know I am wasting half of my ad dollars; I just don’t know which half — is more like “I know I am wasting 99% of my ad dollars” (banner ad click through rates are generously at 1%, which means the other 99% is known to be, for sure, wasted — no more guessing necessary).

measured media = TV, print, radio — which equals not really measurable at all

2009 is the year of the “open agency model.” Many of the largest brands have declared that they are going “open agency mode” in search of lower cost, greater efficiency, and possibly better work. But while this idea may be good in theory, it is very difficult in practice. Having run a “virtual company” since 1996, I know the challenges, as well as the upside. And the conventional wisdom of “you get what you pay for” holds very true here. I’ve outsourced to China and India with varying degrees of success, and usually it took more time to communicate and re-communicate, do and re-do, to get things right. It ended up costing more overall, despite lower unit costs. Furthermore, most clients are experts on their own brand, but may not have the depth of experience in managing complex, global deployments, or perhaps even in managing photo shoots. It may be fun to go on photo shoots, but that doesn’t mean clients can manage them themselves. And having an inexperienced, small agency do it may not be that much more efficient either.

Anheuser-Busch Whacks Retainers for Its Agencies

2009 has also been declared the year of search and social marketing. Many of the biggest brands now realize they must do something in search in order to be found when users are out looking for something. Knowing that 80% of online journeys begin with search (Forrester April 2008), it is more important than ever to be “findable” — after all, if they can’t find you, you don’t exist. Companies are also looking for efficiencies in social marketing — literally having people carry forth their message or amplify it for free. This is a good move because most modern users trust their peers far more than they trust an advertiser’s ad message anyway, according to countless studies.

Digital Consigliere

Dr. Augustine Fou is Digital Consigliere to marketing executives, advising them on digital strategy and Unified Marketing(tm). Dr. Fou has over 17 years of in-the-trenches, hands-on experience, which enables him to provide objective, in-depth assessments of their current marketing programs and recommendations for improving business impact and ROI using digital insights.