If you want to learn about data, Chris O’Hara is the right person to ask. O’Hara, who leads global product marketing for Salesforce Marketing Cloud’s suite of data and audience products, is a big believer in the data revolution—but first, marketers need to take stock of what data they actually have.

“Some marketers think they have way more data than they actually have, and others think they don’t have a lot of data but actually do,” O’Hara said.

Before joining Salesforce, O’Hara was at Krux, the data management platform that Salesforce acquired in 2016, working on data marketing. In October, O’Hara, along with Krux alums Tom Chavez and Vivek Vaidya, released a book, “Data Driven,” which dives into how marketers should think about using data to overhaul customer engagement and experience.

Before the book’s release, Adweek talked with O’Hara about the book and about how marketers can leverage the data they have while keeping data privacy and consumer trust in mind. A portion of that conversation, which has been edited and condensed for clarity, is below.

Adweek: A lot of marketers have talked about the importance of getting better at explaining to consumers what exactly is being collected and how exactly data is being used. Do you think it’s the responsibility of tech and advertising companies to explain that to the public?

Chris O’Hara: Marketing is better when you have the permission of consumers. Consumers are entitled to know exactly how their data is being used, and consumers are absolutely entitled to have control over their own data. As you talk about the opportunities to get more personalized with customers, you’re allowed to deliver great personalization if the customer has opted in for you to do that on their behalf. If you do that without their consent, it feels creepy and wrong, right? It’s common sense. We’re always going to lead with the idea that trust comes first and that marketing is better with consent. Period.

You write in your book that the biggest risks of harnessing data are centered around privacy, security and trust. As concerns about data privacy grow, and as data breaches continue to occur, how does the industry best rebuild trust with the public? Where does the industry start with reestablishing trust and maintaining trust with consumers?

It’s all based on permission and an opted-in consumer. I like getting advertising messages that are relevant. When I am shopping for a car and I give Cars.com permission to introduce me to new models and send me an email every week, I appreciate it because I’ve asked for it. When I engage with certain sites on the web, like The Wall Street Journal, where I pay for content, I trust them with a certain amount of my data so they can make my reading experience better. That’s the way it should have been, always. Unfortunately, there are some companies in the space that have taken advantage of little oversight to do otherwise. But what we’ve seen in the market is that companies that do not lead with trust are not valued as highly, or perceived as favorably, as companies that put trust at the center of their relationship with customers.

What’s the biggest misconception marketers have with data?

Something we write about in the book is that some marketers think they have way more data than they actually have, and others think they don’t have a lot of data but actually do. One of Pandora’s SVPs, Dave Smith, came to us and said, ‘I have one of the biggest mobile data assets in the world. Everyone who uses Pandora is logged in, so we know so much about our customers: what kind of cellphone they have, what kind of music they like, perhaps the ages of the kids in their home, when they listen.’ That’s a lot of data. Pandora probably has one of the largest data assets in the entire world. But Pandora doesn’t necessarily know when people are going to buy a car, or what their incomes are. They don’t know when you’re planning on taking a family vacation. So they turned to second- and third-party data to enrich their understanding of consumers.

12 Dec 16 | Linus Gregoriadis

Marketing and IT functions need to work together more closely to achieve the quality of digital infrastructure their organizations need to succeed in an increasingly unforgiving world, according to new research published this week by ClickZ Intelligence.

A survey of both senior marketing and IT professionals has revealed that there are significant differences between these two core business functions in their perception of organizational priorities and the quality of digital infrastructure. Governance frameworks to ensure better alignment between the CMO and CIO are often lacking.

The Backbone of Digital report, freely available from ClickZ (registration required), has also found that, compared to their colleagues in marketing, IT professionals have a much rosier view of the customer experience their companies are delivering across digital channels.

Below I have outlined more detail around three key findings from the research, which is sponsored by communications infrastructure services company Zayo.

IT pros have exaggerated view of the quality of their companies’ current infrastructure

According to the research, 88% of IT respondents describe their company’s infrastructure as ‘cutting-edge’ or ‘good’, compared to only 61% of marketing-focused respondents, a massive difference of 27 percentage points.

The research also looks at the ability of tech infrastructure to deliver across a range of marketing communications channels, with IT respondents and marketers both asked to rate performance.

Both marketers and IT professionals felt that the best engagement and experience is delivered on desktop, rated ‘excellent’ or ‘good’ by 71% and 93% of these groups respectively, with the other channels (mobile website, mobile app, desktop display, mobile display, social and push messaging) trailing behind.

Across the board it is evident that those working in IT have a much more optimistic view of how well they are delivering across the full gamut of digital channels than their marketing counterparts.

It seems likely that those working in more customer-facing departments, i.e. marketers, are generally much more likely to be aware of deficiencies impacting customer experience, which can adversely affect business performance and brand reputation (and often their own bonuses).

A lack of co-operation is undermining excellence in digital delivery

Just 19% of marketers strongly agreed with the statement “marketing and IT work closely together to ensure the best possible delivery of product/service”, and only 11% strongly agreed that they “have a clear governance framework to ensure that CIOs/CTOs and CMOs work together effectively”, suggesting a lack of alignment around marketing and IT business objectives.

This compares to 45% of IT professionals who strongly agreed that “marketing and IT work closely together to ensure the best possible network performance”, and a similar percentage (46%) who strongly agreed that they “have a clear governance framework to ensure that front-end business applications and back-end infrastructure work together effectively”.

While there are differing perceptions about the extent of marketing and IT co-operation, the report concludes that business objectives need to be much better aligned to ensure closer harmony across these core business functions. If a framework to facilitate this is not put in place at the top of the organization, it becomes exponentially more difficult to implement lower down.

Speed of data-processing is crucial – real-time means real-time

Marketers are increasingly aware that the proliferation of data sources at their disposal is only of use to their businesses if they can analyze that information at high speed and transform it into the kind of intelligence that manifests itself as the most relevant and personalized message or call to action for any given site visitor.

According to Mike Plimsoll, Product and Industry Marketing Director at Adobe:

“A couple of years ago the marketing leaders at our biggest clients typically expected that data could be processed within 24 hours and that was fine.

“Now when we talk to our clients the expectation is that data is processed instantly so that when, for example, a customer engages with them on the website, the offer has been instantly updated based on something they’ve just done on another channel. All of a sudden ‘real-time’ really does mean ‘real-time’.”

The ability to harness ‘big data’ has become a pressing concern for IT departments as their colleagues in marketing departments seek to ensure they can take advantage of both structured and unstructured data and ensure the requisite speeds for real-time optimization of targeting, messaging and pricing.

More than half of IT respondents (56%) said that the ability to manage and optimize for big data was currently a ‘very relevant’ topic for their organization, in addition to 37% who said it was ‘quite relevant’.

According to Chris O’Hara, Head of Global Data Strategy at Krux Digital:

“Today, consumers that are used to perfect product recommendations from Amazon and movie recommendations from Netflix expect their online experiences to be personal, email messages to be relevant, and web experiences customized.

“Delivering good customer experience has the dual effect of increasing sales lift, and also reducing churn by keeping customers happy. Things like latency, performance, and data management are all part and parcel of delivering on that concept.”

Please download our Backbone of Digital research, which, in addition to the survey of marketing and IT professionals, is based on in-depth interviews with senior executives at a number of well-known organizations.

We’ve been hearing about big data driving marketing for a long time, and to be honest, most of it is purely aspirational.

Using third-party data to target an ad in real time does deploy some back-end big-data architecture for sure. But the real promise of data-driven marketing has always been that computers, which can crunch more data than people and do it in real time, could find the golden needle of insight in the proverbial haystack of information.

This long-heralded capability is finally moving beyond the early adopters and starting to “cross the chasm” into early majority use among major global marketers and publishers.

Leveraging Machine Learning For Segmentation

Now that huge global marketers are embracing data management technology, they are finally able to start activating their carefully built offline audience personas in today’s multichannel world.

Big marketers were always good at segmentation. All kinds of consumer-facing companies already segment their customers along behavioral and psychographic dimensions. Big Beer Company knows how different a loyal, light-beer-drinking “fun lover” is from a trendsetting “craft lover” who likes new music and tries new foods frequently. The difference is that now they can find those people online, across all of their devices.

The magic of data management, however, is not just onboarding offline identities to the addressable media space. Think about how those segments were created. Basically, an army of consultants and marketers took loads of panel-based market data and gut instincts and divided their audience into a few dozen broad segments.

There’s nothing wrong with that. Marketers were working with the most, and best, data available. Those concepts around segmentation were taken to market, where loads of media dollars were applied to find those audiences. Performance data was collected and segments refined over time, based on the results.

In the linear world, those segments are applied to demographics, where loose approximations are made based on television and radio audiences. It’s crude, but the awesome reach power of broadcast media and friendly CPMs somewhat obviate the need for precision.

In digital, those segments find closer approximation with third-party data, similar to Nielsen Prizm segments and the like. These approximations are sharper, but in the online world, precision means more data expense and less reach, so the habit has been to translate offline segments into broader demographic buckets, such as “men who like sports.”

What if, instead of guessing which online attributes approximated the ideal audience and creating segments from a little bit of data and lot of gut instinct, marketers could look at all of the data at once to see what the important attributes were?

No human being can take the entirety of a website’s audience, which probably shares more than 100,000 granular data attributes, and decide what really matters. Does gender matter for the “Mom site”? Obviously. Having kids? Certainly. Those attributes are evident, and they’re probably shared widely across a great portion of the audience of Popular Mom Site.

But what really defines the special “momness” of the site that only an algorithm can see? Maybe there are key clusters of attributes among the most loyal readers that are the things really driving the engagement. Until you deploy a machine to analyze the entirety of the data and find out which specific attributes cluster together, you really can’t claim a full understanding of your audience.

It’s all about correlations. Of course, it’s pretty easy to find a correlation between only two distinct attributes, such as age and income. But think about having to do a multivariable correlation on hundreds of different attributes. Humans can’t do it. It takes a machine-learning algorithm to parse the data and find the unique clusters that form among a huge audience.
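To make the idea concrete, here is a minimal, brute-force sketch of finding attribute clusters in an audience. The users and attribute names are entirely invented, and real systems use distributed clustering algorithms rather than exhaustive counting:

```python
from collections import Counter
from itertools import combinations

# Toy audience: each user is a set of granular attributes.
# A real DMP would hold thousands of attributes per user.
users = [
    {"has_kids", "yoga", "crafts", "meal_planning"},
    {"has_kids", "yoga", "meal_planning", "gaming"},
    {"has_kids", "crafts", "meal_planning", "yoga"},
    {"gaming", "fitness", "finance"},
    {"gaming", "fitness", "finance", "nascar"},
]

def attribute_clusters(users, size=3, min_share=2):
    """Count how many users share each combination of `size` attributes."""
    counts = Counter()
    for attrs in users:
        for combo in combinations(sorted(attrs), size):
            counts[combo] += 1
    # Keep only combinations shared by at least `min_share` users,
    # most widely shared first.
    return [(combo, n) for combo, n in counts.most_common() if n >= min_share]

clusters = attribute_clusters(users)
```

Even on this toy data the combinatorics hint at the problem: with 100,000 attributes, the number of possible combinations is astronomically large, which is why the article argues this is machine work.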

Welcome to machine-discovered segmentation.

Machines can quickly look across the entirety of a specific audience and figure out how many people share the same attributes. Any time folks cluster together around more than five or six specific data attributes, you arguably have struck gold.

Say I’m a carmaker that learned that some of my sedan buyers were men who love NASCAR. But I also discovered that those NASCAR dads loved fitness and gaming, and I found a cluster of single guys who just graduated college and work in finance. Now, instead of guessing who is buying my car, I can let an algorithm create segments from the top 20 clusters, and I can start finding people predisposed to buy right away.

This trend is just starting to happen in both publishing and marketing, and it has been made available thanks to the wider adoption of real big-data technologies, such as Hadoop, MapReduce and Spark.

This also opens up a larger conversation about data. If I can look at all of my data for segmentation, is there really anything off the table?

Using New Kinds Of Data To Drive Addressable Marketing

That’s an interesting question. Take the company that’s manufacturing coffee machines for home use. Its loyal customer base buys a machine every five years or so and brews many pods every day.

The problem is that the manufacturer has no clue what the consumer is doing with the machine unless that machine is data-enabled. If a small chip enabled it to connect to the Internet and share data about what was brewed and when, the manufacturer would know everything its customers do with the machine.

Would it be helpful to know that a customer drank Folgers in the morning, Starbucks in the afternoon and Twinings Tea at night? I might want to send the family that brews 200 pods of coffee every month a brand-new machine after a few years for free and offer the lighter-category customers a discount on a new machine.

Moreover, now I can tell Folgers exactly who is brewing their coffee, who drinks how much and how often. I’m no longer blind to customers who buy pods at the supermarket – I actually have hugely valuable insights to share with manufacturers whose products create an ecosystem around my company. That’s possible with real big-data technology that collects and stores highly granular device data.

Marketers are embracing big-data technology, both for segmentation and to go beyond the cookie by using real-world data from the Internet of Things to build audiences.

It’s creating somewhat of a “cluster” for companies that are stuck in 2015.

Marketers have always craved access to quality audience at scale. That was once as easy as scheduling buys on the top three broadcast networks and buying full-page ads in national newspapers. Today, the world is more complicated, with attention shifting into a splintered digital universe of thousands of channels across multiple media types.

Ad tech companies have tried to corral a massively expanding world of inventory in ad exchanges, along with the means to bid inside them. This “programmatic” world of inventory procurement is deeply flawed, yet still the best thing we have at the moment.

It’s flawed because it mostly offers access to commoditized “display” ad units of dubious value and struggles to deliver real audiences, rather than robots. But it’s also good because we have taken the first steps past a ridiculous paradigm of buying media through relationships and fax machines, while starting to bring an analytical discipline to media investment that is based on measurement.

So, as we sled down the slope of the programmatic buying Hype Cycle, we are starting to see some new trends in inventory procurement – namely, a strategy that involves replacing some or all of the licensed programmatic architecture, as well as a growing reliance on one’s own data.

But first, before we get into the nuts and bolts of how that works, some history:

The Monster We Created

After convincing ourselves that the direct model, where we would call an ad rep, lacked scalability, we have put a lot of distance between marketers and their desired audiences.

Imagine I am a cereal manufacturer and have discovered through media mix modeling that digital moms on Meredith sites drive a lot of offline purchases. They are the “household CEOs” that drive grocery store purchasing, try new things and are influential among their peer group, in terms of recommending new products. In today’s new media procurement paradigm, there are many “friends” standing between my target and me:

Media agency: This is a must-have, unless marketers want to add another 100 people to their headcount with an expertise in media, but this adds 5% to 10% in costs to media buys.

Trading desk: Although many marketers are starting to take this functionality in-house, whether you trade internally or leverage an agency trading desk, you can expect 10% to 15% of media costs to go to the personnel needed to run this type of operation.

Demand-side platform (DSP): Don’t forget about the technology. A 15% bid reduction fee is usually required to leverage the smart tools necessary to find your inventory at scale across exchanges.

Private marketplace: But wait! We use private marketplaces to make exclusive deals among a small pool of preferred vendors. Yes, but they operate inside DSPs and carry transactional fees that can add between 5% and 10% extra.

Third-party data: You can’t target effectively without adding a nice layer of audience data on your buy, but expect to pay at least $1 CPM for the most basic demographic targeting – a significant percentage of cost even on premium buys.

Exchanges: Maybe you pay for this via your DSP, but someone is paying for a seat on an ad exchange and that cost is passed through a provider, which can add another several percentage points.

Supply-side platform (SSP): It’s not just the demand side that needs to leverage expensive technology to navigate the new world of digital media. Publishers pay up to 15% in fees to deploy SSPs, a smart inventory management technology to help them manage their “daisy chain” of networks and channel sales providers to get the best yield. This is baked into the media cost and passed along to the advertiser.

Ad server: Finally, the publisher pays a fee to get the ad delivered to the site. It is a somewhat small price, but one that is passed along to the advertiser, usually baked into the media cost.

This is essentially the middle of a crowded LUMAscape, a bunch of different disintermediating technologies that stand between an advertiser and the publisher. Marketers pay for everything I just described. They may not license the publisher’s SSP directly, but they are subsidizing it. After running this gauntlet, marketers with $10 to spend on “cereal moms” end up with much less than half of it in media value – the amount the publisher receives after the disintermediation takes place. This can be anywhere from 10% to 40% of the working media spend.
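The arithmetic of the gauntlet is worth working through. The sketch below multiplies out the fee stack; every rate is an illustrative midpoint of the ranges quoted above, not an actual rate card:

```python
# Hypothetical mid-range take for each intermediary, as a fraction
# of the media dollar passing through it.
fees = {
    "media agency": 0.075,        # 5-10%
    "trading desk": 0.125,        # 10-15%
    "DSP": 0.15,                  # ~15%
    "private marketplace": 0.075, # 5-10%
    "exchange": 0.03,             # a few points
    "SSP": 0.15,                  # up to 15%, baked into media cost
    "ad server": 0.02,            # small, also passed along
}

spend = 10.00
remaining = spend
for name, take in fees.items():
    remaining *= 1 - take  # each layer keeps its cut of what passes through

# Third-party data is priced as a flat $1 CPM-style cost, not a percentage.
data_cost = 1.00
remaining -= data_cost
```

With these assumed rates, roughly $4.14 of the original $10 survives as media value, which is consistent with the article’s claim that publishers end up with well under half of the working spend.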

That’s probably the biggest problem in ad tech right now.

We’ve essentially created a layer of technology so gigantic between marketers and audiences that 60% to 70% of media investment dollars end up in venture-funded technology companies’ hands, rather than with the media owner creating the perceived value. How do we change that paradigm?

Leapfrogging the Middleware

Data management technology is increasingly replacing some of the middleware in this procurement equation, effectively writing the third chapter in the saga we know as programmatic direct.

Here is a bit of background.

What I call “Programmatic Direct 1.0” was the short-lived period in which companies leveraging the DoubleClick for Publishers (DFP) ad-serving API built static marketplaces of premium inventory.

For example, a premium publisher like Forbes might decide to place a chunk of 500,000 home page impressions in a marketplace at a $15 CPM. Buyers could go into an interface, transact directly with the publisher and secure the inventory. The problem was that inventory owners had a hard time valuing their future inventory, and buyers weren’t keen to log into yet another platform to buy media. This phase effectively ended with the Rubicon Project buying several leaders in the space, ShinyAds and iSocket, and AdSlot taking over workflow automation software provider Facilitate Media. Suddenly, “programmatic direct” platforms started to live inside systems where media planners actually bought things.

Programmatic direct’s second act (2.0) is prevalent today. Companies use deal IDs or build PMPs within real-time systems and exchanges to have more control over procurement than what is available in an auction environment. Sellers can set prices and buyers can secure rights to inventory at a set, transparent cost. This works pretty well, but it comes with the same gigantic stack of providers as before and includes additional transaction fees. This is akin to making a deal to buy a house directly from the owner, but agreeing to pay the real estate broker fee anyway. The thing about programmatic direct transactions is that they are fundamentally different from RTB: they don’t have to take place in “real time,” nor do they involve bidding. A brand-new set of pipes is required.

“Programmatic direct 3.0” – or whatever we decide to call it – looks a bit different. Let’s say the big cereal company uses a data-management platform (DMP) to collect its first-party data and creates segments of users from both offline user attributes and page-level attributes from site visitation behavior. The marketers have created a universal ID (UID) for every user. Let’s imagine they discovered 200,000 were females, 24 to 40 years old, living in two-child households with income greater than $150,000 and interested in health and fitness. Great.

Now imagine that a huge women’s interest site deployed its own first-party DMP and collected similar attributes about its users, who were assigned UIDs. If the marketer and publisher have the same enterprise data architecture, they could match their users, make a deal and discover that there’s an overlap of 125,000 users on the site. Maybe the marketer agrees to spend $7 CPM to target those users, along with users who are statistically similar, every time they are seen on the site in November.
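Mechanically, the match step is just an intersection of the two parties’ ID spaces. A minimal sketch, with made-up UID sets sized to reproduce the 125,000-user overlap described above:

```python
# Hypothetical UID sets held by the marketer's and publisher's DMPs.
# Real UIDs would be hashed identifiers, not sequential strings.
marketer_segment = {f"uid-{i}" for i in range(200_000)}          # 200k women 24-40
publisher_users = {f"uid-{i}" for i in range(75_000, 400_000)}   # the site's audience

overlap = marketer_segment & publisher_users  # matched, addressable users

cpm = 7.00
# Cost to serve one impression to every matched user at the agreed $7 CPM.
cost = len(overlap) / 1000 * cpm
```

Because both sides already hold the IDs, this deal needs no trading desk, DSP, third-party data or SSP in the middle, which is the whole point of the 3.0 model.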

The DMP can push that segment directly into the publisher’s DFP. No trading desk fees, DSP fees, third-party data costs or SSPs involved. The same is true for a variety of companies that have built header bidding solutions, although they see less data than first-party DMPs.

With this 3.0 approach, most of the marketer’s $7 is spent on media, rather than a basket of technologies, and the publisher gets to keep quite a bit of that revenue.

Savvy marketers use data mining, data visualization, text analytics, and forecasting to make more effective decisions and reach customers. But the savviest among them are innovating with fresh types of data—and attracting new business as a result.

Sensing Opportunity for an Upsell
“The data that devices collect are going to add all kinds of context to advertising,” says Chris O’Hara, cofounder of Bionic Advertising Systems, a digital advertising service. Marketers can know exactly where potential consumers are, the current time and temperature, and which of the consumer’s friends are nearby.

When might such factors come into play? O’Hara gives the example of sensors in grocery stores that can detect the items shoppers take off the shelves. That data, run through huge databases, enables marketers to instantly suggest—via tech such as smartphones or electronic shelf displays—other products for shoppers to add to their carts.

Adding Location
More and more, geography will help marketers zero in on demographics, says Kevin Lee, CEO of online advertising and marketing firm Didit. “Geotargeting is a great way to market not only at the hyper-local business level but also for national marketers looking to target specific demographic and psychographic groups,” he says.

Marketers have experimented for years with mobile geolocation-centered campaigns, primarily using couponing. However, since research shows that a whopping 72% of consumers say they’ll respond to sales calls-to-action within sight of the retailer, there are plenty more location-based opportunities that encourage customer loyalty, such as special gifts, alerts to flash sales and early access.

Cooking a Data Stew
With the evolution in data analytics, marketers can now mix different types of data to glean new insights. David L. Smith, CEO of media agency Mediasmith, sees this as the coming of age of the data management platform: tools that integrate data from several sources, including customer information, website data and digital advertising input. All of it serves to improve messaging.

“Messages that come from ad campaigns, direct mail and other communications to the consumer can be coordinated,” Smith says, “so that the consumer is always getting relevant information—not just standard communications.”

Collecting Data—While Respecting Customer Privacy and Security
All these data-driven trends can bring benefits to the consumer and improve marketing efficiency. But they also raise privacy and security issues—to which marketers are giving serious attention. “Privacy is going to remain a constant fear in the consumer’s heart,” says Michael Hardin, dean at the University of Alabama’s Culverhouse College of Commerce. “A lot of companies are going to be struggling mightily to deal with that.”

Smart marketers will learn how to walk this fine line and mine significant value from relatively little personal information, says O’Hara. One company strikes this balance with one of its products, an activity-tracker wristband: with just a little personal data input from the user, the wristband delivers athletic performance feedback to the wearer.

These new technologies are changing the world of marketing—especially given the speed at which data are arriving, says Hardin. Shrewd marketers are contemplating how best to react in a way that benefits their companies.

Christopher Skinner sold a search marketing company called Performics to Google as part of its DoubleClick acquisition. He now runs a software company called MakeBuzz that is on track to spend almost $100 million in media this year. Clients include Google, Target, and Oreck.

Its premise is simple: People buy the stuff their neighbors buy. By starting wide with media that builds a brand halo and then optimizing into specific geographic areas where buyers are found, MakeBuzz optimizes against profit only.

Most marketers are obsessed with reaching individuals, but Skinner’s concept is almost contrarian: Spend more media up front, target by neighborhood and city, and be completely media-agnostic. The MakeBuzz code guides the optimization process until profitability KPIs are met. I recently sat down with Skinner to learn more.

The CMO Site: What’s the big idea here?

Christopher Skinner: Most people online today can measure a brand, but they can’t grow it. The methods to measure are not the same as those used to grow. You need a different framework and nobody is talking about that online.

Digital media agencies today are being handed money — money from traditional budgets — and asked to perform and hit the business targets, but they don’t know how, because they’ve lived inside the efficiency world for so long. It relates to neo-classical economic thinking: what you can’t measure, you just ignore.

On average we increase media spend by six times or more because we install a framework and technology that justifies the complete customer journey. We tie marketing to the economics of the business.

The CMO Site: Is profit optimization real, or are you just adding some process to what should be the CMO’s primary KPI?

Skinner: Both. It’s real and it is a formalized process. The software shows you how to tip the scales in favor of revenue by spending the right amount on media directed to the right group of customers. It helps you achieve maximum profitability on a market-by-market basis.

The CMO Site: You take a rather contrarian view. Most folks are buying audience by the impression, but you carpet-bomb geo-targeted areas with impressions. Which method is right? Can they be used together?

Skinner: Hyper-audience targeting based on cookies will deliver incredibly efficient sales, but you’re not going to see massive volume from this. You’re not going to move the needle on the business. I wouldn’t call what we do “carpet-bombing.” We’re delivering a large volume of impressions to areas that have a reasonable volume and high density of the target customer. We are looking at real social circles and matching media to these audiences, down to small pockets when needed. This is going to get you a little less efficiency but a lot more sales — a lot more profit volume. And isn’t that what matters?

The CMO Site: So, if I find the right neighborhood for a certain type of vehicle, I should just buy lookalike neighborhoods. How does that scale?

Skinner: Instead of drawing circles around virtual groups online, we draw the circle around concentrated groups of people that we know are likely to be interested in what we help market. And the fact that they are influenced by each other — they see what their neighbors wear and drive and what kinds of phones they use — means they are more likely to be influenced by media that reinforces and re-suggests those choices.

Scalability is about testing your way in. Identifying high-value areas, testing media to discover your profitability, then scaling to similar areas.

The CMO Site: What kind of media works best? It would seem that the more granular the geo-target, the better the performance.

Skinner: You need media that addresses the entire customer journey, from early awareness branding media to direct response purchase phase media. Most businesses are fine with the direct response online media, but they are missing brand-creating media. Our methods do a really good job of justifying media that helps drive direct response. The earlier phase media tends to be display, mobile, and video, but can also be search (SEM).

As far as geo-targeting granularity goes, as long as the density of our target segment is good in each area and we’re hitting them with the right media plan, it works great. Think of each step as a filter: 1) Choose the right segment, filtering out all the less valuable potential customers; 2) Choose an area where they live in high density and volume, filtering out the neighborhoods they don’t live in; and 3) Pick the media they’re likely to be engaged with, filtering out wasted impressions. You can’t pull this off without a platform, and it will not work unless the manager has a fast and simple way to buy in.
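The three-filter idea Skinner describes can be sketched as a simple pipeline. The neighborhoods, densities and thresholds below are all invented for illustration; a real platform would score thousands of areas against a modeled segment:

```python
# Toy data: candidate areas with target-segment density and household counts.
neighborhoods = [
    {"name": "Maple Park", "segment_density": 0.22, "households": 4_000},
    {"name": "Riverside",  "segment_density": 0.08, "households": 9_000},
    {"name": "Oak Hill",   "segment_density": 0.31, "households": 2_500},
    {"name": "Downtown",   "segment_density": 0.15, "households": 12_000},
]

def pick_areas(areas, min_density=0.15, min_target_households=500):
    """Filter step 2: keep areas with both high density and real volume."""
    keep = []
    for a in areas:
        target_households = a["segment_density"] * a["households"]
        if a["segment_density"] >= min_density and target_households >= min_target_households:
            keep.append(a["name"])
    return keep

selected = pick_areas(neighborhoods)
```

Riverside drops out despite its size (density too low), while the remaining areas offer both density and volume; steps 1 and 3 would apply the same filter logic to segments and media channels.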

A Google Hangout with Eric Picard, CEO of RareCrowds; Chris Scoggins of Datalogix; Andy Monfried, CEO of Lotame; and Chris O’Hara, author of Best Practices in Data Management. Hosted by Stefan Tornquist of EConsultancy.