The Problem With Predictive Analytics

Let's pretend you need to close 30 more customers to hit your sales target.

Luckily, you happen to have a whitepaper with a 30% customer close rate. You might assume that if you get 100 more people to download that whitepaper, you'll get 30 more customers and everything will be gravy -- because based on its past performance, that's the outcome you can expect.

If you're thinking like that, you're using predictive analytics. Predictive analytics is a way of making decisions by using data on past performance to forecast future performance. If you've ever worked on the actuarial side of insurance, in financial services, or even in healthcare, there's a good chance you've come across it before.

And there's another industry that's making use of it more and more these days: marketing. The problem is, we may be using predictive analytics all wrong.

What's the Problem?

Well, the first problem is with marketing itself -- it isn't a hard science, so outcomes can fall well outside the historical range you'd expect. Take that whitepaper example. The assumption that we could close 30 customers from it could be a terrible one -- something could change in that population that we didn't anticipate, causing us to close fewer (or more) than the anticipated 30.
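To make that concrete, here's a minimal sketch using the hypothetical numbers from the whitepaper example (a 30% close rate and 100 downloads). Even if the historical rate is exactly right, treating each download as an independent coin flip shows how much the outcome can swing:

```python
# Sanity check on the whitepaper assumption: even if the 30% close
# rate is exactly right, 100 downloads won't reliably yield 30
# customers. Model each download as an independent trial with p = 0.3
# and look at the exact binomial distribution.
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.30  # downloads, historical close rate

p_at_least_30 = sum(binom_pmf(n, k, p) for k in range(30, n + 1))
p_fewer_than_25 = sum(binom_pmf(n, k, p) for k in range(0, 25))

print(f"P(30+ customers): {p_at_least_30:.2f}")
print(f"P(<25 customers): {p_fewer_than_25:.2f}")
```

Even with the close rate holding perfectly, you hit the 30-customer target only a little more than half the time, and there's a meaningful chance of landing well under it -- and that's before any genuine unknowns enter the picture.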

Here's a real-world example of predictive analytics gone awry that you might recognize. Plenty of people used predictive analytics as investment tools to build mortgage-backed securities. Using traditional statistics developed for the hard sciences, they predicted a risk of default, and their models told them it was statistically impossible for everyone to default on their mortgages at the same time. But those models failed to take certain elements of the housing market into account. So what happened? Everyone depended on the models being correct, but the unknowns they missed broke them. The result was a wave of mortgage defaults and a huge economic collapse in 2008.

Accounting for Unknowns

You may say, "Well, I'll just have to take the unknowns into account then." But the thing about unknowns is just that -- they're unknown. You can try to anticipate them, but there's no way to account for all of them. This is where Black Swan Theory comes in.

You see, people actually started to try to account for those variables by building models for them -- and then the models got so complicated that people were sure they had to be correct, because ... well ... because there were a lot of models and variables and they were really complicated. They had to be right ... right?

Well, a lot of the things people build into those models are known unknowns. But the reality of unknowns is really more like Murphy's Law: the day of a big meeting, you know something bad will happen -- you just don't know what it will be. (Thanks for the analogy, Katie Burke.)

Let's take swans as an example. For centuries, Europeans presumed all swans were white, because every documented swan had white feathers. Then in 1697, a Dutch explorer discovered black swans in Australia -- in other words, what was once considered impossible was disproven by an unknown.

That's Black Swan Theory in a nutshell, a term coined by Nassim Nicholas Taleb. It's the things you didn't even know about that can change your anticipated outcome. So if you build a model that tries to account for black swans -- well, you can't. That's the point. You can't anticipate every unknown, and you have to account for that when you're leaning on predictive analytics.

The problem is, not enough people do. And that's the problem with predictive analytics in marketing.

Predictive Analytics Still Have Their Place

I've made predictive analytics sound pretty dubious so far, but we shouldn't throw the baby out with the bathwater. Predictive analytics goes wrong when you build up complex models and then base your marketing decisions on them with the expectation of 100% accuracy. But when I asked around, other marketers said there's still a lot of cool stuff you can do with it.

Matt Wainwright, Director of Marketing at Attend.com, says he uses predictive analytics to determine which pieces of content he should repromote. "What I don't do is make guarantees of specific results based on numbers like a 20% close rate vs. a 5% close rate. What I do do is make an educated decision that a close rate of 20% is probably better than a close rate of 5%, even though there might still be some things that could impact that close rate that I can't account for."

In other words, you can't divorce yourself from common sense. That's why probabilistic analytics are a better bet than predictive analytics: account for the variables you can anticipate, use that data to make smart decisions, but don't guarantee specific results.
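Here's a sketch of that probabilistic framing. The lead counts and close rates are made up for illustration, and the normal-approximation confidence interval is just one simple choice -- the point is reporting a range instead of a guarantee:

```python
# Probabilistic framing: report a plausible range for a close rate
# instead of guaranteeing a specific result. Sample sizes and rates
# below are hypothetical.
from math import sqrt

def close_rate_interval(closes, leads, z=1.96):
    """Approximate 95% interval for a close rate (normal approximation)."""
    p = closes / leads
    margin = z * sqrt(p * (1 - p) / leads)
    return max(0.0, p - margin), min(1.0, p + margin)

# Two hypothetical content offers
lo_a, hi_a = close_rate_interval(closes=20, leads=100)  # observed 20%
print(f"Offer A: 20% close rate, plausible range {lo_a:.0%}-{hi_a:.0%}")

lo_b, hi_b = close_rate_interval(closes=5, leads=100)   # observed 5%
print(f"Offer B: 5% close rate, plausible range {lo_b:.0%}-{hi_b:.0%}")
```

Because the two ranges don't overlap, the 20% offer is probably genuinely better than the 5% one -- which supports an educated decision, like Matt's, without promising exactly 20 closes per 100 leads.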

If you can't resist the urge to delve into predictive analytics and make concrete predictions based on past performance, do so responsibly. Predictive analytics shouldn't be used for big gambles -- only low-stakes ones. Luckily, marketing usually plays for lower stakes than many of the other industries where you'll encounter this approach. Just remember that while predictive analytics is undoubtedly helpful, once you start treating its outputs as actual predictions, you might as well call them prophecy.