This headline tested well and generated lots of clicks. Ergo, all readers must be happy. Right?

The improvements that analytics bring to a process can be downright addictive. Collect data; scrutinize model; refine process. Lather, rinse, repeat. With each iteration, metrics creep upward and results improve. Unfortunately, improvements may get smaller and smaller, while each incremental gain gets increasingly difficult to achieve — and the result may be locally optimal for the tightly defined problem, but not globally optimal for the larger managerial problem.
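The local-versus-global distinction is easy to see in a toy hill-climbing sketch (illustrative only; the function, names, and numbers here are invented for this example, not drawn from the article):

```python
# Illustrative sketch: greedy hill climbing converges to the nearest
# peak, which may not be the best one overall.

def hill_climb(f, x, step=1):
    """Greedily move to a neighbor as long as the score improves."""
    while True:
        best = max([x - step, x, x + step], key=f)
        if best == x:
            return x  # no neighbor is better: we are at a local optimum
        x = best

def score(x):
    """A made-up landscape: a small peak at x=2, a tall peak at x=20."""
    peaks = {2: 5, 20: 20}
    return max(height - abs(x - p) for p, height in peaks.items())

local = hill_climb(score, 0)   # starts near the small hill, stops at 2
globl = hill_climb(score, 8)   # starts near the big hill, reaches 20
print(local, globl)  # 2 20
```

Starting from x=0, the climber settles on the nearby hill (score 5) and never sees the far better peak (score 20); only a different starting point, i.e., reframing the problem, reaches it.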

It’s true that variations of tantalizing headlines may generate more impressions for an article. Or that ever-more-refined images and copy may lead to more clicks on an ad. In each case, data collection and analytical models can improve a metric. That data can lead to deep understanding of what works best in a narrowly defined problem.

However, improving an unambiguous metric is rarely the overall goal. In the case of articles, impressions are an imperfect measure of the number of people who actually read and derive value from an article. For ads, the number of clicks is a rough indicator of the potential sales that may come from the ad. Neither metric is the true goal.

Metrics can help measure and improve progress towards an overall goal and are critically important to the use of analytics. But intense focus on a narrow measure can address only the well-specified puzzle — a myopic view of the problem.

We are selling analytics short if we stop there. The data and methods embedded in analytical approaches offer more. They offer the ability to explore unexpected relationships that may not only solve the immediate puzzle and climb to the top of a local hill, but also find unforeseen options.

Given the hype around analytics, some disillusionment may be inevitable. With gains coming more slowly or requiring more effort, executives may suspect that they have reached a local optimum. Or they may desire more from their analytical investments … or look for something completely different. But how?

Step back from the specific problem: The blinders and focus that work well to optimize the details of a problem may prevent managers from seeing other options. Sometimes a third party can help. Epsilon, the global marketing company, recently provided that perspective for a Texas utility. The advertising campaign Epsilon came up with, “Buy less of what we sell,” was unusual but led the utility to “increase the efficiency of acquisition and reduce churn” — and as a result, the utility has a greater number of customers who each use less energy. But Epsilon didn’t just suggest the idea; it followed the suggestion with a pilot to see whether a completely different campaign would work better than incrementally improving the current one.

Test “known” assumptions: Defining a business problem invariably involves making abstractions and assumptions, such as how potential customers behave, what makes up the sample being studied, or what options should be emphasized. To explore new solutions, organizations must be willing to test these assumptions. After analyzing data collected from its Nike+ sensors, Nike found that people tend to work out in the evening (after work?) and set new goals in January (New Year’s resolutions?). Neither conclusion is surprising in the least. But was it worth testing? Yes. Not only was it a great way to test Nike’s new sensor, but it also tested assumptions about customers’ behavior. What if the assumptions hadn’t held? The findings could have led to new markets or new ways of marketing, and Nike would have been at the forefront. The key point is that by testing its assumptions, Nike explored a new possibility. Fast-forward a few years: Building on the Nike+ data and other exploratory initiatives, Nike now lets designers improve the sustainability of materials. The prior assumption was that manufacturing and production efforts should be the focus of sustainability improvements. Data and exploratory analysis helped Nike find another, possibly more rewarding, optimum to pursue.

Build a foundation for easy exploration: If every exploratory test requires a large investment of resources, people will be less likely to explore and less forgiving of analysis that doesn’t yield a eureka moment. To counter this, organizations need to make investments in data, systems, and governance that promote discovery. The Million Veteran Program at the U.S. Department of Veterans Affairs is an example: it creates a massive dataset of health information. While the program has numerous specific short-term goals, a large potential benefit lies in answering questions that haven’t yet occurred to people. That is where breakthroughs, not just incremental health care improvements, can emerge.

Be open to exploration: Organizations and individuals can both be myopic. Our forthcoming annual report on data and analytics finds that organizations that gain competitive advantage from analytics, and that use analytics to innovate, embrace exploratory activities.

There are risks inherent in exploratory activities. Some will not work out. Each attempt to explore uses resources that could also be used to make incremental improvements; there is a tradeoff between seeking incremental and breakthrough results.

Neither approach makes sense alone. Executives who desire bigger breakthroughs will need to encourage exploration, even though it will mean some failures and investment without immediate payback. And analysts will need to think more broadly about the problem, moving from incremental improvements toward alternatives that may have more long-term potential.

About the Author

Sam Ransbotham is an associate professor of information systems at the Carroll School of Management at Boston College and the MIT Sloan Management Review Guest Editor for the Data and Analytics Big Idea Initiative. He can be reached at sam.ransbotham@bc.edu and on Twitter at @ransbotham.
