Hoover Dam created Lake Mead 80 years ago, capable of storing over 8 trillion gallons of water. The 5 years it took to build the dam were a safe bet: water hasn’t changed terribly much over the eons, and reservoirs have been around for over 5,000 years. The term ‘Big Data’, per a NY Times Bits column, dates from the late 1990s, and the underlying Hadoop framework was only invented in 2005. It’s a far safer bet to invest heavily in keeping water in a central place than it is to make your own Lake Mead of Data.

Still, the insurance industry seems to be going the Lake Mead route. All too often, a Big Data strategy is a multi-year push to shove every piece of data a company can get into an uber data warehouse, expecting some Big Data analytics tool will come along and reveal previously unknown relationships. Will this mass of data take on a purpose of its own, requiring constant re-alignment to your business goals (i.e., become too inwardly focused)? Or will someone tell you in a year, “you never asked for it, so we don’t have it, and quite frankly, we can’t even store it in our database”? Can you have too much data and not enough insights? Does the past axiomatically predict the future, as the predictive analytics vendors claim? Ironically, Lake Mead’s water level is falling due to unforeseen consumption and climate changes. Pouring tons of concrete does not guarantee continuing viability.

The NY Times ran an Op-Ed on 4/7 from 2 NYU professors, Gary Marcus and Ernest Davis, highlighting the hazards of relying too heavily on insights from number crunching. Not the least of these, and the most relevant to the ‘Big Data needs the Big Warehouse’ approach, is:

‘If you look 100 times for correlations between 2 variables, you risk finding, purely by chance, about 5 bogus correlations that appear statistically significant – even though there is no actual meaningful connection between the variables’

My 2 favorite examples in the piece: from 1998 to 2007, increased Autism diagnoses correlated extremely strongly with increased sales of organic foods, and from 2006 to 2011, the murder rate and the market share of Internet Explorer both fell sharply.
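The arithmetic behind the professors’ warning is easy to demonstrate yourself. Here is a minimal Python sketch (my own simulation setup, not from the op-ed) that runs 100 searches for a correlation between two variables that are, by construction, completely unrelated, and counts how many clear the usual p < 0.05 bar anyway:

```python
import random

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N, TRIALS = 30, 100
# For n = 30 paired observations, |r| > 0.361 is "statistically
# significant" at the conventional two-tailed p < 0.05 level.
CRITICAL_R = 0.361

bogus = 0
for _ in range(TRIALS):
    # Two independent random series: any correlation is pure chance.
    xs = [random.gauss(0, 1) for _ in range(N)]
    ys = [random.gauss(0, 1) for _ in range(N)]
    if abs(pearson_r(xs, ys)) > CRITICAL_R:
        bogus += 1

print(f"{bogus} of {TRIALS} unrelated pairs look 'significant'")
```

On average the count comes out near 5, exactly as the op-ed warns: the 5% false-positive rate is baked into the significance threshold, so enough fishing guarantees a catch.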

Why are you being pushed into the biggest possible Big Data implementation? Probably because, as the bank robber Willie Sutton supposedly said, “That’s where the money is.” It’s a combination of IT responding to Board pressure to show business benefits that justify budgets, and vendors in a feeding frenzy before this, too, becomes yesterday’s hype. Tech industry reports show BI revenues growing to over $50B by 2017. Who wouldn’t like a piece of that? Consulting companies will tell you it’s hard and takes a year, if not years. And if you implement Big Data the usual way, it is hard: there aren’t enough Data Scientists to make sense of all the information in the universe, tools with sex appeal but without insurance content appear every day via email announcements, and budgets are exceeded with little to show for it.

Most of today’s Big Data oriented Data Warehouses, and especially the underlying infrastructures, aren’t going to handle the Internet of Everything exceptionally well, or at all. That will become apparent when telematics-driven usage-based pricing becomes standard in just a few years, up from today’s 2% market share. Most companies are just starting to think through the Big Data implications of an Internet of Everything based insurance industry, where Google states its autonomous vehicle generates about 1 GB of data for every second of driving time, and many newer cars generate approximately 100 MB of data per driven second. Strip away irrelevant elements such as tire pressure, RPM, etc., and even if you cut it by 85%, the volume, multiplied by just the 250M cars currently registered in the US, is staggering.
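“Staggering” is easy to verify on the back of an envelope. A quick sketch using the figures above, plus one assumption of my own (an average of roughly one hour of driving per car per day):

```python
# Back-of-envelope data volume from connected cars, using the
# figures in the paragraph above.
MB_PER_SECOND = 100           # newer cars' raw output per driven second
RETAINED_FRACTION = 0.15      # after discarding ~85% as irrelevant
CARS_US = 250_000_000         # cars currently registered in the US
DRIVE_SECONDS_PER_DAY = 3600  # assumption: ~1 hour of driving per car per day

mb_per_car_per_day = MB_PER_SECOND * RETAINED_FRACTION * DRIVE_SECONDS_PER_DAY
total_pb_per_day = mb_per_car_per_day * CARS_US / 1e9  # 1 PB = 1e9 MB

print(f"{mb_per_car_per_day / 1000:.0f} GB per car per day")
print(f"{total_pb_per_day:,.0f} PB per day across the US fleet")
```

Under these assumptions, that is roughly 54 GB per car per day, on the order of 13,500 PB daily across the fleet. Swap in your own driving-time estimate; the total stays staggering.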

Before you build your own Lake Mead of Data, short-term, widely deployed, business-function-specific BI solutions may be more useful right now, until the collective technology, automotive, wireless data and insurance industries think through implementation and operational realities. Here is an analogy: I live near a congested and dangerous State highway, its concrete poured in the 1930s, designed and implemented without extrapolating to today’s volumes. With development on both sides of the highway, it cannot be adapted to current, let alone projected, volumes. We have learned to drive sluggishly and to hold our breath when we approach an entrance.

Here are 2 tips based on our experiences:

1 – Be audacious, think of Big Data as part of a Product Roadmap – start with today and think stages.

Blow right by “enhancements” or “incremental” improvements. As Ray Kurzweil said, “take 30 linear steps and you end up 30 paces away, but if you think exponentially, you wind up a billion steps away.” Think the uncomfortable:

“If I gave a really smart 20-year-old $10K, how would they affect my customer acquisition and retention process? What benefit justifies my Big Data spend if this college sophomore can disrupt me after dinner?”

Many Health insurers, for example, are in the early stages of revolutionizing their business through deeply integrated social apps, tying wearables to doctors to hospitals to patients to pricing.

Big Data will change insurance products from static entities into dynamic ones, where increased data and analytical capabilities will shorten product lifecycles to a year. Just as tech vendors think of their offerings over time via a phased Product Roadmap, insurers need to do likewise, with Big Data as simply an ingredient, one which will itself change over time.

How will customers use my product in their daily lives? How will new data sources and types define these new products? In 10 years or sooner, will we continue to be an insurance company with a Digital presence, or will we evolve into a tech-focused company, one of whose main revenue sources is insurance? Will pouring all this Big Data concrete today contribute to, or impede, future agility?

Big Data does not axiomatically require Big Money upfront – it needs Big Innovative Thought. “Talk is cheap, show me the code,” Linus Torvalds (the developer of Linux) said. “Data is everywhere, show me the future” is what we should be demanding.


Richard Eichen Bio

Richard Eichen specializes in turnarounds, project rescue and performance improvements in companies and operating units heavily dependent on delivery via technology, such as Insurers, banks, Asset Managers and SaaS, BPO and technology vendors. He is highly experienced in organizational diagnostics, reformulating culture and organizational structures, and Company and Product Roadmapping, and then executes on this new vision.
He is brought into fluid situations to restore health and momentum, often having to ‘tell truth to power’ to achieve results. Leveraging his Customer Experience focused process analysis experience, as well as his hands-on operating management experience, he executes dynamic transformative strategies, culture change, finding new revenue sources and creating sustainable operational improvements, often as part of reversing over-budget project trajectories and restoring Top Line Revenue and EBITDA growth. He also renegotiates license and maintenance agreements to reflect the new reality, and has been highly effective acting as a Trusted 3rd Party, operating a business during negotiations for a change in ownership.
Richard Eichen can be reached at richard.eichen@growroe.com