Tag: Analytics

I attended the February PDMA event hosted at Optum. The Minneapolis chapter had arranged both a tour of the Optum facility and a panel discussion on the intersection of Product Management, Product Development, and Data Analytics. The event started with a networking session while tours were conducted. The tour featured several customer showcase areas. The first was a large digital analytics command room, several stories tall and covered in monitors displaying real-time information on a vast amount of healthcare-related activity across the United States. This room can be used to monitor disease outbreaks and their spread, as well as the activities of healthcare providers, allowing Optum to actively support the healthcare system with early identification of trends and coordination of activities.

We then headed into a detailed analytics room that featured individual stations with large interactive touch screens. Our tour guides took us through numerous analytics scenarios with real-time drill-downs into trends, treatments, and member services, made possible by the depth of data they have been able to integrate and the capabilities they have built to explore and interact with it. This section of the tour concluded with a large surround-screen video experience about the future of healthcare.

The event continued with the panel discussion, featuring six panelists ranging from corporate leaders to consultants, with various backgrounds in the product and analytics spaces. With an audience of around 100 people, a quick poll showed a split between attendees on the product side and pure data scientists. The panel also talked briefly about the five eras of product development, broken out as follows:

1. Create a product in isolation and push it out through advertising

2. Customer focus groups

3. Lean / Design Thinking / Customer Discovery

4. Data Science

5. Now we need to integrate eras 3 and 4

It was highly stressed that many data projects fail, and the root cause is a failure to define up front what value you want to get out of the data. In other words, you have to define the questions you are trying to answer before getting lost in analysis. While data analysis can also surface anomalies and trends along the way, that should be secondary to understanding what you are trying to learn. The questions also help define the “right” data you want, versus getting overwhelmed trying to study “all” the data. In the end, you’re looking in the data for the problem your product or service can solve, not for the offering itself.

I had an interesting meeting with the MN Director of Innovation to continue our conversation around public data. In the meeting he shared several interesting stories about the power of crowdsourcing solutions with public data. A large East Coast metropolitan government had made a number of its data sources public through a reporting platform. The goal was to see what value could be created through partnerships and crowdsourced solutions. The initiative produced a wide variety of new and valuable insights by connecting data that was seemingly unrelated. One example combined public tree-trimming data with emergency response data: it turned out there was a direct correlation between tree trimming around intersections and the number of accidents. This informed the planning process around the importance and criticality of tree trimming at high-volume intersections. Two separate government agencies that hadn’t interacted before are now collaborating in new ways, simply by leveraging data they both already had. There was little cost in this discovery beyond leveraging the crowd’s creative energy.
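Mechanically, the tree-trimming example boils down to a simple join-and-correlate exercise over two public datasets. Here is a minimal sketch in Python; the datasets, keys, and numbers are all hypothetical stand-ins for whatever the city’s reporting platform actually exposes:

```python
# Hypothetical public datasets, both keyed by an intersection ID.
trimming = {"I-101": 12, "I-102": 2, "I-103": 9, "I-104": 1, "I-105": 7}   # trims per year
accidents = {"I-101": 3, "I-102": 14, "I-103": 5, "I-104": 18, "I-105": 6}  # accidents per year

# Join the two datasets on the shared key.
shared = sorted(set(trimming) & set(accidents))
x = [trimming[k] for k in shared]
y = [accidents[k] for k in shared]

# Pearson correlation, computed by hand to keep the sketch dependency-free.
n = len(shared)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = sum((a - mx) ** 2 for a in x) ** 0.5
sy = sum((b - my) ** 2 for b in y) ** 0.5
r = cov / (sx * sy)
print(f"correlation between trimming frequency and accidents: {r:.2f}")
```

With these toy numbers the correlation comes out strongly negative, which is the shape of insight the crowd found: intersections trimmed more often see fewer accidents.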

This example reminded me of a wide number of similar projects I have done in corporate America. I’ve worked with companies to identify categories of data that are better served by being made available on public APIs. Enlightened organizations have seen huge benefits from providing APIs and cultivating data marts across multiple internal data sources. Some of the opportunities they have been able to create include:

– Crowdsourced solutions that give new insights into their business

– New applications, both PC- and mobile-based, that serve their market or customers

– New strategic partnerships around data sharing. This opened a variety of new business opportunities to expand the data analytics capabilities of both companies by having data beyond each partner’s current operational data. It also put those companies into a new level of partnership: when insights were found, they could both respond to market opportunities in a joint venture.

– A new data source about who is consuming the public data and how. The insights and analytics on data consumption and usage have been of great strategic advantage to the providing companies, letting them proactively engage the consuming market by spotting opportunities or disruptors early in the process.

– Consumption data itself can become a new revenue business for data leaders and aggregators. It’s an interesting model: you’re selling consumption data and, in turn, tracking who consumes the consumption data to add to your data pool (recursive in nature).

– Companies that lead their industries in data sharing also tend to gain an advantage in setting up strategic relationships and ultimately become the brokers of data in their industries. As the broker of data, you become the aggregator and ultimately gain a larger industry view of how, and by whom, not only your data but all the other data sources you aggregate are being consumed.
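To make the recursive consumption-data model concrete, here is a minimal sketch of a broker that logs every fetch, so the access log itself becomes a new, sellable dataset, and selling it is itself logged. All names and data here are hypothetical:

```python
from collections import Counter
from datetime import datetime, timezone

class DataBroker:
    """Serves datasets and records who consumes what -- the access log
    itself becomes a new dataset (hence the recursion)."""

    def __init__(self, datasets):
        self.datasets = datasets
        self.access_log = []  # the derived "consumption" dataset

    def fetch(self, consumer, name):
        # Every access is recorded before the data is served.
        self.access_log.append({
            "consumer": consumer,
            "dataset": name,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return self.datasets[name]

    def consumption_report(self):
        # Aggregate the log: which consumers pull which datasets, how often.
        return Counter((e["consumer"], e["dataset"]) for e in self.access_log)

broker = DataBroker({"claims": ["...sample rows..."], "consumption": None})
broker.fetch("partner-a", "claims")
broker.fetch("partner-b", "claims")

# The recursive step: the consumption report becomes a dataset,
# and fetching it is logged like any other access.
broker.datasets["consumption"] = broker.consumption_report()
broker.fetch("partner-c", "consumption")
print(broker.consumption_report())
```

The design choice worth noting is that the log sits in front of every dataset uniformly, so the broker never has to special-case the derived data: consumption of consumption data accumulates in the same pool.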

Public data leadership is similar to a land grab: proactivity is the key. It’s never too early to lead, but for followers it will always be too late.

I had a follow-up to the Cluster Analysis kickoff hosted at the UMN in September 2014. Joining me on the call was the MN Director of Innovation, and we were exploring partnership opportunities around the collection of economic data in both the public data and innovation spaces. My particular interest was in capturing additional data around innovation centers and the full lifecycle of start-up maturation at a regional level. Through partnerships with centers and integration with public data sources, we could get a level deeper into the economic activity happening in the corporate and entrepreneurial arenas. By building on the standards already set forth in the cluster analysis, we hope to define the next level of data in this area so that all regions can capture data consistently for analysis. We are looking to develop analytics on the health and activity within clusters around the time, cost, and progress made by start-ups and corporate innovation initiatives. The innovation centers, where much of this activity is happening, provide a great base and can be one of the key sources of data for the overall model. I’ve been working with economists and other data analytics specialists to develop economic data models to this end and to identify the public, private, and NGO data partnerships that could prove valuable in an integrated data capture strategy.

I attended the 2014 Minneapolis CDO Executive Summit as a guest of the MN State CIO and Director of Innovation. The event featured an extensive lineup of speakers sharing their experiences establishing practices around data aggregation and analytics. The real-world perspectives from corporate professionals offered great insight into the challenges and levels of investment required to make these initiatives successful. As I’m currently working on extensive models around data and analytics for economic development and innovation metrics, I found the networking opportunities at the event invaluable; they have led to a number of working groups that have continued since the event.

The event resonated with the years of experience I spent on strategic initiatives with companies, specifically in building data analytics organizations and systems. Here are the top 10 considerations I constantly ran into:

These are new capabilities that have to be developed, not to be confused with existing IT data organizations that maintain operational systems. It takes more time, money, and commitment than most organizations realize, and it requires most areas of the company to participate, versus an isolated team approach.

Funding the practice will span not only technology but a wide range of skill sets: data architecture, system integration, analytics, and more. Many companies underestimate the amount of time and cost required for business subject matter experts to be involved and to participate in developing the insights that come from the analytics.

While existing system integration seems like a large task, there is a significant amount of information and metadata that your organization has NOT been capturing over the past decades. One mitigation approach is to find strategic partnerships whose data can be combined with yours to create data sets that are vastly more valuable than either organization’s alone and can cover historical data gaps.

Integration with public data sources is a vastly underutilized resource. Many state governments are working to improve systems, APIs, and crowdsourcing efforts to create more value out of this data. I have also seen companies carve out portions of internal data and create their own public shares. This has paid big dividends in terms of the crowdsourced solutions that have come out of that data and the strategic partnerships it has attracted.

An organization should not underestimate the value of analyzing data outside its core customer demographics. It might be the key to understanding what you don’t know about the market and customer needs.

Channel data is a frequent target for data collection investments, but many organizations fail to capture the data needed for cross-channel analytics.

Look past just the data in the channel to the value chain of systems, organizations, and partners that the channels trigger. Many organizations do not understand the operational load that channel activation puts on their own organizations, especially when adding new channels.

Partner and supply chain data is a great source for understanding the capabilities of your partners during times of crisis, economic instability, and market disruption. Look for those anomalies to understand how better to support the strengths and weaknesses in your own business ecosystem. Pilot additional partners to compare performance and capability variances.

While many companies focus heavily on business analytics, perhaps one of the biggest areas of corporate improvement comes from aggregating and analyzing the internal collaboration of systems and departments. I’ve seen great organizational strategies come from the study of internal communication, spending, budgeting, governance, project planning, and project outcome metrics, just to name a few. How good a dashboard do you have for watching organizational behavior and performance over time? What data are you not capturing about your own organization today?

One of the most frequent differences I found working across thousands of organizations was that companies didn’t really understand the maturity levels of their competition in this area. Most companies would attend a conference and come away feeling that everyone was struggling with similar problems. While this is likely true, they were missing the point that other organizations had already committed significant investment to the aggregation of data, even though they were still very immature in their capabilities to exploit it. In many cases they didn’t realize their competition had already gained years of data collection through new and strategic data partnerships while they hadn’t even begun. Most fail to consider the value of time in data collection: it is hard to go back and get the data you failed to capture, and you’re falling behind every day that passes.

In future blogs I will dive more deeply into each of these considerations to explore how different organizations approached each area and what outcomes came from their efforts.

I attended an executive salon event hosted by one of the global marketing agencies in Minneapolis. The presentations focused on the role of aggregating social data for big data analysis. Two concepts jumped out during the conversation and drove most of the post-presentation discussion.

The first was the topic of cross-linking social accounts with corporate CRM systems, which was starting to expand the big data picture of the customer. While this is common practice today, they extended the concept to include cross-linking with other identity sources such as LinkedIn or public tax records. This starts to create a vastly larger set of profiles of individuals outside your customer base, which can be used to identify trends and patterns for approaching and enticing new customers to your brand or new offerings.
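Mechanically, the cross-linking they described is a record-linkage join on a shared identity key. Here is a toy sketch in Python; the exact-email match and all the records are hypothetical (real identity resolution is far fuzzier than an exact join), but it shows how matched profiles enrich the CRM picture while unmatched ones form the prospect pool outside your customer base:

```python
# Hypothetical CRM records and external social profiles.
crm = [
    {"email": "ann@example.com", "customer_id": 17, "lifetime_value": 4200},
    {"email": "bob@example.com", "customer_id": 23, "lifetime_value": 310},
]
social = [
    {"email": "ann@example.com", "handle": "@ann_dev", "followers": 1800},
    {"email": "cara@example.com", "handle": "@cara", "followers": 95},
]

# Index one side by the join key, then merge the other side against it.
by_email = {record["email"]: record for record in crm}
enriched, prospects = [], []
for profile in social:
    match = by_email.get(profile["email"])
    if match:
        enriched.append({**match, **profile})  # richer picture of a known customer
    else:
        prospects.append(profile)              # an individual outside the customer base
```

Each additional identity source is just another pass of the same join, which is why every new source multiplies the size of the combined profile set.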

The second topic built on the first but was much more elaborate. There were guest speakers from new ISVs that were building tools for marketers to access a massive big data pool they had been assembling. Several years prior, they had launched a backend platform that constantly listens to and records many social media channels. The platform analyzes the content and generates additional metadata and tagging to aid in ongoing analysis. An elaborate architecture of meta-tag hierarchies was defined to categorize subject matter. Even more impressive were the ontologies defined between the hierarchies to cross-relate topics. The end result seemed to be an enormous multiplier in the ability to cross-relate cross-channel data and inter-relate thematic trends and insights. Since seeing this demo I’ve noticed several new companies building out these types of solutions. While the science side of such platforms is fairly straightforward, it is the art of creating the inter-relationships of the ontologies across the hierarchies that will define the state of the art in competitive analysis.
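As a rough illustration of why ontology links act as such a multiplier, here is a toy sketch in Python. The taxonomy, the cross-links, and the tag names are all invented for the example; the point is that a single tag, walked up its hierarchy and across ontology links, fans out into a much larger set of cross-related topics:

```python
from collections import deque

# Hypothetical tag hierarchy: child tag -> parent tag within one taxonomy.
parents = {
    "telehealth": "digital-health",
    "digital-health": "healthcare",
    "wearables": "consumer-tech",
}

# Hypothetical cross-hierarchy ontology links relating topics across taxonomies.
related = {
    "digital-health": {"wearables"},
    "wearables": {"digital-health"},
}

def expand(tag):
    """Walk up the hierarchy and across ontology links to collect every
    topic a piece of tagged content should be cross-related to."""
    seen, queue = set(), deque([tag])
    while queue:
        topic = queue.popleft()
        if topic in seen:
            continue
        seen.add(topic)
        if topic in parents:
            queue.append(parents[topic])      # step up the hierarchy
        queue.extend(related.get(topic, ()))  # step across ontologies
    return seen

print(sorted(expand("telehealth")))
```

One tag on a single post reaches five topics here; the ontology link is what pulls the second hierarchy in, which is exactly the cross-channel multiplier the demo showed.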