
Predictive analytics is undeniably key for today’s marketing professional to gain insights that help grow businesses. A recent survey revealed that companies that rate themselves substantially ahead of their peers in their use of data are three times more likely to rate themselves as equally ahead in financial performance.

Predictive marketing provides value to everyone from analysts to technology experts to web content managers, in all industries. Here are just a few examples:

A web experience manager can see how long an article should remain on a site before the content needs to be refreshed.

Analysts can determine which customer actions are most likely to lead to conversion.

Advertisers can predict which triggers will increase click-through rates.

A social media manager can forecast the sentiment a specific Twitter post will generate, as well as the optimal day and time of week to post it.

While there is growing awareness of these advantages, predictive marketing has not become a mainstream tool. Let’s take a look at what predictive marketing can do for a retail outlet. As a marketing manager, imagine a photograph of a person with a shopping cart walking down an aisle packed with produce. What would be the most interesting analytical data one could get out of this? Looking at the shelves to see what products are depleted for forecasting? It is pretty obvious that one can track inventory using sophisticated supply chain management techniques but that’s not predictive marketing.

Predictive marketing would analyze the shopper's receipt. By looking at receipts, we can determine what the shopper's needs and wants are. What items, and how many items, does the shopper typically buy? Is there a preference for self-checkout lanes or full service? Is there a time-of-day preference? Is there brand loyalty or price sensitivity? Are payments made in cash, by debit card, or with a credit card that carries a reward incentive? We may even be able to assess the shopper's attitude toward privacy—is the name, phone number or address printed on the receipt?

Analyzing all this data enables marketers to make very useful predictions about what this shopper may do in the future, and as the number of receipts for this shopper increases, and as the receipts for all shoppers are aggregated, the ability to make predictions about individual and group behavior increases, enabling highly targeted marketing campaigns.
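As an illustrative sketch of this kind of aggregation (all field names and records here are hypothetical, not from any real point-of-sale system), receipt data could be rolled up into simple per-shopper behavioral features like this:

```python
from collections import Counter

# Hypothetical receipt records; in practice these would be parsed
# from point-of-sale exports. Field names are illustrative assumptions.
receipts = [
    {"shopper": "A", "items": ["milk", "bread", "milk"], "lane": "self", "hour": 18},
    {"shopper": "A", "items": ["milk", "eggs"], "lane": "self", "hour": 19},
    {"shopper": "B", "items": ["soda"], "lane": "full", "hour": 9},
]

def shopper_profile(receipts, shopper):
    """Aggregate one shopper's receipts into simple behavioral features."""
    mine = [r for r in receipts if r["shopper"] == shopper]
    items = Counter(i for r in mine for i in r["items"])
    lanes = Counter(r["lane"] for r in mine)
    return {
        "trips": len(mine),
        "top_item": items.most_common(1)[0][0],          # brand/item loyalty
        "avg_basket_size": sum(len(r["items"]) for r in mine) / len(mine),
        "preferred_lane": lanes.most_common(1)[0][0],    # self vs. full service
        "typical_hour": sorted(r["hour"] for r in mine)[len(mine) // 2],
    }

profile = shopper_profile(receipts, "A")
print(profile)
```

Features like these, aggregated across many shoppers, become the inputs to the predictive models mentioned above.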

So how do you get started? First, you need to recognize that predictive analytics is not where you will start your analytics journey. The first step is always to get “street smart” about your data.

What should you be collecting and how should you do it? How should you be modelling data? Once you understand this, you can begin to make incremental investments in your infrastructure to support data integration—bringing all the different data sources together—and then look for an analytics software solution in order to start creating the algorithms you'll need for prediction. I have covered several of these topics in previous blogs.

Not too long ago we were excited about deconstructing the deluge of data in terms of volume, variety and velocity… Now we have arrived at a point where extracting optimum value from data means figuring out the buying behaviors next week, next season, next year. Exciting times to be in analytics.

If you are interested in deeper discussions about your specific analytics needs, you can reach me at Twitter: @shree_dandekar

Customer conversations at the Clarabridge C3 conference last month made it painfully obvious: businesses are hungry for analytics yet often struggle to see the application of text analytics beyond analyzing survey and customer experience data. While that's not bad, there is so much more that can be done to harness the power of data analytics—for example, tracking brand reputation in real time.

Simply put: text analytics is text analytics is text analytics! It's not a technology leap; it's the application of the technology to new sets of data, like social media, and to new sets of questions and queries that businesses might not have considered before. This data packs the potential to derive insights that enable businesses to remain competitive: the crux is in the query.

In my conversations with fellow attendees, I found that many are already well along in their analytics maturity: they are using a text analytics platform and now want to expand their use cases to derive new insights for their business. For example, they want to:

Understand and proactively engage on what is being said about their brand, industry, competitors, products, etc.

Improve customer relationships via social media

In many cases they have the tools and might only require a change in mindset to realize the full potential of social media analytics. It's incumbent upon today's CIO to educate the business stakeholders and expand the company's analytic capabilities to include social media analytics as an essential ingredient in the business growth strategy.

But before that's possible, let's take a look at the journey. Whenever we talk to businesses about their social media analytics strategy, we describe a journey that begins with listening to customers, then collecting and recording the data, analyzing it, and applying heuristics and business algorithms to derive actionable insights. Essentially this means going from an ad hoc approach to a highly optimized analytics solution. These capabilities are not built overnight but in increments, as the business develops analytics maturity. One important point to note here is that businesses not only have to make the right technology investments but also have to invest in training personnel and creating a social media analytics culture within the organization.
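The listen, collect, analyze, act journey can be sketched end to end in a few lines. This is a deliberately toy example: the keyword lexicons, posts, and alert rule are illustrative assumptions, not a production sentiment model.

```python
# A toy sketch of the listen -> collect -> analyze -> act pipeline.
POSITIVE = {"love", "great", "fast"}
NEGATIVE = {"slow", "broken", "hate"}

def score(post):
    """Crude lexicon-based sentiment: +1 per positive word, -1 per negative."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# 1. Listen / collect: record incoming posts mentioning the brand.
posts = ["Love the new app, so fast", "App is slow and broken today"]

# 2. Analyze: score each post.
scores = [score(p) for p in posts]

# 3. Act: a simple heuristic (a stand-in for real business algorithms)
#    turns the scores into an actionable flag.
alert = sum(scores) < 0
print(scores, alert)
```

Maturing along the journey means replacing each stage—the lexicon, the scoring, the heuristic—with progressively more sophisticated, optimized versions.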

Thinking back on my conversations at the conference, it was obvious that many businesses are ready to harness the power of social analytics, build on the text analytics investments they have already made, and ask questions they never dreamed of before.

Swissmiss is what I fix for my son every morning for breakfast. Then came SXSW 2013. @Swissmiss took on a whole new meaning for me. The fusion of high tech and art eloquently presented in Tina Roth Eisenberg’s keynote at SXSW took me completely by surprise – in a very good way.

I went to my first SXSW Interactive event on a work assignment. My expectations were mixed, in that I felt there was probably a lot of hype. So much for expectations… until it hit me during Eisenberg's keynote. Her ability to present to a large audience and yet create an immediate personal connection with me was a real eye-opening moment.

Social media in the private and the professional sphere intimately connects digital technologies, software and people in all their individual facets. Eisenberg herself personified the fusion of the personal with the professional, hitting on the very essence of how social media MUST work. For me professionally, that understanding is key to evolving a social media analytics strategy for a business.

What might seem obvious to some re-shaped, in just a few days, my outlook on the power of social analytics for businesses. At SXSW, this lens allowed me to see the gap that still exists between capturing social data in a meaningful way and converting it into actionable insights that move the needle.

As at any conference, there were sessions whose titles sounded groundbreaking but ended up being duds. There was a lot of talk about seeming opposites: the fun vs. the measurement of social media. While these two might seem to be on opposing ends of the spectrum, they really are not. The ability of a business to derive real-time social metrics from a day-to-day "fun" conversation intertwines these opposing themes. That's where the true value of a real-time brand reputation analytics capability lies.

Today, capabilities like that exist, but in order to realize their full potential, businesses need to start making that investment NOW!

Going back to the framework of seemingly opposing concepts, one of my main takeaways from SXSW 2013 can be boiled down to this: my son is an artist; I am an engineer. At this year's SXSW I saw the amazing potential in the fusion of these two seemingly opposing fields.

The possibility of Data Anarchy is real. It can creep up on you slowly and overwhelm an IT department easily. While getting out of that mess is a good idea, it is way better to avoid getting in it in the first place. That, of course, presumes that we can recognize the early signs. So, how and why does data get out of control?

Industry dynamics are contributing to data craziness – are you surprised?

Companies are becoming more BI and analytics savvy and are collecting more data because storage is cheap. They are turning to their day-to-day business data to glean insights that will help them stay competitive—that is, to better understand their own business in terms of product performance, customer behavior, demographics, and so on. In an effort to improve how it does business, an Austin-based hotel scheduling company is collecting large volumes of web click data daily so that it can start performing historical trend analyses and inform its future ad campaigns.

As hardware costs continue to spiral down, commoditized storage continues to spark data hoarding. Today companies are realizing that it is very economical to store and retain data over a longer period of time. Today's data retention solutions not only offer ways to store multiple varieties of data (structured, semi-structured and unstructured) efficiently but also provide front-end tools to mine the data in the future. For example, the IT manager at one of the large biomedical testing labs recently decided to start storing multiple TBs of semi-structured data logged by 7,000 sensors worldwide. Previously that data was flushed away on a daily basis.

Another phenomenon that is driving the explosion of data is the use of social media. Businesses are already looking at ways to build sentiment analysis applications to analyze social conversations and in that process are starting to capture social content on a regular basis.

BI tools have come a long way. Traditional BI tools were extremely good at tracking raw transactional numbers like sales figures and profit margins but failed to adequately address the root causes, or drivers, of trends in those numbers. Moreover, they were typically able to tell what happened (backward reporting) but not explain why (unless it was evident in some other numeric data), let alone alert the business as a change emerged. The tools were complicated to deploy and operate. Users wanted self-service BI.

Over time, BI tools have evolved to support features like auto-modeling techniques, rich visualizations, metrics and auto-calculations on the fly as well as “What if” analysis. Tools now boast new in-memory technologies to enable users to quickly port data sets into memory to crank out insights quickly, thus enabling self-service BI.

End user evolution – we change, we demand more, we want it faster

The user dynamics are changing from IT controlled to end-user driven self-service led analytics. (In this time of the i-everything, BI users demand iBI – the easy, cheap and fast magic answer box.)

Traditionally, IT managers were responsible for adopting the right reporting tools and giving end users access to consume the reports. Typically in an organization 80% of the people were consumers of data [1] while the remaining 20% were actually creators of ad-hoc reports and custom dashboards. That model worked for a while, but the balance of information consumers and information creators has shifted significantly. The effects of this shift manifest themselves differently for enterprises and SMBs.

Most SMB customers fall into the category of casual data access, using simple tools like Excel for their day-to-day analyses, and are in dire need of self-service BI tools to help them migrate to the next level of analytics maturity. Typical SMB customers are characterized by limited IT resources and budgetary constraints, which drives them toward these easy-to-use and faster-to-deploy self-service tools.

Departmental IT teams within traditional enterprises are disrupting the BI ecosystem already put in place by corporate IT. The complexity and inertia of the current BI situation for end users has led to an increasing need for self-service-enabled BI tools. Users simply demand the democratization of BI tools to gain quick and meaningful insights.

Changing IT demands – they want to help us. Really!

Democratization of BI is a thorn in the side of IT. Per the IDC Digital Universe study (2011), the amount of data being stored is more than doubling every two years and could grow 50X by 2020, while IT staff is estimated to grow only 1.5X. This shocking statistic should in itself be a cause of concern for today's IT managers. Thus, in addition to designing the next-generation data architectures, IT managers will also need to make sure that they can disseminate this information to business users in an easily digestible manner.

IT is still challenged with maintaining a "single version of truth" while supporting day-to-day BI needs. Today most IT departments within traditional enterprises have already started defining a master data framework for maintaining an authoritative, reliable, sustainable, accurate, and secure data environment that represents a "single and holistic version of the truth". IT managers recognize the following components as the critical pieces of a robust Master Data Management (MDM) framework: Customer Information File (CIF), Product Masters (BOM), Extract, Transform, and Load (ETL) architectures, Enterprise Data Warehouse (EDW), Operational Data Store (ODS), Data Quality (DQ) technologies, and enterprise information aggregators. What is missing from this framework is acknowledgment of the new, evolving self-service-enabled, in-memory BI data stores.

Next time, let’s see what we can do about this…Stay tuned!

References

[1] Wayne Eckerson, "The Myth of Self-Service BI," TDWI What Works: Enterprise Business Intelligence, vol. 24.

In this post series I examine the challenges companies are experiencing while trying to implement self-service business intelligence initiatives through bleeding-edge BI tools. Data anarchy is a real threat for many companies who jump on the bandwagon of self-service-enabled BI tools. I will end the series with practical recommendations for companies to avoid data anarchy.

Part 1: What is Data Anarchy?

Companies today feel the increasing need to gather business insights from their data, and this is transforming the BI landscape. Many are looking for simple-to-use, easy-to-deploy, self-service-enabled BI tools to get results, fast. One of the common complaints of business users is that traditional tools have a steep learning curve and are not intuitive enough to feed in data and extract insights within minutes.

Also, as the "moneyball" effect sweeps organizations, business managers try to innovate using data analytics. They want to milk the data they have to the utmost, to gain ever deeper insight into the buying behaviors of their customers. Widespread adoption of mobile technology and social computing has driven interest in visualization capabilities and real-time analytics. And companies cannot survive (let alone prosper) without recognizing that social, as a phenomenon, can allow them to redefine their organizations to be inherently more fast, fluid, and flexible by their very design. There is some relief provided by new in-memory-enabled technologies like QlikView and Tableau, but it often comes at the cost of temporarily suspending data management rules, policies, or procedures—leading to data anarchy.

Companies are susceptible to data anarchy arising from the growing number of often hastily implemented new BI tools deployed without thoughtfully planned data management. The effects of data anarchy are more severe for SMBs than for enterprises, because enterprise-size companies have generally already experienced data anarchy caused by the proliferation of data marts and departmental DWs, and are in the process of adopting robust MDM strategies to address it. But they now need to comprehend the data anarchy caused by the new BI tools as part of their MDM strategy as well.

Most of the new self-service BI tools ingest data into memory using a simple tabular format and further compress it. The ingestion process typically uses some proprietary mechanism to load the data quickly, using its own unique join schema. In effect, each ingestion process creates a unique instance of a data cube. Thus, every time a user needs to bring in new data (including new associations/joins and new data entities), the tool has to be re-run to create a new data cube. This approach leads to data anarchy!
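A minimal sketch can show why every extract becomes its own cube. The data, dimensions, and aggregation here are hypothetical; the point is that each load joins and aggregates source rows into a private in-memory structure, so two users extracting at different times end up holding different versions of the same metric.

```python
from collections import defaultdict

# Hypothetical source table (e.g. a sales fact table).
source_rows = [
    {"region": "East", "product": "A", "sales": 100},
    {"region": "East", "product": "B", "sales": 50},
    {"region": "West", "product": "A", "sales": 75},
]

def ingest(rows, dims):
    """Build a private in-memory 'cube': totals keyed by the chosen dimensions."""
    cube = defaultdict(int)
    for r in rows:
        cube[tuple(r[d] for d in dims)] += r["sales"]
    return dict(cube)

# User 1 extracts a region cube; user 2 extracts later, after new data arrives.
cube_v1 = ingest(source_rows, ["region"])
source_rows.append({"region": "West", "product": "B", "sales": 25})
cube_v2 = ingest(source_rows, ["region"])

# The two cubes now disagree on "West" sales: no single version of the truth.
print(cube_v1[("West",)], cube_v2[("West",)])
```

Multiply this by many users, each with their own dimensions and refresh times, and the reconciliation problem described in the stages below emerges.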

The stages of Data anarchy include:

Stage 1: During this stage users are typically composing new reports and dashboards out of existing reports. In most cases the original data model is preserved, and there is little possibility that new data is created, since all the insights are constructed from existing data sets.

Stage 2: In this stage users extract new data sets from the source to develop new reports and dashboards. This is where IT starts losing control of the master data rules and processes. Depending upon the type of data set created, the original model may be partially or totally compromised at this point.

Stage 3: This is the stage where users start bringing in totally new data which they then mash up with existing data sets to create insights.

Stage 4: Over time, multiple users end up maintaining multiple data cubes created from the master data, and at this stage it is a data management nightmare even for the end user! There is no single version of truth, and reconciling to a single version is a mammoth effort.

Organizations dealing with data anarchy need to ask themselves the following questions:

1) How do organizations prevent the suspension of rules and policies while continuing to meet the demand for time-sensitive business intelligence results?

2) How do organizations manage multiple instances of data? Where is the single version of truth?

3) How do organizations evolve their existing data governance model to address the chaos of data anarchy?

4) How do SMB organizations create a data governance model out of existing anarchy?

5) How can BI solution providers address data anarchy?

Stay tuned for the next post where we explore how we got into this mess!

Have you ever needed a business insight right away but couldn't get it? If so, you have an age-old problem: no instant access to data in a cumbersome, IT-controlled environment, worsened by the steep learning curve of an enterprise BI tool. The answer is not rocket science; it's simply self-service BI. That means freedom to access data at will, along with a simple, easy-to-use BI tool to crank out actionable business insights!

How can we make this happen? How do we get IT to embrace self-service BI? What are some of the disruptions beyond self-service BI?