
Take a minute (okay – a minute and a half) to check out this video overview of our DM1 store measurement platform. It’s the shortest and crispest introduction we’ve produced so far.

As more than one famous writer/philosopher has remarked, “If I had more time, it would have been shorter.” Brevity, like wit, takes work. And practice. We haven’t achieved wit, but we’re getting close to brevity:

I also really like the video’s flow. It starts with a very short intro into the basic concept of store measurement and then introduces the platform with the Digital Planogram tool – the Configurator. When you get right down to it, this capability is the single most important part of the platform. Digital representations of the store are critical to every report and analysis DM1 delivers. And the ability to rapidly create, adjust and maintain those digital maps is essential to making the tool work.

When we first released DM1, the configurator lagged behind some of the reporting tools – not very friendly and a little prone to bugginess. It's grown into quite a good tool – a pleasure to use and capable of handling even very complex store layouts pretty easily.

From the configurator, the video flows into the Layout tool – which just maps metrics right onto those digital planograms. Not only does this show how effortlessly you move from a map of the store to a metric, but I really like the way the video works through a small set of metrics to show how easy the visual interpretation is.

Once you’ve got a feel for basic metrics in the Store Layout, the next logical step is to tackle journey. And the next two sections highlight funnel and path analysis. Both of these tools help transition thinking from a static view of store performance to a focus on shopper journey. Funnels tell you how effective the store is in moving shoppers down an engagement path. Path helps you understand which in-store paths are popular and which drive conversion. After this, it’s a quick look at the data exploration capabilities of the platform – and the ability to build reports around whatever problem you choose to tackle. Finally, it wraps up with a sample of the dashboards.

Truth to tell, I've sometimes done this same presentation in almost the reverse order – starting with Dashboards and ending with configuration. It's plausible that way too, but I think this works better for analysts. Because while dashboards are the first view for end-users of DM1, an analyst's task really starts with store mapping, proceeds through various levels of analysis, and ends with wrapping a nice, neat bow around the data for others. That's the way this video proceeds, and it makes the structure more compelling and natural if that's the way you tend to think.

It’s been a little more than a year now for me in store analytics and with the time right after Christmas and the chance to see the industry’s latest at NRF 2018, it seems like a good time to reflect on what I’ve learned and where I think things are headed.

Let’s start with the big broad view…

The Current State of Stores

Given the retail apocalypse meme, it’s obvious that 2017 was a very tough year. But the sheer number of store closings masked other statistics – including fairly robust in-store spending growth – that tell a different story. There’s no doubt that stores saddled with a lot of bad real-estate and muddied brands got pounded in 2017. I’ve written before that one of the unique economic aspects of online from a marketplace standpoint is the absence of friction. That lack of friction makes it possible for one player (you know who) to dominate in a way that could never have happened in physical retail. At the same time, digital has greatly reduced overall retail friction. And that reduction means that shoppers are not inclined to shop at bad stores just to achieve geographic convenience. So the unsatisfying end of the store market is getting absolutely crushed – and frankly – nothing is going to save it. Digital has created a world that is very unforgiving to bad experience.

On the other hand, if you can exceed that threshold, it seems pretty clear that there is a legitimate and very significant role for physical stores. And then the key question becomes: can you use analytics to make stores an asset?

So let’s talk about…

The Current State of In-Store Customer Analytics

It's pretty rough out there. A lot of companies have experimented with in-store shopper measurement using a variety of technologies. Mostly, those efforts haven't been successful and I think there are two reasons for that. First, this type of store analytics is new and most of the stores trying it don't have dedicated analytics teams who can use the data. IT-led projects are great for getting the infrastructure in the store, but without dedicated analytics the business value isn't going to materialize. I saw that same pattern for years in web analytics before the digital analytics function was standardized and (nearly always) located on the business side. Second, the products most stores are using just suck. I really do feel for any analyst trying to use the deeply flawed, highly aggregated data that gets produced and presented by most of the "solutions" out there. They don't give analysts enough access to the data to be able to clean it, and they don't do a very good job cleaning it themselves. And even when the data is acceptable, the depth of reporting and analytics isn't.

So when I talk to companies that have invested in existing, non-Digital Mortar store analytics solutions, what I mostly hear is a litany of complaints and failure. We tried it, but it was too expensive. We didn't see the value. It didn't work very well.

I get it. The bottom line is that for analytics to be useful, the data has to be reasonably accurate, the analytics platform has to provide reasonable access to the data and you must have resources who can use it. Oh – and you have to be willing to make changes and actually use the data.

There’s a lot of maturing to do across all of these dimensions. It’s really just this simple. If you are serious about analytics, you have to invest in it. Dollars and organizational capital. Dollars to put the right technology in place and get the people to run it. Organizational capital to push people into actually using data to drive decisions and aggressively test.

Which brings me to….

What to invest in

Our DM1 platform obviously. But that's just one part of a bigger set of analytics decisions. I wrote pretty deeply before the holidays on the various data collection technologies in play. Based on what I saw at NRF, not that much has changed. I did see some improvement on the camera side of the house. Time-of-Flight cameras are interesting and there are at least a couple of camera systems now that are beginning to do the all-important work of shopper stitching across zones. For small-footprint stores there are some viable options in the camera world worth considering. I even saw a couple of face recognition systems that might make point-to-point implementations for analytics practical. Those systems are mostly focused on security though – and integration with analytics is going to be work.

I haven’t written much about mobile measurement, but geo-location within mobile apps is – to quote the Lenox mortgage guy – the biggest no-brainer in the history of earth. It’s not a complete sample. It’s not even a good sample. But it’s ridiculously easy to drop code into your mobile app to geo-locate within the store. And we can take that tracking data and run it into DM1 – giving you detailed, powerful analytics on one of the most important shopper segments you have. It costs very little. There’s no store side infrastructure or physical implementation – and the data is accurate, omni-joinable and super powerful. Small segment nirvana.

The overall data collection technology decision isn’t simple or straightforward for anyone. We’ve actually been working with Capgemini to integrate multiple technologies into their Innovation Center so that we can run workshops to help companies get a hands-on feel for each and – I hope – help folks make the right decision for their stores.

People is the biggest thing. People is the most expensive thing. People is the most important thing. It doesn’t matter how much analytic technology you bring to the table – people are the key to making it work. The vast majority of stores just don’t have store-side teams that understand behavioral data. You can try to create that or you can expand the brief of your digital or omni-channel teams and re-christen them behavioral analytics teams. I like option number two. Why not take advantage of the analytics smarts you actually have? The data, as I’ve said many times before, is eerily similar. We’ve been working hard to beef up partnerships and our own professional services to help too. But while you can use consultants to get a serious analytic effort off the ground, over time you need to own it. And that means deciding where it lives in your organization and how it fits in.

Which I know sounds a lot like…

Everything old is new again

I make no bones about the fact that I dived into store measurement because I thought the lessons of digital analytics mostly applied. In the year since, I've found that to be truer than I knew and maybe even truer than I'd like. Many of the challenges I see in store analytics are the ones we spent more than a decade in digital analytics gradually solving. Bad data quality and insufficient attention to making it right. IT organizations focused on collection, not use. A focus on site/store measurement instead of shopper measurement.

Some of the problems are common to any analytic effort of any sort. An over-willingness to invest in technology, not people (yeah – I know – I'm a technology vendor now, I shouldn't be saying this!). A lack of willingness to change operational patterns to be driven by analytics and measurement, and a corresponding difficulty in actually using analytics. Far too many people willing to talk the talk but unable or unwilling to walk the walk necessary to do analytics and to use it. These are hard problems, and only select companies will ever solve them.

Through it all I see no reason to change the core beliefs that drove me to start Digital Mortar. Shopper analytics is critical to doing retail well. In a time of disruption and innovation, it can drive massive competitive advantage if an organization is willing to embrace it seriously. But that’s not easy. It takes organizational commitment, some guts, good tools and real smarts.

Digital Mortar can provide a genuinely good tool. We can help with the smarts. Guts and commitment? That’s up to you!

Continuous Improvement through testing is a simple idea. That’s no surprise. The simplest, most obvious ideas are often the most powerful. And testing is a powerful idea. An idea that forms and shapes the way digital is done by the companies that do it best. And those same companies have changed the world we live in.

If testing and continuous improvement is a process, analytics is the driver of that process; and as any good driver knows, the more powerful the vehicle, the more careful you have to be behind the wheel. Test analytics seems so easy. You run a test, you measure which version worked better. You choose the winner.

It’s like reading the scoreboard at a football game. It doesn’t take a lot of brains to figure out who’s ahead.

Except it’s usually not that easy.

Sporting events really are decided by the score. Games have rules and a single goal. Life and business mostly don't. What makes measuring tests surprisingly tricky is that you rarely have a single unequivocal measure of success.

Suppose you add a merchandising drive to a section of your store or on the product detail page of your website. You test. And you generate more sales of that product.

Success!

Success?

Let’s start with the obvious caveat. You may have generated more sales, but you gave up margin. Was it worth it? Usually, the majority of buyers with a discount would have bought without one. Still, that kind of cannibalization is fairly easy to baseline and measure.
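To make that concrete, here's a back-of-the-envelope sketch of the cannibalization math. Every number in it is hypothetical – the point is only to show why "more sales" isn't the whole story once you account for the buyers who would have bought anyway:

```python
# A back-of-the-envelope sketch of discount cannibalization. Every number here
# is hypothetical -- it only illustrates why "more sales" isn't the whole story.
baseline_units = 1000   # units that would have sold without the promotion
test_units     = 1200   # units sold during the promotion
full_margin    = 12.00  # margin per unit at full price
discount       = 3.00   # margin given up on every discounted unit

incremental_units  = test_units - baseline_units                    # 200 truly new sales
incremental_margin = incremental_units * (full_margin - discount)   # 200 * 9  = 1,800
margin_given_away  = baseline_units * discount                      # 1000 * 3 = 3,000
net_impact         = incremental_margin - margin_given_away         # -1,200

print(f"Net margin impact of the 'winning' promotion: {net_impact:+,.2f}")
```

With these (made-up) numbers, the promotion "wins" on units and still loses money.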

Here’s a trickier problem. What else changed? Because when you add a merchandising drive to a product, you don’t just shift that product’s buying pattern. The customer who buys might have bought something else. Maybe something with a better margin.

To people who don't run tests, this may come as a bit of a surprise. Shouldn't tests be designed to limit their impact so that the "winner" is clear?

Part of a good experimental design is, indeed, creating a test that limits external impacts. But this isn’t the lab. Limiting the outside impact of a test isn’t easy and you can never be sure you’ve actually succeeded in doing that unless you carefully measure.

Worse, the most important tests usually have the most macro-impact. Small creative tests can often be isolated to a single win-loss metric. Sadly, that metric usually doesn’t matter or doesn’t move.

If you need proof of that, check out this meta-study by Will Browne & Mike Jones (those names feel like generic test products, right?) that looked at the impact of different types of test. Their finding? UI changes of the color and call-to-action type had, essentially, zero impact. Sadly, that’s what most folks spend all their time testing. (http://www.qubit.com/sites/default/files/pdf/qubit_meta_analysis.pdf)

It’s usually straightforward to measure the direct results of a store test. It’s often much harder to determine the macro impact. But it’s something you MUST look at. The macro impact can be as or more important than the direct impact. What’s more, it often – I’ll say usually – runs in the opposite direction.

So if you fail to measure the macro impact of a store test and you focus only on the obvious outcome, you’ll often pick the wrong result or grossly overstate the impact. Either way, you’re not using your analytics to drive appropriately.

Of course, one of the very real challenges you’ll face is that many tools don’t measure the macro impact of tests at all. In the digital world, the vast majority of dedicated testing tools require you to focus on a single KPI and provide absolutely no measurement of macro impacts. They simply assume that the test was completely compartmentalized. That works okay for things like email testing, but it’s flat-out wrong when it comes to testing store or website changes.

If your experiment worked well enough to change a shopper’s behavior and got them to buy something, the chances are quite good that it changed more than just that behavior. You may have given up margin. You likely lost some sales elsewhere. You almost certainly changed what else in the store or the site the shopper engaged with. That stuff matters.
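Here's a minimal sketch of what that broader test read-out might look like – a test store compared to a control across several metrics instead of a single KPI. The metric names and index values are hypothetical; the hard part is collecting them, not this arithmetic:

```python
# A minimal sketch of a broader test read-out: test store vs. control across
# several metrics, indexed so control = 1.00. Metric names and values are
# hypothetical -- the point is to look past the headline KPI.
test_store = {
    "target_category_sales":   1.18,   # the KPI the test was aimed at
    "total_margin":            0.97,   # margin given up to the promotion
    "adjacent_category_sales": 0.91,   # sales cannibalized elsewhere
    "avg_basket_size":         1.02,
}
control_store = {metric: 1.00 for metric in test_store}

for metric, test_value in test_store.items():
    lift = test_value / control_store[metric] - 1
    note = "" if metric == "target_category_sales" else "   <- macro impact"
    print(f"{metric:25} {lift:+6.1%}{note}")
```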

In the store world, most tools don't measure enough to give you even the immediate win-loss results. To heck with the rest of the story. So it can be tempting, when you first have real measurement, to focus on the obvious: which test won. Don't.

In some of my recent posts, I’ve talked about the ways in which DM1 – our store testing and measurement platform – lets you track the full customer journey, segment, funnel and compare. Those capabilities are key to doing test measurement right. They give you the ability to see the immediate impact of a test AND the ways in which a change affected macro customer behavior.

You can see an example of how this works (and how important that macro behavior is in store layout) in this DM1 video that focuses on the Comparison capabilities of the tool.

Continuous improvement is what drives the digital world. Whether applied as a specific methodology or simply present as a fundamental part of the background against which we do business, the discipline of change and measure is a fundamental part of the digital environment. A key part of our mission at Digital Mortar is simply this: to take that discipline of continuous improvement via change and measurement and bring it to stores.

Every part of DM1 – from store visualizations to segmentation to funnel analytics – is there to help measure and illuminate the in-store customer journey. You can't build an effective strategy or process for continuous improvement without having that basic measurement environment. It provides the context that lets decision-makers talk intelligently about what's working, what isn't and what change might accomplish.

But as I pointed out in my last post, some analytic techniques are particularly useful for the role they play in shaping strategy and action. Funnel Analysis, I argued, is particularly good at focusing optimization efforts and making them easily measurable. Funnels help shape decisions about what to change. Equally important, they provide clear guidance about what to measure to judge the success of that change. After all, if you made a change to improve the funnel, you’re going to measure the impact of the change using that same funnel.

That’s a good thing.

One of the biggest mistakes in enterprise measurement (and – surprisingly – even in broader scientific contexts) is failing to commit to your measurement of success when you start an experiment. It turns out that you can nearly always find some measure that improved after an experiment. It just may not be the right measure. If folks are looking for a way to prove success, they’ll surely find it.

Since we expect our clients to use DM1 to drive store testing, we’ve tried to make it easy on both ends of the process. Tools like funnel analysis help analysts find and target areas for improvement. At the other end of the process, analysts need to be able to easily see whether changes actually generated improvement.

This isn't just for experimentation. As an analyst, I find that one of the most common tasks I have to do is compare numbers. By store. By page. By time-period. By customer segment. Comparison provides basic measurement of change and context on that change.

Which makes comparison the core capability necessary for analyzing store tests but also applicable to many analytics exercises.

Though comparison is a fundamental part of the analytic process, it’s surprising how often it’s poorly supported in bespoke analytics tools. It took many years for tools like Adobe’s Workspace to evolve – providing comprehensive comparison capabilities. Until quite recently in digital analytics, you had to export reports to Excel if you wanted to lay key digital analytic data points from different reports side-by-side.

DM1’s Comparison tool is simple. It’s not a completely flexible canvas for analysis. It just takes any analytic view DM1 provides and allows you to use it in a side-by-side comparison. Simple. But it turns out to be quite powerful in practice.

Suppose you’re running a test in Store A with Store B as a control. DM1’s comparison view lets you lay those two Stores side-by-side during the testing period and see exactly what’s different. In this view, I’ve compared two similar stores by area looking at which areas drove the most shopper conversions:

You can use ANY DM1 visualization in the Comparison. The funnel, the Store Viz or traditional reports and charts. In this view, I’ve compared the Shopper Funnel around a single merchandising category at two different stores. Not only can I see which store is more effective, I can see exactly where in the funnel the performance differences occur:

Don’t have a control store? If you’re only measuring the customer journeys in a single store or if your store is a concept store, you won’t have another store to use as a control. No problem, DM1’s comparison view lets you compare the same store across two different time periods. You can compare season over season or consecutive time periods. You don’t even have to evenly match time periods. Here I’ve compared the October Funnel to Pre-Holiday November:
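The arithmetic behind that period-over-period view is simple – here's a minimal sketch with made-up area names and visit counts, just to show the calculation behind the side-by-side comparison:

```python
# A minimal sketch of a same-store, period-over-period comparison when no
# control store is available. Area names and visit counts are hypothetical.
october = {"Team Gear": 15200, "Backpacks": 9800, "Womens Jackets": 7400}
pre_holiday_november = {"Team Gear": 17900, "Backpacks": 9100, "Womens Jackets": 8600}

for area, oct_visits in october.items():
    nov_visits = pre_holiday_november[area]
    change = nov_visits / oct_visits - 1
    print(f"{area:15} {oct_visits:6,} -> {nov_visits:6,}  ({change:+.1%})")
```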

Store and Date/Time are the most common type of comparison. But DM1’s comparison tool lets you compare on Segments and Metrics as well. I often want to understand how a single segment is different than other groups of visitors. By setting up a segmentation visualization, I can quickly page through a set of comparison segments while holding my target group constant. In the first screen, I’ve compared shoppers interested in Backpacks with shoppers focused on Team Gear in terms of how effective interactions with Associates are. With one click, I can do the same comparison between Women’s Jacket shoppers and Team Gear:

The ability to do this kind of comparison in the context of the visualizations is unusual AND powerful. The Comparison tool isn’t the only part of DM1 that supports comparison and contextualization. The Dashboard capability is surprisingly flexible and allows the analyst to put all sorts of different views side by side. And, of course, standard reporting tools like Charts and Table provide significant ways to do comparisons. But particularly when you want to use bespoke visualizations like Funnels and DM1’s store visualizations, having the ability to lay them side by side and quickly adjust metrics and view parameters is extraordinarily useful.

If you want to create a process of continuous improvement in the store, having measurement is THE essential component. Measurement that can help you identify and drive potential store testing opportunities. And measurement that can make the real-world impact of change, in all its complexity, understandable.

Visualizing the customer journey in the context of the store is the foundation for analyzing in-store data. The metrics and the store context provide a framework for translating customer measurement data into something that is immediately understandable as a shopper’s journey. But visualizing information is just the first step in making it actionable. Understanding the data is, of course, essential. But you can understand data quite well and still have no idea what to do with it. In fact, that’s a problem we see all the time with analytics. And while it’s a problem that no technology solution can solve entirely (since there are always business and organizational issues to be tackled), there are analytic and reporting techniques that can really help. We’ve built a number of them into DM1, starting with in-store funnel analytics.

The idea behind a conversion funnel is simple. The customer journey is chopped up into discrete steps based on increasing likelihood to purchase. If we analyze the journey by those discrete steps, we can work to optimize the flow from one step to the next. Improve the flow between any funnel step and the next, and the chance is excellent that you’ll improve the overall funnel conversion as well. Funnels give you a specific place to start. They let you figure out which parts of the overall customer journey are already working well and which aren’t. They let you focus on specific areas with the confidence that if you can improve performance you’ll make a significant difference. And they make it possible to easily measure success. All you have to measure is the number of people moving from one step to the next.
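The arithmetic behind a funnel really is just step-to-step conversion. Here's a minimal sketch with hypothetical step names and counts (DM1 computes these from measured shopper journeys; this only shows the calculation):

```python
# A minimal sketch of the funnel arithmetic. Step names and counts are
# hypothetical -- DM1 computes these from measured shopper journeys.
funnel = [
    ("Store Entry", 84000),
    ("Visited Team Gear", 15000),
    ("Lingered in Team Gear", 2100),
    ("Fitting Room", 700),
    ("CashWrap", 420),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step:22} -> {next_step:22}: {rate:6.1%}  ({count - next_count:,} lost)")
```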

Funnels are THE paradigm for analytics and optimization in eCommerce. In fact, it was largely on their ability to help merchants understand and improve eCommerce funnels that digital analytics solutions first gained traction. And to this day, eCommerce testing and analytics practitioners almost always work by breaking down the customer journey into funnel steps and then working to optimize each step. While the measurement of funnels is itself interesting, I think the real value in funnel analysis is the process it supports. That ability to target specific aspects of the journey, figure out which ones are the most broken, and then test possible improvements is at the heart of so much of the continuous improvement that makes digital players successful.

One of our big goals with Digital Mortar is to bring the in-store funnel paradigm and the discipline of continuous improvement to the store. DM1 delivers on the technology and analytic part of that program.

With DM1, you can start a funnel at any place in the store and at any stage in the customer journey. But the most natural place to start is with a shopper entering the store. As you can see, DM1 lets you choose any area of the store you’ve defined and lets you pick from a range of engagement metrics.

Nearly 84 thousand shoppers entered the store in October. Since that's where the measurement starts, this first step of the funnel doesn't have any fallout. Everyone I measured, by definition, entered the store. It's worth noting – and I get asked this a lot – that you CAN track pass-by traffic if you set up the measurement system appropriately. Doing so allows you to extend the funnel outside the store!

I could build a store-wide funnel, looking at conversion across the whole store. But it’s usually more interesting and actionable to focus a bit. So my funnel is going to focus on a specific section of the store – Team Gear.

Adding "Visits to Team Gear" to the funnel, I can see that around 15 thousand shoppers – about 18% of store visitors – visited Team Gear. It took the average visitor about 2 minutes from entry to reach Team Gear, which makes sense because this area is pretty front-of-store.

But one of the real complexities to in-store measurement is that since shoppers are navigating a physical environment they often pass-thru areas without being interested in them. That doesn’t happen much in digital.

I want to know how many people SHOPPED in Team Gear out of the folks who had the opportunity. And I can see that by selecting Lingers as my metric in the next funnel step. These last two steps illustrate a powerful metric in store measurement that’s simply never been available before. Stores have been able to measure conversion (checkouts/door entries) at the macro level, but at the area level this gets reduced to sales per square foot.

That isn’t reflective of the real opportunity a square foot provides. By measuring where shoppers actually WENT and where they SHOPPED, we have a real KPI of how well a section is performing given its opportunity.
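Here's a minimal sketch of that opportunity KPI – the share of shoppers who passed through a section that actually stopped and shopped it. The section names and counts are hypothetical:

```python
# A minimal sketch of the opportunity KPI: the share of shoppers who passed
# through a section that actually stopped and shopped it. Counts are hypothetical.
sections = {
    "Team Gear":        {"visits": 15000, "lingers": 2100},
    "Backpacks":        {"visits":  9800, "lingers": 3200},
    "Customer Service": {"visits":  6000, "lingers": 4100},
}

for name, counts in sections.items():
    shop_rate = counts["lingers"] / counts["visits"]
    print(f"{name:18} shopped {shop_rate:.1%} of its opportunity")
```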

Only about 1 in 7 shoppers who passed through Team Gear actually Shopped there. That’s a problem I’d probably want to tackle.

From here, I can add Fitting Room and CashWrap to the funnel. At every step along the way I can see how many shoppers I’m losing from the total opportunity. I can also see how much time is passing and how many stops the shopper made in-between.

In the end, I have a customer funnel for Team Gear that runs from Store Entry to Cash-Wrap that looks like this:

Any start place. Any level of engagement. Any steps in between. DM1 builds the funnels you need to support analytics and testing.

Pretty cool.

There’s no doubt in my mind that the picture of the shopper journey that DM1 provides drives better understanding. But as I said earlier, analytics isn’t improvement. It’s a way to drive improvement.

The funnel paradigm works less because of its analytic potential than because of the process it helps define. In-store funnels focus optimization efforts and make them easily measurable. Whether I tackle the step with the highest abandonment rate, try to build the initial opportunity, or attempt to remove distractions between key steps, funnel analysis helps guide my reasoning about what to test in the store and provides a fully baked way to measure whether store changes drove the desired behavior.

Location analytics isn’t really about where the shopper was. After all, a stream of X,Y coordinates doesn’t tell us much about the shopper. The interesting fact is what was there – in the store – where the shopper was. To answer most questions about the shopper’s experience (what they were interested in, what they might have bought but didn’t, whether they had sales help or not, and what they passed but didn’t consider), we have to understand the store. In my last post, I explained why the most common method of mapping behavior to the store – heatmaps – doesn’t work very well. Today, I’m going to tackle how DM1 does it differently and (in my humble opinion) much better.

Here are the seven requirements I listed for Store Visualization and where and why heatmaps come up short:

Designing DM1’s store visualization, I started with the idea that its core function is to represent how an area of the store is performing. Not a point. An area. That’s an important distinction. Heatmaps function rather like a camera exposure. There’s an area down there somewhere of course – but it’s only at the tiny level of the pixel. That’s great for a photograph where the smaller the pixel the better, but analytically those points are too small to be useful. Besides, store measurement isn’t like taking a picture. The smaller the pixel the more accurate the photo. But our measurement capture systems aren’t accurate enough to pinpoint a specific location in the store. Instead, they generate a location with a circle of error that, depending on the system being used, can actually be quite large. It doesn’t make a lot of sense to pretend that measurement is happening at a pixel location when the circle of error on the measurement is 5 feet across!

This got me thinking along the lines of the grid system used in classic board games I played as a kid. If you ever played those games, you know what I’m talking about. The board was a map (of the D-Day beaches or Gettysburg or all of Europe) and overlaid on the map was a (usually hexagonal) grid system that looked like this:

Units occupied grid spaces and their movement was controlled by grid spaces. The grid became the key to the game – with the map providing the underlying visual metaphor. This grid overlay is obviously artificial. Today’s first person shooter games don’t need or use anything like it, but strategy games like Civ still do. Why? Because it’s a great way to quantize spatial information about things like how far a unit can move or shoot, the distance to the enemy, the direction of an attack, the density of units in a space and much, much more.

DM1 takes this grid concept and applies it to store visualization. Picture a store:

Now lay a grid over it:

And you can take any place the shopper spends time and map it to grid coordinates:

And here’s where it really gets powerful. Because not only can you now map every measurement ping to a quantifiable grid space, you can attach store meta-data to the grid space in a deterministic and highly maintainable way. If we have a database that describes GridPoint P14 as being part of Customer Service on a given day, then we know exactly what a shopper saw there. Even better, by mapping actual traffic and store meta-data to grid-points, we can reliably track and trend those metrics over time. No matter how the shape or even location of a store area changes, our trends and metrics will be accurate. So if grid-point P14 is changed from Customer Service to Laptop Displays, we can still trend Customer Service traffic accurately – before, after and across the change.
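Here's a minimal sketch of that grid idea. The grid size, the cell coordinates and the metadata table are all hypothetical (DM1's configurator manages this for real stores), but it shows how a raw (x, y) ping becomes a grid cell and how a cell's meaning can change over time without breaking the trend:

```python
import datetime

# A minimal sketch of the grid idea. Grid size, cell coordinates and the
# metadata table are hypothetical -- DM1's configurator manages this for real stores.
GRID_FT = 5  # one cell per 5x5-foot square, roughly the measurement error circle

def to_grid(x_ft, y_ft):
    """Quantize a raw (x, y) ping to a grid cell."""
    return (int(x_ft // GRID_FT), int(y_ft // GRID_FT))

# What was *there*: section assignments per grid cell, with effective dates.
grid_metadata = {
    (3, 12): [(datetime.date(2017, 1, 1),  "Customer Service"),
              (datetime.date(2017, 11, 1), "Laptop Displays")],
}

def area_at(cell, when):
    """Return the section assigned to a grid cell on a given day."""
    current = "Unassigned"
    for effective_date, section in grid_metadata.get(cell, []):
        if effective_date <= when:
            current = section
    return current

cell = to_grid(17.0, 61.0)                          # -> (3, 12)
print(area_at(cell, datetime.date(2017, 10, 15)))   # Customer Service
print(area_at(cell, datetime.date(2017, 12, 1)))    # Laptop Displays
```

Because traffic is keyed to the grid cell and the metadata is dated, trending an area like Customer Service stays accurate before, after and across a layout change.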

That’s how DM1 works.

Here’s a look at DM1 displaying a store at the Section level:

In this case, the metric is visits and each section is color-coded to represent how much foot traffic the section got. These are fully quantified numbers. You can mouse over any area and get the exact counts and metrics for it. Note that you don't need a separate planogram to match to the store. The understanding of what's there is captured right alongside the metric visualization. Now obviously, Section isn't the lowest grid level for the store. We often need to be much more fine-grained. In DM1, you can drill down to the actual grid level to get a much more detailed view:

How detailed? As detailed as your collection system will support. We set up the grid in DM1 to match the appropriate resolution of your system. You're not limited to drilling down, though. You can also drill up to levels above a Section. Here's a DM1 view at the Department level:

In fact, with DM1, you have pretty much complete flexibility in how you describe the store. You can define ANY level of meta-data for each grid-point and then view it on the store. Here, for example, is where promotions were placed in the store:

DM1 also takes advantage of the Store Visualization to make it easy to compare stores – head to head or the same store over time. The Comparison view shows two stores viewed (in this example) at the Section level and compared by Conversion Efficiency:

It takes only a glance to instantly see which Sections perform better and which worse at each store. That’s a powerful viz!

In DM1, pretty much ANY metric can be mapped on the store at ANY meta-data level. You can see visits, lingers, linger rate, avg. time, attributed conversions, exits, bounces, Associate interactions, STARs ratio, Interaction Success Rate and so much more (almost fifty metrics) – mapped to any logical level of the store; from macro-levels like Department or Floor all the way down to the smallest unit of measurement your collection system can support. Best of all, you define those levels. They aren't fixed. They're entirely custom to the way you want to map, measure and optimize your stores.

And because DM1 keeps an historical database of the layouts and meta-data over time, it provides simple, accurate and easily intelligible trending over time.

I love the store visualization capability in DM1 and I think it’s a huge advance compared to heat-maps. As an analyst, I can tell you there’s just no comparison in terms of how useful these visualizations are. They do so much more and do it so much better that it hardly seems worth comparing them to the old way of doing things. But here it is anyway:

DM1's store visualization is one powerful analytic hammer. But as good as it is, this type of store visualization doesn't solve every problem. In my next post, I'll show how DM1 uses another powerful visual paradigm for mapping and understanding the in-store funnel!

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

Driving real value with analytics is much harder than people assume. Doing it well requires solving two separate, equally thorny problems. The first – fairly obvious – problem is being able to use data to deepen your understanding of important business questions. That's what analytics is all about. The second problem is being able to use that understanding to drive business change. Effecting change is a political/operational problem that's often every bit as difficult as doing the actual analysis. Most people have a hard time understanding what the data means and are reluctant to change without that understanding. So, giving analysts tools that help describe and contextualize the data in a way that's easy to understand is a double-edged sword in the best of ways – it helps solve two problems. It helps the analyst use the data and it helps the analyst EXPLAIN the data to others more effectively. That's why having a rich, powerful, UNDERSTANDABLE set of store metrics is critical to analytic success with in-store customer tracking.

Some kinds of data are very intuitive for most of us. We all understand basic demographic categories. We understand the difference between young and old. Between men and women. We live those data points on a daily basis. But behavioral data has always been more challenging. When I first started using web analytics data, the big challenge was how to make sense of a bunch of behaviors. What did it mean that someone viewed 7 pages or spent 4.5 minutes on a Website? Well, it turned out that it didn’t mean much at all. The interesting stuff in web analytics wasn’t how many pages a visitor had consumed – it was what those pages were about. It meant something to know that a visitor to a brokerage site clicked on a page about 529 accounts. It meant they had children. It meant they were interested in 529 accounts. And depending on what 529 information they chose to consume, it might indicate they were actively comparing plans or just doing early stage research. And the more content someone consumed, the more we knew about who they were and what they cared about.

Which was what we needed to optimize the experience. To personalize. To surface the right products. With the right messages. At the right time. Knowing more about the customer was the key to making analytics actionable and finding the right way to describe the behavior with data was the key to using analytics effectively.

So when it comes to in-store customer measurement, what kind of data is meaningful? What’s descriptive? What helps analysts understand? What helps drive action?

The answer, it turns out, isn’t all that different from what works in the digital realm. Just as the key to understanding a web visit turns out to be understanding the content a visitor selected and consumed, the key to understanding a store visit turns out to be understanding the store. You have to know what the shopper looked at. What was there when they stopped and lingered. What was along the corridor that they traversed but didn’t shop. You have to know the fitting room from the cash-wrap and an endcap from an aisle and you have to know what products were there. What’s more, you have to place the data in that context.

Here’s what the data from an in-store measurement collection system looks like in its raw form, frame by frame:

Time       X    Y
04:06.0   35   60
06:50.0    9   66
09:10.0   23   74
11:02.0   18   92
11:35.0   33   98
13:15.0   28   74
14:25.0    7   81
16:16.0   41   75
19:09.0   49   62
21:03.0   45   72
23:23.0   55   83
23:58.0   54   90
24:09.0   40   86
25:05.0   15   90
27:24.0    7   79
27:45.0   43   99
28:42.0   37   97
29:25.0   45   80
32:07.0   47   75
33:05.0   16   77
35:31.0   37   65
36:08.0   34   75
36:33.0    9   73
39:16.0   35   76
40:07.0   13   97

That’s a visit to a store. A little challenging to make sense of, right?

It’s our job to translate that into a journey with the necessary context to make the data useful.

That starts by mapping the data onto the store:

By overlaying the measurement frames, we can distinguish the path the user took through the store:

With simple analysis of the frames, we can figure out where and when a customer shifted from navigating the store to actually spending time. And the first place the shopper actually spends time has special significance for understanding who they are.
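A minimal sketch of that frame analysis might look like the code below – pings quantized to grid cells, with a dwell threshold separating "navigating" from "spending time". The grid size and threshold are hypothetical, and the real pipeline is considerably more careful about noise and gaps, but the basic logic is this:

```python
# A minimal sketch of splitting raw pings into "navigating" vs. "spending time".
# The grid size and dwell threshold are hypothetical; the real pipeline is
# considerably more careful about noise and gaps.
GRID_FT = 5          # quantize pings to 5x5-foot grid cells
LINGER_SECONDS = 60  # staying in one cell at least this long counts as a linger

def to_cell(x, y):
    return (int(x // GRID_FT), int(y // GRID_FT))

def find_lingers(frames):
    """frames: time-ordered list of (seconds_since_entry, x, y) pings."""
    lingers = []
    start_t, current_cell = frames[0][0], to_cell(frames[0][1], frames[0][2])
    for t, x, y in frames[1:]:
        cell = to_cell(x, y)
        if cell != current_cell:
            dwell = t - start_t
            if dwell >= LINGER_SECONDS:
                lingers.append((current_cell, dwell))
            start_t, current_cell = t, cell
    return lingers

# The first five frames of the visit above, with mm:ss times converted to seconds.
frames = [(246, 35, 60), (410, 9, 66), (550, 23, 74), (662, 18, 92), (695, 33, 98)]
print(find_lingers(frames))   # the first linger found is the DRAW
```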

In DM1, the first shopping point is marked as the DRAW. It’s where the shopper WENT FIRST in the store:

In this case, Customer Service was the Draw – indicating that this shopping visit is a return or in-store pickup. But the visit didn’t end there.

Following the journey, we can see what else the customer was exposed to and where else they actually spent time and shopped. In DM1, we capture each place the shopper spent time as a LINGER:

Lingers tell us about opportunity and interest. These are the things the shopper cared about and might have purchased.

But not every linger is created equal. In some places, the shopper might spend significantly more time – indicating a higher level of engagement. In DM1, these locations are called out on the journey as CONSIDERS:

Having multiple levels of shopper engagement lets DM1 create a more detailed picture of the shopper and a better in-store funnel. Of course, one of the keys to understanding the in-store funnel is knowing when a shopper interacts with an Associate. That's a huge sales driver (and a huge driver – positive or negative – of customer experience). In DM1, we track the places where a shopper talked with an Associate as INTERACTIONS. They're a key part of the journey:

Of course, you also want to know when/if a customer actually purchased. We track check-outs as CONVERSIONS – and have the ability to do that regardless of whether it’s a traditional cash-wrap or a distributed checkout environment:

Since we have the whole journey, we can also track which areas a customer shopped prior to checkout and we’ve created two measures for that. One is the area shopped directly before checkout (which is called the CONVERSION DRIVER) and the other captures every area the customer lingered prior to checkout – called ATTRIBUTED CONVERSIONS.
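Here's a minimal sketch of those two attribution measures applied to a single hypothetical converting journey – the ordered list of areas the shopper lingered in before checking out:

```python
# A minimal sketch of the two attribution measures, applied to one
# hypothetical converting journey (the ordered areas the shopper lingered in).
journey_lingers = ["Customer Service", "Team Gear", "Backpacks", "Team Gear"]

conversion_driver = journey_lingers[-1]        # area shopped directly before checkout
attributed_conversions = set(journey_lingers)  # every area lingered in before checkout

print("Conversion Driver:", conversion_driver)
print("Attributed Conversions:", sorted(attributed_conversions))
```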

To use measurement effectively, you have to be able to communicate what the numbers mean. For the in-store journey, there simply isn’t a standardized way of talking about what customers did. With DM1, we’ve not only captured that data, we’ve constructed a powerful, working language (much of it borrowed from the digital realm) that describes the entire in-store funnel.

From Visits (shopper entering store), to Lingers (spending time in an area), to Consideration (deeper engagement), to Investment (Fitting Rooms, etc.), to Interactions (Associate conversations) to Conversion (checkout) along with metrics to indicate the success of each stage along the way. We’ve even created the metric language for failure points. DM1 tracks where customers Lingered and then left the store without buying (Exits) and even visits where the shopper only lingered in one location before exiting (Bounces).

Having a rich set of metrics and a powerful language for describing the customer journey may seem like utter table-stakes to folks weaned on digital analytics. But it took years for digital analytics tools to offer a mature and standardized measurement language. In-store tracking hasn’t had anything remotely similar. Most existing solutions offer two basic metrics (Visits and Dwells). That’s not enough for good analytics and it’s not a rich enough vocabulary to even begin to describe the in-store journey.

DM1 goes a long way toward fixing that problem.

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

People have struggled with this (big) data provider model but Factual feels like it’s found a real (and valuable) niche. Would love to see more of this grow since external data is a huge miss in most big data systems.

Targeted VoC is a powerful (and totally neglected) tool for personalization. Facebook’s experience is entirely relevant to ANY content producer. I don’t know if I can take credit for this, but I suggested this to folks at Facebook a couple of years back!

An interesting discussion of the problems in identifying “likely” voters and the benefits of behavioral data integration. Food for thought in the enterprise world as well where the equivalent is often possible but rarely done.