Data goes nuclear: the meaning of half-life in the world of e-commerce

While many retailers are prepared to deal with the high volume of data they can collect online today, not all are nimble enough to react quickly to urgent data—such as when a competitor lowers a price or runs out of stock on a popular product.

First came the rush to build out e-commerce channels. Then it was all about unifying the customer experience across channels. The mobile revolution followed right after, and now it’s all about amassing every piece of data that's out there, and trying to make sense of it—a.k.a. big data.

While most retailers are preparing for the typical challenges that come with big data—namely, its volume and variety—few are nimble enough to act on the insights derived from that data in a timely manner, and fewer still are preparing to become so.

One important aspect that often gets overlooked is the fact that not all data is created equal. In this age of warp-speed e-retailing, information loses value with every passing minute—hence the notion of a half-life for data in the world of e-commerce. In this context, learning about a competitor's flash sale a day later, or even a few hours later, could prove very costly. Every hour that passes translates directly into lost revenue, not to mention a growing perception among customers that your competitor is offering better value.

Online-only retailers like Amazon change prices several times a day for certain products. If you’re trying to benchmark against Amazon on a weekly or daily basis, you’ve already missed the boat. Analyzing these price changes a day later and then trying to figure out your response does you no good either. The information half-life for prices, products and promotions in this cut-throat world of e-retailing is now being measured in hours, if not minutes.

The technology to handle this type of situation is already out there. Consider the financial industry: stock-trading platforms operate on huge volumes of data—historical and current, structured and unstructured (e.g., news feeds, weather, etc.)—in real time, and process complex rules to determine trading instructions. Retailers will need to look beyond aggregating large volumes of data and mining it offline.

First, it’s important to classify data based on its half-life value, and adopt different (complementary) technology strategies to handle it. For example, competitor pricing data can be continually streamed through an in-memory database connected to a dynamic rules engine that is programmed to handle, in real time, what-if scenarios such as a competitor going out of stock, a competitor dropping prices, or a glut in your own inventory. The competitor price data, along with the resulting price recommendation, can then make its way to the big-data repository for subsequent offline analysis.
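To make the idea concrete, here is a minimal sketch of such a rules engine evaluating streamed competitor price events against in-memory state. The SKU names, thresholds, and percentage adjustments are all hypothetical, chosen only to illustrate the what-if scenarios described above (competitor out of stock, competitor undercutting, inventory glut); a production system would pull these rules from a configurable engine rather than hard-code them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PriceEvent:
    sku: str
    competitor_price: Optional[float]  # None signals the competitor is out of stock

# In-memory state: our current price and inventory per SKU (illustrative values).
our_prices = {"SKU-1": 19.99}
our_inventory = {"SKU-1": 500}
GLUT_THRESHOLD = 400  # hypothetical: above this level, we want faster sell-through

def recommend(event: PriceEvent) -> float:
    """Apply simple what-if rules to one streamed competitor price event."""
    current = our_prices[event.sku]
    if event.competitor_price is None:
        # Competitor out of stock: room to nudge our price upward.
        return round(current * 1.05, 2)
    if event.competitor_price < current:
        # Competitor undercut us: match just below their price.
        return round(event.competitor_price * 0.99, 2)
    if our_inventory[event.sku] > GLUT_THRESHOLD:
        # Inventory glut: discount to accelerate sell-through.
        return round(current * 0.95, 2)
    # No rule fired: hold the current price.
    return current
```

Each event produces a recommendation in microseconds; the event and its recommendation can then be appended to the big-data repository for offline analysis, as described above.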

Second, retailers need to adopt an event-driven architecture in order to connect the output of their analysis systems (e.g., price recommendations from the rules engine, or insights from the big-data platform) with other systems, such as e-commerce and store point-of-sale systems, in a timely, seamless manner. Pricing analysts and category managers can then focus their energies on fine-tuning pricing strategies instead of rushing to update prices manually. Several commercial and open-source options deliver these capabilities, not to mention software-as-a-service providers who can fill in certain capabilities with their pluggable offerings.
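The event-driven pattern can be sketched with a toy in-process publish/subscribe bus. The topic name and the two subscribers (an e-commerce platform and a store point-of-sale system) are illustrative stand-ins; a real deployment would use a message broker, but the shape is the same: the rules engine publishes a recommendation once, and every execution system reacts without manual intervention.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """Minimal in-process publish/subscribe bus (a stand-in for a real broker)."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every handler registered on this topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
applied = []  # records which execution systems consumed the recommendation

# Hypothetical downstream consumers: the e-commerce platform and store POS.
bus.subscribe("price.recommended",
              lambda e: applied.append(("ecommerce", e["sku"], e["price"])))
bus.subscribe("price.recommended",
              lambda e: applied.append(("store_pos", e["sku"], e["price"])))

# The rules engine publishes once; both systems pick up the new price.
bus.publish("price.recommended", {"sku": "SKU-1", "price": 18.99})
```

Because publishers and subscribers are decoupled by the topic, adding another execution system (say, a marketplace feed) is one more `subscribe` call, not a change to the rules engine.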

Retailers who (a) recognize the concept of information half-life and start laying down different information pathways (or highways) depending on the nature of the data, and (b) establish a near real-time, event-driven, closed-loop system with their execution systems, will unlock the full potential of the value buried in their data repositories.

Ugam aims to help brands and retailers improve category performance with insights and analytics solutions around assortment, pricing and product content.