Identifying quality in data-led media

by Calvin Cain

In December 2016, IBM reported that more data had been created in the past two years than in the entire history of the human race.

At the time, this universe equated to an inconceivable 4.4 zettabytes (4.4 trillion gigabytes) of digitally stored information. By 2020, just three years away, it is predicted that 1.7 megabytes of new information will be created for every person on the planet each second, expanding this data universe 10-fold.

The exponential growth of connected devices, along with advancements in data capture and storage, is largely responsible for this surge in data creation. Predictions for the number of these devices over the next five years vary widely, from 21 billion to more than 50 billion: an average of three to six devices per living human.

With this increase in audience connectivity, we’ve also seen a shift in power. Where the battle lines for customer attention were once drawn between rival TV networks or radio stations, traditional media now compete with companies such as Spotify and Facebook, or apps as simple as Flappy Bird and Candy Crush.

Since the ‘mobile revolution’ began (around 2000), our ability to multitask has improved significantly, albeit at the expense of our attention span. Research affirms that TV is still our number one source of entertainment; however, over 42% of Australians admit to browsing the internet at the same time. As a result, our average attention span has fallen from 12 seconds to just eight – one second shy of a goldfish’s.

What it means for media

Audiences are now in much greater control of what media they consume, along with when, and how they consume it. Put simply, if you are not relevant, or entertaining, you will be ignored.

Understanding and maneuvering with our audiences is therefore pivotal and, when it comes to understanding your audience, first-party data is king.

Depending on its format and depth, first-party data can unlock common traits between your audience sub-segments that are scalable and/or addressable across your media executions. In its absence, however, what other options do we have? And how do we determine their quality?

Second-party data (someone else’s first-party data) is widely perceived as the next best option for activating against audience traits. In certain instances, there may be value simply in the pre-qualification of that specific partner’s audience. However, the quality and depth of the data should be analysed to the same extent as you would a third-party source, such as a data aggregator. Factors such as the collection methodology, segmentation process, activation method and recency are all important contributors to data quality.
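One way to make this analysis repeatable is to rate each candidate source against the same checklist of factors. The sketch below is purely illustrative (the factor names, 0–5 rating scale and scoring function are my own assumptions, not an industry standard); the point is that a source missing documentation for any factor is penalised rather than given the benefit of the doubt.

```python
# Illustrative sketch: score a data source against the quality factors
# discussed above. Factor names and the 0-5 scale are hypothetical.
QUALITY_FACTORS = [
    "collection_methodology",
    "segmentation_process",
    "activation_method",
    "recency",
]

def data_quality_score(ratings):
    """Average the 0-5 ratings across all factors.
    Any factor the partner cannot document scores zero, so
    opaque sources are penalised rather than assumed adequate."""
    return sum(ratings.get(f, 0) for f in QUALITY_FACTORS) / len(QUALITY_FACTORS)

# A partner with strong activation but stale data:
partner = {
    "collection_methodology": 4,
    "segmentation_process": 3,
    "activation_method": 5,
    "recency": 2,
}
print(data_quality_score(partner))  # 3.5
```

Comparing these scores across second- and third-party options gives you a consistent, if rough, basis for the conversation with each partner.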

It is rare to find a digital partner today that doesn’t offer a selection of ‘interest’ or ‘intent’ segments. Unfortunately, it is quite common for partners to be unable, or unwilling, to clearly define the criteria a user must meet to be placed into either segment type. To counter this, you should look for user data that falls squarely within the average sales cycle of the product or service. As a general guide, the lower the involvement a transaction requires, the shorter the data segment’s lifespan should be.
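The sales-cycle rule above can be expressed as a simple recency filter. This is a minimal sketch, assuming hypothetical event records with a user ID and timestamp; the function name and data shape are illustrative, not any partner’s actual API.

```python
from datetime import datetime, timedelta

def filter_segment(events, sales_cycle_days, now):
    """Keep only users whose intent signal falls within the product's
    average sales cycle. Older signals have aged out of the segment."""
    cutoff = now - timedelta(days=sales_cycle_days)
    return {e["user_id"] for e in events if e["timestamp"] >= cutoff}

now = datetime(2017, 6, 1)
events = [
    {"user_id": "a", "timestamp": datetime(2017, 5, 25)},  # 7 days old
    {"user_id": "b", "timestamp": datetime(2017, 3, 1)},   # ~3 months old
]

# A low-involvement purchase (say, fast food) warrants a short window...
print(sorted(filter_segment(events, sales_cycle_days=14, now=now)))   # ['a']
# ...while a high-involvement one (say, a car) can tolerate a longer one.
print(sorted(filter_segment(events, sales_cycle_days=180, now=now)))  # ['a', 'b']
```

The same user qualifies for a car-buying segment long after they have aged out of a fast-food one, which is why a segment’s lifespan should be interrogated per product, not accepted as a partner-wide default.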

Targeting capabilities vary significantly across different media channels and partners, due to the indicators through which data can be appended.

These indicators are the traits that form a relationship between the data set and the media channel, with their strength depending on how closely the indicators relate to the collection methodology.

Accuracy isn’t the be-all and end-all

It would be logical to assume that the greater the transparency we can achieve, and the more factors we can verify about our data options, the more we should be willing to pay for those audiences. Instead, I suggest that the objective you are trying to achieve should be the greater point of focus.

If your product is strictly limited to a particular audience, an inaccurate data set will generate wastage. However, if the price of a more refined data set diminishes your return on investment, is accuracy really better?

If out-of-audience exposure presents no identifiable detriment to your brand, the least refined data set may actually end up being the most valuable one for your campaign.

Understanding the factors that identify how data-led audiences are created is critical for assessing what data is right for your activity. Media partners are often quick to add new data sets to their channels, but this can sometimes be at the cost of accuracy. By questioning and understanding the how, who, what and when of audience segmentation, we can make better decisions as to what data will drive the best possible outcome.

Calvin Cain is Head of Experience – Media at CHE Proximity. You can find out more about him on LinkedIn.