WFA MD: Decision time on big data

Regulators worldwide are setting new rules to protect consumers' interests in the age of big data. We can't simply reapply old solutions, says Stephan Loerke.

It's not often you can truly say that we are at a tipping point. But in the case of privacy legislation and the use of big data, that's where we are.

Regulators in the European Union, the US, Singapore and Australia are currently reviewing privacy legislation across the board. This is mostly driven by the belief that existing rules are no longer suited to privacy needs in the digital age. The outcome of this work will determine the future of the digital economy – and with it how marketers can connect with their consumers.

It is true that data flows have multiplied in size and intensity to unprecedented levels. The data market is already big business. Figures from Boston Consulting Group show that the use of personal data could be worth as much as €1tn to the European economy – 8% of GDP for the region. If anything, this figure will continue to grow. Data is the lifeblood of the digital economy.

It is right that privacy legislation should be updated to reflect this new reality. But it is crucial that this is done in a way that truly increases consumer trust in the safety and reliability of using digital technology without needlessly undermining the digital economy.

At the heart of this conundrum is the question of what types of data should be covered by the new rules. The approach that pressure groups and some regulators (such as the European Commission) are taking risks leading us down the wrong path. An IP address or a cookie contains data that relates to a specific user, but it cannot be subject to the same rules that today apply to data about someone's medical history or sexual orientation. That would be totally disproportionate: cutting off the flow of data that feeds the digital economy with marginal privacy benefits at best.

The temptation to treat all data that somehow relates to a person in the same way is driven by the misguided belief that online service providers and digital marketers are actually interested in truly personal data. This is not the case: what matters to the digital marketer is not what the user is called, what his address is or what he looks like.

What matters to the digital marketer – and to the functioning of the digital economy as we know it – is to know what a user cares for and what he doesn't, so that he can be engaged with in a manner that is relevant, as opposed to intrusive or annoying. After all, the ability to cut down on the famous “wasted half” of marketing spend is what the promise of digital marketing is all about.

Digital marketing and a whole range of digital services rely to a large extent on anonymised and “pseudonymous” data: the kind of data that relates to an internet user, but doesn't tell you his name or street address. You would have to go to great lengths and expense to find this out from online identifiers – if at all possible.

When regulating the flow of this kind of data, proportionality is key. While recognising the fundamental right to privacy, we must adopt a risk-based approach: what is the risk that an individual is identified as such on the basis of this kind of data, given that this is not in the interest of those collecting the data? What is the actual risk of harm to the individual on this basis? What, on balance, is the value added of restricting these data flows in terms of enhanced consumer protection compared with the impact this would have on the digital economy?

If we apply this risk-based approach to regulatory proposals currently on the table, it becomes clear that we need to enshrine two key principles.

First, not all data should be treated as equal – anonymised and pseudonymous data should not be treated in the same way as truly personal or even sensitive data. Cookies, for example, usually don't contain much information beyond the URL of the website that created the cookie, the duration of its validity, and a random number. By and large, they therefore cannot be used to reveal your identity or personally identifying information.

Second, the type of user “consent” that is required should reflect the type of data in question: explicit, prior consent is certainly desirable when a user is asked to disclose his medical records, say. But it makes no sense to ask for the same type of consent for a cookie. The user will be put off (research shows that less than a quarter of people will approve cookies if asked for prior consent), and since cookies are essential to the functioning of the web as we know it, the user experience will suffer tremendously.

What matters regarding cookies and other online identifiers is informed consumer choice. This is the approach industry has taken when it comes to interest-based advertising, also known as Online Behavioural Advertising (OBA). Under the American and European self-regulatory systems, these ads carry a standardised icon, which alerts users to the fact that this is OBA and provides them with a simple mechanism to exercise control (www.aboutads.info/choices and www.youronlinechoices.eu). Here the user can learn more about OBA and choose whether he wants to continue receiving this kind of targeted advertising or not. Industry is now rolling this system out to other markets.

As work continues on defining a global “Do Not Track” standard for web browsers, it is becoming clear that such a DNT mechanism – if done right – could also be part of the options allowing consumers to exercise informed choices.

Brand owners have a central stake in getting this right. Hyper-transparency in the digital age means that brands are easily held to account. Consumer trust is at stake with every cookie that is dropped. And without consumer trust, a brand is not worth an awful lot.

For brands it is therefore crucial to have data protection laws in place that effectively protect consumers from genuine privacy risks and reassure them that adequate safeguards are in place. We need to achieve this through a calibrated risk-based approach, not the simplistic “one size fits all” rules that are being put forward by pressure groups and some regulators.

The danger is that if we get it wrong, not only will consumers lack effective protections, but online advertising will be thrown back a decade or more. This would not be good news for anyone. According to a recent McKinsey study, advertising has fuelled about 15% of growth in GDP in the major G20 economies over the past decade. Interactive advertising has contributed disproportionately to that growth, and is responsible for $300 billion of economic activity in the US alone. For every euro spent on an online ad, consumers get three euros' worth of services.

Hampering the digital economy, a sector with almost unparalleled growth potential, in the name of highly questionable privacy benefits is not a good idea. What brands and consumers alike need is a legal framework that ensures people can trust that their privacy is protected, empowers informed choice and does not needlessly undermine the vast benefits of the digital economy. Now more than ever, we all need to help make sure that regulators worldwide get this right.