In the age of targeted advertising and algorithmic discrimination, digitalisation has come to pose significant risks to our most fundamental civil liberties. In the words of Sir Tim Berners-Lee, the challenge is therefore not only to get the other half of the world connected, but also to ensure that the rest of the world wants to connect to the web we have today. Could a set of universal digital rights be the only viable way out of this 21st century dilemma?

Released days prior to the Cambridge Analytica revelations, Sir Tim Berners-Lee’s annual letter took on a particular resonance this year. As the inventor of the web celebrated the 29th anniversary of its creation, the fact that personal data could be leveraged not only for commercial but also political purposes became clear to millions. In the words of Sir Tim, the power concentrated among a handful of dominant platform companies had made it possible to weaponise the web at scale, and we could not count on big tech to stop it.

Harnessing the benefits of digitalisation, while managing the risks it poses to our most fundamental civil liberties, has come to constitute one of the greatest regulatory challenges of our time. The Forum session Universal Digital Rights & Digital Inclusion therefore explored pathways to forestall a Black Mirror-like dystopia and ensure that digitalisation remains a force for good. Indeed, Rebecca MacKinnon, Director of Ranking Digital Rights, stressed from the start that digital rights are not a set of special rights, but rather an extension of human rights into the digital realm. And in this regard, concerns appear to have grown ever more pertinent.

Waking up to digital woes

Data-driven innovation forms a key pillar of 21st century growth, and is now surpassing physical trade as the connective tissue of the global economy. But for users of digital services, the monetisation of their personal data has often become the hefty price to pay for free access. In the words of Julia Hobsbawm, Author of Fully Connected: Surviving and Thriving in an Age of Overload, it should not be forgotten that databases are really “people-bases”. And Philip J. Jennings, General Secretary of UNI Global Union, stressed that when an entity possesses the digital profiles of millions of people – profiles which can be used to manipulate their judgements and impulses – we have something to worry about.

Important initiatives, such as the Ranking Digital Rights Index, foster greater transparency in the practices of leading players. When assessing the policies and practices of the world’s most powerful internet, mobile and telecommunications companies, Ms. MacKinnon’s index found, however, that the best performers only scored a “D”. Yet, it would be a mistake to draw the conclusion that human rights are doomed to be sacrificed on the altar of technological progress. On the contrary, panelists gave grounds for hope, suggesting important pathways toward a digital future more respectful of civil liberties.

“Move fast and break things”: waking up to the consequences

While digital technologies reshape society, it should first be noted that technologies can themselves be shaped to steer innovation in directions beneficial to society. Ms. Hobsbawm diagnosed an inherent set of contradictions in our current internet era, where something structurally democratic and free has nevertheless become a monopolistic vehicle. However, the author of Fully Connected stressed that technology should not be demonised.

To curtail human rights abuses in the digital realm, it is necessary to first look at the culprits and – sometimes inadvertent – enablers. One does not need to believe in a Silicon Valley-run conspiracy to acknowledge that humans make mistakes. Whilst the construction of a road or building is subject to environmental impact assessments, no such procedures are in place when it comes to digital infrastructures. As Ms. MacKinnon observed, entrepreneurs and CEOs often start out with good intentions, but fail to anticipate the potential abuses of the services they provide. Ms. Hobsbawm concurred, arguing that Mark Zuckerberg was probably sincere when he claimed he did not know how powerful Facebook’s advertising was going to be. Yet, now that this risk is better understood, it is clear that complacency is no longer an option. In the words of Ms. MacKinnon, we may have gotten here without thinking, but we can now do something about it.

Tech remedies to digital woes

Part of the solution may lie in the design of our information technology services. Targeted advertising – and the business models and data collection processes built around it – has emerged as the primary fuel of our digital economies. In the process, it has however enabled private interests and malicious actors to surreptitiously shape our behaviours on an unprecedented scale. For our information ecosystem to remain compatible with the kind of fair and democratic society citizens want, moving our business models away from targeted advertising may therefore prove indispensable. Is such a scenario realistic? Sir Tim himself appears to think so, and has already started developing products to “re-decentralise” the Web by making use of peer-to-peer connectivity and blockchain technology.

Whether such solutions manage to take off in the face of powerful network effects remains to be seen. As the Forum explored, there may also be instances, notably in healthcare, where our data can form part of a public good and should thus be harnessed in a respectful manner for the benefit of society as a whole.

Thankfully, other technological solutions more suited to the shorter term have already emerged. The panelist Rand Hindi, who founded Snips, a company developing AI-powered voice assistants, contends that the growing demand for privacy makes it possible for newly created companies to build their business models around technologies guaranteeing privacy-by-design. Whilst acknowledging a trade-off between convenience and privacy in digital services, the entrepreneur argues that it would be a mistake to believe that a choice must be made between privacy and efficiency. It is a point he strives to prove through his own company, which has developed ways to train its algorithms on synthetic data, thereby avoiding the need to collect users’ private data to develop powerful AI software.

Toward a cultural shift?

Beyond privacy-by-design, the ability of technological fixes to remedy digital woes may ultimately depend on the capacity of innovators to embrace “ethics-by-design”. Influential voices such as Azeem Azhar, the founder of Exponential View, note that digital technologies wield unprecedented influence over societal changes due to their scale and speed. As a result, innovators can no longer afford to simply apologise for the unintended consequences of their creations, and truly need to become conscious of the potential effects of the services they design. In the words of Ms. Hobsbawm, there is a dire need for a cultural shift. A cultural shift which, fortunately, may well be underway.

The General Data Protection Regulation (GDPR) constituted a tremendous leap forward in giving individuals control over their personal data. And whilst it remains to be seen how smoothly some of its provisions intersect with the U.S. CLOUD Act, its role as a driving force for change around the world should not be underestimated. Indeed, a number of countries acknowledge it as an important source of inspiration for updating their own national privacy laws, while leading tech actors such as Apple and Facebook have themselves come to call for EU-style privacy laws in the United States.

Regulating the digital world: the journey that awaits

For all its merits, the GDPR remains a milestone, not a finish line, within the regulatory framework needed for our increasingly digital world. As Sandra Wachter and Brent Mittelstadt of the Oxford Internet Institute note, the greatest risks of Artificial Intelligence and Big Data analytics stem not from how private information is used, but from the inferences drawn about us from the collected data. Inferences which may help predict flu outbreaks or Alzheimer’s disease from search engine interactions, but which also impact our private lives, identities, reputations and self-determination. Yet, the researchers find that current data protection laws in Europe fail to protect data subjects from what may constitute the greatest risks in terms of privacy and discrimination.

Whether by reproducing the unconscious biases of their developers or learning particular social preferences from biased data, decision-making algorithms often reflect and amplify pre-existing real-life discrimination under a veil of objectivity. From predictive-justice tools disproportionately flagging black defendants as future criminals to the “digital poorhouses” documented by Virginia Eubanks in Automating Inequality, technology is too often used to profile, police and punish the populations most in need of support. Ms. Eubanks further warns that the quote “the future is already here – it’s just not very evenly distributed” may well be true, but not for the reasons its author William Gibson envisioned: rather than the wealthiest gaining access to the latest technologies first, she cautions that our digital future may well resemble what is already being tried out on marginalised communities, who have little choice but to give away their privacy and accept the rule of algorithms biased against them if they are to access indispensable resources.

The stakes are high, and in the words of Sir Tim, the challenge is therefore not only to get the other half of the world connected, but also to ensure that the rest of the world wants to connect to the web we have today. To this end, the web’s inventor recently launched a call for an online “Magna Carta” to be supported by public institutions, government officials and corporations alike. For all its pitfalls, connectivity has indeed become an essential dimension of modern life. To be offline today is to be excluded from opportunities to learn and earn, to access valuable services and to participate in democratic debate. And whilst digitalisation may exacerbate some existing inequalities, so would the perpetuation of the digital divide. For the web to remain a public good which connects and serves humanity, a set of universal digital rights may well be the only viable way out of this 21st century dilemma.

Watch the Universal Digital Rights & Digital Inclusion webcast again and see photos from the session

3 Comments

Excellent article. Well presented, good information. Thank you. As the owner of a medium-large website, my experience is that there is a missed angle, though. In order to get new rules accepted, citizens must realise what the issue is. By and large, my website members don't. They are not a representative group, but their attitude is mirrored in other anecdotes. "Aw, I can't give up Facebook; I am having too much fun with it; it can't be that bad; anyway, I have nothing to hide". The OECD is well placed to support the efforts of its member governments to wake people up to the problem.

Many thanks for your comment, Peter. You are absolutely right, much more could be said on the topic but, as the saying goes, “he that too much embraceth, holds little”! Having said this, you do make a very important point. Digital exposure should not be mistaken for digital wisdom, and there is a growing realisation that even “digital natives” are often really “digital naive”. Fostering digital literacy and critical thinking is a key area of work for the OECD, and we very much welcome your support in this regard. In light of your experience as a website owner, how would you go about raising awareness of these issues?

You probably know more about how to raise awareness than I do, but I'll take the bait anyway ;-)

An appropriate medium and an appropriate message. The medium is obvious: the internet. That's where the target audience is. It is easily supported by paper publicity with a QR code.

I think the message should be fun, playful, yet scary. I would argue that showing people from the target group as victims would work well. A very good model is here:

The slightly scary figure of the "mind reader" should not be connected to a nationality, though. Make it a space alien. That way, he can become an icon every OECD member-state can use. The background figures in balaclavas are just perfect. The message is: this is evil and it can happen to you. It can't be laughed off.