Five things we’ve learned about the future of digital rights

Emerging technologies are transforming societies at their core. Endless news stories predicting that AI, automation and big data will disrupt our lives can leave us feeling like bystanders in the march towards techtopia. Yet while change is inevitable, what this change looks like isn't. Now is the time to build the future we want. As media theorist Marshall McLuhan said: "We shape our tools and afterwards our tools shape us". We'd better get it right.

To understand how civil society can better face these challenges, we gathered leading experts at the Stockholm Internet Forum (SIF) to debate emerging issues including cybersecurity, control of personal data and algorithmic transparency.

Here are some key takeaways:

1. When the internet threatens government, government threatens the internet

Citizens need access to the internet if they are to reap the benefits of new technologies. Yet, in recent years there has been a depressing trend of internet shutdowns, with 50 cases in 2016 alone. Gbénga Sèsan from the Paradigm Initiative (Nigeria) explained that governments are increasingly using issues like hate speech and fake news as excuses to shut down the entire internet. But many question whether these motives are a cover for governments simply trying to exert an unlawful amount of control. Silencing everyone is not the answer to the challenges of hate speech and misinformation. Doing so not only tramples civil liberties but also damages the economy in affected areas.

2. Governments are hijacking online debate

Controlling access is just one way leaders control what is said and done online. Increasingly, some governments are going beyond simply participating in online debate and are trying to control it using unethical means. In the Philippines, Lisa Garcia from the Foundation for Media Alternatives explained, the government enlisted an 'army of keyboard warriors' to spread official messages and harass critics and journalists during the country's election last year. Lisa emphasises the need for enhanced digital media literacy, particularly for those coming online for the first time, to enable people to critically judge online media and distinguish genuine journalism from propaganda.

3. If data is the new oil, we are the environmentalists!

We’re finding digital technologies in more and more places, not just in our laptops and smartphones, but in our cars and fridges, factories and cities, and monitoring our offices and classrooms. As these technologies become more interconnected, they surface complex ethical questions about who has access to our data, what they use it for and how their decisions affect our lives. Mozilla’s Solana Larsen says this complexity can stun us into inaction. Just as groups such as Greenpeace have helped the public understand an issue as complex as climate change, we must break down and better explain personal data issues, so people can understand and take control of their digital lives.

4. Innovation without safeguards can deepen exclusion

Malavika Jayaram, Executive Director of Digital Asia Hub, warned that as countries in the Global South roll out new technologies, they often copy the worst of Western practices, without adopting the best of Western safeguards. Focusing on Aadhaar, India’s national biometric ID system, Malavika illustrated that the system is ill-equipped to serve much of the country’s diverse population. In one example, the system could not register agricultural workers because it failed to read their worn down fingerprints, leaving the workers without access to key services. The routine exclusion of certain groups is inevitable when they are not consulted in the development of technology. Low income countries, where people are arguably most exposed to technological change, must develop institutions to protect citizens and give them a voice so they benefit, rather than suffer, from innovation.

5. Consumer choice demands transparency

Decisions about our lives are shifting from humans to algorithms. If software decides who gets a bank loan, who gets accepted to university and whether a suspect is granted parole, we need to understand how it works. Technical concerns become human rights concerns. Algorithms embody the biases of their programmers and of the data they are fed, explained Renato Rocha Souza, Professor at Universidade Federal de Minas Gerais (Brazil). And because algorithms today are black boxes, concealed from audit and critique, consumers are left with a choice: reject technologies, or accept them with blind faith. Or, worse still, use them without ever knowing it. Renato argues that we must fight to ensure that these algorithms are made open, and that a civil rights framework is extended to algorithms and AI.

The future

The participants were clear that civil society organisations need to be less reactive, defining what they want to see from technology rather than reacting to bad technologies once they're already in use. This isn't easy, as most organisations can't match the engineering skills of big tech companies. However, we do have a role to play in bringing parties from different disciplines together to understand the impact technologies have on societies, and to help global companies think about the local impact their products have.

Technology must not be an end in itself, but a means to help us build more empowered, healthy, just societies. The Web Foundation works for a world where everyone has the same rights and opportunities online, and we're fighting for a future where everyone benefits from technological innovation. To get there, we need to collaborate to build new strategies, frameworks and skills to ensure civil society plays a strong role in protecting citizens' rights online and beyond.

With thanks to Lisa Garcia, Malavika Jayaram, Solana Larsen, Gbénga Sèsan and Renato Rocha Souza for sharing their insights, and to Renata Avila for chairing the panel. The full discussion is available to watch on-demand.