The new digital divide is between people who opt out of algorithms and people who don’t

By Anjana Susarla

Apr 17, 2019

Every aspect of life can be guided by artificial intelligence algorithms -- from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to obtain insights from their gargantuan troves of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy advertisers use to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civil society. An October 2018 study suggested that people demonstrate “algorithm appreciation”: they rely more heavily on advice when they believe it comes from an algorithm than from a human.

In the past, technology experts have worried about a “digital divide” between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills.

But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives?

The savvier users are navigating away from devices and becoming aware of how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.

AI algorithms take in data, fit it to a mathematical model and put out a prediction, ranging from what songs you might enjoy to how many years someone should spend in jail. These models are developed and tweaked based on past data and the success of previous models. Most people -- sometimes even the algorithm designers themselves -- do not really know what goes on inside the model.
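The take-in-data, fit, predict loop described above can be sketched in a few lines. This is a deliberately simplified toy: the data and the listening-hours-to-rating scenario are hypothetical, and real recommendation or sentencing models use far more features and far more opaque model families than a straight line.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to past data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, drive the slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical past data: hours a user spent listening to a genre,
# and the rating they later gave an album in that genre.
hours = [1, 2, 3, 4, 5]
ratings = [2.1, 2.9, 4.2, 4.8, 6.1]

a, b = fit_line(hours, ratings)

def predict(hours_listened):
    """The 'algorithm' the user sees: a score from past behavior."""
    return a * hours_listened + b
```

Even in this transparent two-parameter case, the user only sees the prediction, not the fitted coefficients or the data behind them, which is the asymmetry the rest of this article is about.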

Studies have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable and transparent, as well as interpretable, meaning that they should arrive at their decisions through processes that humans can understand and trust.

What effect will transparency have? In one study, students were graded by an algorithm and offered different levels of explanation about how their peers’ scores were adjusted to arrive at a final grade. Students who received more transparent explanations actually trusted the algorithm less. This, again, suggests a digital divide: Algorithmic awareness does not lead to more confidence in the system.

Transparency is not a panacea, either. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

For example, in 2014, Ben Bernanke, the former chair of the Federal Reserve, was initially denied a mortgage refinance by an automated system. Most individuals who are applying for such a mortgage refinance would not understand how algorithms might determine their creditworthiness.

Opting out of the new information ecosystem

While algorithms influence so much of people’s lives, only a tiny fraction of people are sophisticated enough to fully engage with how algorithms affect them.

There are few statistics on how many people are algorithm-aware. Studies have found evidence of algorithmic anxiety, which points to a deep imbalance of power between the platforms that deploy algorithms and the users who depend on them.

A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

A November 2018 report from the Pew Research Center found that a broad majority of the public had significant concerns about the use of algorithms for particular purposes. It found that 66% thought it would not be fair for algorithms to calculate personal finance scores, while 57% said the same about automated resume screening.

A small fraction of individuals exercise some control over how algorithms use their personal data. For example, the Hu-Manity platform gives users the option to control how much of their data is collected. Online encyclopedia Everipedia offers users the ability to be a stakeholder in the process of curation, which means that users can also control how information is aggregated and presented to them.

However, the vast majority of platforms provide their end users neither such flexibility nor the right to choose how the algorithm uses their preferences in curating their news feed or recommending content. Where such options do exist, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle.

Opting out from algorithmic curation is a luxury – and could one day be a symbol of affluence available to only a select few. The question is then what the measurable harms will be for those on the wrong side of the digital divide.