My algorithm doesn’t fit

When technology fails to consider the human factor, regulators must protect citizens and preserve society’s values

When you stand over two meters tall, you get used to things not fitting. Off-the-peg clothes? Not typically. Desks in the office? Rarely. And airline seats? Almost never.

Lately, though, it’s not just the physical things that don’t fit. Apparently, because I don’t borrow money and I pay off my cards every month, my credit score is lower than the average university student’s. Vehicle insurance premiums vary depending on how much time I spend at work and which email address I apply from. And buying two single airfares can be cheaper than buying a return ticket…and the list goes on.

While these may be interpreted as annoying and trivial problems that we can learn to overcome by gaming the system, in practice they are evidence that algorithms are everywhere. That’s because massive amounts of data about people and their habits are now collected and analyzed, resulting in algorithms becoming a part of almost every interaction we humans have with technology. In fact, your reading this article right now is probably the result of an algorithm.

Algorithms are everywhere

Google owes its dominance of search to a unique algorithm. Facebook uses algorithms to decide what news is fed to your page. Algorithms tell companies who should see their online advertising, let politicians know which voters are undecided, and guide judges when sentencing criminals.

Data fuels algorithms and it’s assumed that more data can lead to more accurate algorithms. But more data can also involve probing and influencing people’s lives in ways that raise privacy and human rights concerns. In other words, an issue for regulators.

Do algorithms reflect the values in society or those of their creator?

Organizations that use algorithms see them simply as a more efficient way to bring products and services closer to their target markets. However, as with the resurgence of vinyl records and paper books, the way humans interact with algorithms isn’t always simple or predictable.

This is a critical consideration often overlooked during conversations about the impact of algorithms, artificial intelligence (AI), and disruptive technologies—that is, how does human behavior disrupt the technology? Consider how studies have shown that algorithms used in US criminal cases can be racially biased. Or how algorithms were most likely used to target specific voters with specific “news” in the 2016 US election, with Russian interference now being investigated as the source of that news.

How do we fit such considerations into new regulatory models and ensure that there is transparency, fairness, and equality in the way that algorithms, robotics, AI, and machine learning deliver services in a diverse society? Should there be a difference in service if I use my corporate email identity over my Hotmail account? What if I don’t have a corporate email?
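To make the email-identity question concrete, here is a purely hypothetical sketch of how a pricing model could quietly treat email domain and working hours as risk proxies. Every name, domain, and weight below is invented for illustration; no real insurer’s model is implied.

```python
# Hypothetical illustration: a toy insurance-quote function that treats
# email domain and time at work as pricing features. All domains and
# weights are invented for this sketch.

CORPORATE_DOMAINS = {"bigcorp.com", "lawfirm.example"}

def quote_premium(base: float, email: str, hours_at_work: float) -> float:
    """Return a premium quote that quietly varies with proxy features."""
    domain = email.split("@")[-1].lower()
    multiplier = 1.0
    # A "corporate" address is treated as a proxy for stability...
    if domain in CORPORATE_DOMAINS:
        multiplier -= 0.10
    # ...and long hours at work as a proxy for lower risk.
    if hours_at_work > 45:
        multiplier -= 0.05
    return round(base * multiplier, 2)

# Two otherwise-identical applicants get different prices purely
# because of which inbox they applied from.
print(quote_premium(1000, "alex@bigcorp.com", 50))   # 850.0
print(quote_premium(1000, "alex@hotmail.com", 50))   # 950.0
```

The point of the sketch is that neither feature looks discriminatory in isolation, yet the model delivers a different service depending on which identity the applicant happens to hold.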

What this means for the regulator

Most people assume that data use, like justice, is meant to be blind and objective. But some regulators are already thinking beyond this assumption. Article 22 in the EU’s GDPR states that individuals have “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” In other words, if someone doesn’t like what the machine says, they can appeal and get a second opinion, this time from a human.
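The Article 22 “second opinion” can be pictured as a human-in-the-loop hook on an otherwise automated decision. The sketch below is a hypothetical illustration under invented assumptions (the scoring rule, threshold, and data shape are all made up); it only shows the routing pattern, not any real compliance implementation.

```python
# Hypothetical sketch of an Article 22-style "human second opinion"
# wired into an automated decision. Scoring rule and threshold invented.

from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    decided_by: str   # "machine" or "human"

def automated_score(income: float, debt: float) -> float:
    return income / (debt + 1.0)   # toy scoring rule, not a real model

def decide(income: float, debt: float, appeal: bool = False,
           human_review=None) -> Decision:
    """Approve automatically, but route appeals to a human reviewer."""
    if appeal and human_review is not None:
        # The data subject has demanded a human in the loop.
        return Decision(approved=human_review(income, debt),
                        decided_by="human")
    return Decision(approved=automated_score(income, debt) >= 2.0,
                    decided_by="machine")

# The machine rejects; on appeal, a human reviewer can overturn it.
first = decide(30_000, 20_000)
second = decide(30_000, 20_000, appeal=True,
                human_review=lambda income, debt: True)
print(first.approved, first.decided_by)    # False machine
print(second.approved, second.decided_by)  # True human
```

The design choice worth noting is that the appeal path bypasses the model entirely rather than merely re-running it, which is what “a decision not based solely on automated processing” implies.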

As more and more organizations rely on data gathering and algorithms to help them make decisions, more inequality and bias are likely to be exposed, some with serious consequences for people as they interact with financial services, health care, government, employers, or even the justice system. I may find it amusing when Netflix suggests a film based on my spouse’s likes that I would find painful to sit through, but it is often no laughing matter when an algorithm fails to read the nuances of human behavior.

To reiterate a key theme from my previous posts, regulators and technology companies must work together to help address these problems. The new GDPR framework is an excellent example of a debate around the use of data that needs to be extended to the use of algorithms. We need to be in front of this issue through honest dialogue between businesses, citizens, and regulators alike. Because, as Alibaba founder and Executive Chairman Jack Ma noted at Davos, “The computer will always be smarter than you are; they never forget, they never get angry. But computers can never be as wise.”

If at first you don’t find a balanced regulatory model, try, try again

Much of the history of regulation is one of reaction rather than proaction. An example: in the early days of the automobile, a spate of pedestrian deaths prompted a campaign that eventually led to the first rules against jaywalking, ensuring that, between intersections at least, the roads belonged first and foremost to cars.

Other examples abound. From health care to banking, from the military to education, regulations are almost always the product of something gone wrong (or the anticipation thereof) and the government’s response to it.

As a model for bringing in regulations to help keep people and their assets safe, this one has been largely effective. However, in the future, regulators may no longer be able to rely on this reactive approach, especially as faster, more scalable technologies create leaner, more agile business ecosystems, making it more difficult for regulators to keep up.

Playing catch up

Regulators often face the challenge of catching up to disruptive new businesses. Uber and Airbnb are prominent examples. In their relatively short lives, they have had major impacts on the transit and hospitality industries in cities around the world. The speed of their entry into these markets has challenged regulators who had already established frameworks with more traditional providers of transit (i.e., taxis) and hospitality (i.e., hotels).

Regulators have scrambled to respond—and sometimes that response has been severe. In Copenhagen, regulators mandated that taxis have seat sensors, video surveillance, and taxi meters, all regular features of traditional taxis but less common in the cars of Uber drivers. Uber responded by pulling out of the market. While Danish taxi drivers are pleased, Uber drivers and their customers miss out.

A more nuanced approach

A more effective regulatory model may emerge when innovators and regulators find a balance. In Portugal, for example, an innovative compromise was struck: Uber agreed to use the city’s shared mobility platform in return for cooperation on licensing. Perhaps coincidentally (or perhaps not), Uber recently announced that Lisbon would be the site of its Center of Excellence in Europe, creating jobs and revenue for the city.

Blunt instruments may not be necessary if different stakeholders—tax authorities, zoning boards, by-law enforcement—work together to envision and respond to disruptive change when it comes. In fact, some agencies are already taking a pro-active approach by issuing statements that anticipate where new technologies may run up against existing regulations. This will help guide entrepreneurs as they contemplate entering a marketplace.

You can’t predict the future, but best to be ready for it

Anticipating all the variables of a disruptive technology is about as easy as predicting the future. That is to say, not very easy at all. However, regulators can help ease the process by thinking more dynamically and maintaining open and frank dialogue with entrepreneurial tech companies.

The features that gave these disruptive technologies such massive impact—speed and scalability—will be the features of new technologies for years to come, especially as the intersection of artificial intelligence, advanced analytics, and human behavior becomes more prominent in our day-to-day lives. It is vital that regulators and entrepreneurs open up the channels of communication now so that both sides are ready for the future and whatever it brings.

Keep calm and regulate: How disruptive technologies are disrupting regulators

Driverless automobiles—we’re all excited for their arrival and the day when a long drive means catching up on work or taking a nap. Not surprisingly, auto makers and technology companies are excited, too, and are well down the path to presenting a viable driverless vehicle.

But before you start making a list of television series to binge watch in your car, know this: the excitement is very likely premature—not because of the technology, but because regulators are still trying to fully understand the implications of driverless cars and struggling to define new ways to address these innovations.

This is a problem—not just for driverless vehicles, but for all business models that depend on the cooperation of regulators. Virtually everything—from artificial intelligence and 3D printing, to sharing-economy services such as Airbnb and Uber, and even to applications we’ve yet to imagine—will need to bridge this gap.

Too fast, too soon

The disconnect between regulators and tech innovators is the result of a few things. One is speed. From development to implementation, technology helps products grow and hit markets at incredible speeds, often too fast for regulations to keep up.

Another is a lack of constructive dialogue between the two parties. Innovators tend to talk to other innovators, which is great for innovation. It’s not so great, though, when those products have to interact with people and societies—the very same people governments and regulators are tasked with protecting.

Driverless cars might be the perfect illustration of the regulatory disconnect. On the one hand, these vehicles will have four wheels and engines, and in that sense existing rules for automobile regulation—from emissions and safety to traffic laws—still apply. What creates challenges for regulators is what’s different about driverless cars.

For example: before they are ready for the road, traditional cars must be thoroughly tested. Do the brakes, steering, lights, etc. work properly? But with driverless vehicles, software will do the braking and steering and lighting. As yet, there is no testing for this outside the technology company itself.

Driverless vehicles will also have to satisfy regulators when it comes to interacting with cars driven by human beings. In some countries, a stop sign is merely a suggestion: drivers slow down but usually drive straight through. Will the software be able to adjust to these different driving cultures? Complications like these explain why a manager from one large North American city recently told me that, while manufacturers see driverless cars on the road in five years, he can’t see regulations being in place for at least ten or more.

Get talking

Early dialogue addressing the concerns of regulators before problems arise will be critical to the future of regulation. In some ways, none of this is new. Regulators have been dealing with disruptive technologies for decades. What is new is the sheer speed and scalability. Today’s companies grow at extraordinary speed while regulators work at the same pace as always.

So the question is: how do regulators and innovators work together to close the gap? Over the next few months in this space, we’ll examine this issue more closely and look at a few solutions. We’ll explore some existing models and how regulators might adapt them for a future marked by disruption. We’ll show how regulators are becoming more networked. We’ll delve into the ongoing problem of human nature interacting with technology. And finally, we’ll look at how some jurisdictions are using regulations as a competitive advantage.

The thrill of new technology is undeniable – the world shares a common interest in seeing how these innovations will interact with and benefit communities across the globe. As technology drives forward, it’s critical to pause and consider the enormous responsibility government has in ensuring the safety and inclusivity of its citizens. Elon Musk’s outspoken position on AI highlights the need for regulators to stay focused on the long game—because while the role of the innovators is to disrupt, the role of the regulator is to find the balance between innovation and social responsibility.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”) does not provide services to clients. Please see www.deloitte.com/about to learn more about our global network of member firms.