How AI is reshaping the legal landscape

Legal systems are struggling to keep up as Artificial Intelligence becomes commonplace.

Dr Benjamin Liu asks if the justice system is ready for Artificial Intelligence.

Artificial Intelligence has already disrupted Dr Benjamin Liu’s career once and may yet do so again.

In 2012 he was working as a lawyer providing advice on financial products and markets for international law firms and banks when he had what he calls a moment of clarity.

It had been building for some time, but it was then that he realised a fundamental balance had shifted. That year, more global trades were conducted by computers than by human traders. He realised then how fast the world was changing, and, in particular, how fast his chosen field was changing.

Benjamin is now a senior lecturer in commercial law at the University of Auckland’s Faculty of Business and Economics. He gives an example of the AI-induced conundrums he was considering as a financial lawyer.

Take the illegal practice known as spoofing. A trader who actually wants to sell shares places a large buy order, knowing the apparent demand will lift the share price. When the price jumps, he sells his shares at the increased price and then cancels the buy order. "If the regulatory authorities detect this, they will charge the trader with market manipulation. It’s a serious financial crime because you are misleading investors."
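The sequence can be sketched as a toy simulation. Everything here is invented for illustration: the `ToyMarket` class, the prices, and the crude rule that a large resting buy order lifts the price.

```python
# Toy illustration of the spoofing sequence described above.
# The market model, numbers, and price-impact rule are all hypothetical.

class ToyMarket:
    def __init__(self, price):
        self.price = price
        self.buy_orders = {}      # order_id -> size
        self._next_id = 0

    def place_buy(self, size):
        """A large resting buy order signals demand and lifts the price."""
        self._next_id += 1
        self.buy_orders[self._next_id] = size
        self.price *= 1 + 0.001 * (size / 1000)   # crude demand effect
        return self._next_id

    def cancel(self, order_id):
        del self.buy_orders[order_id]

    def sell(self, size):
        """Sell real shares at the current (inflated) price."""
        return size * self.price


market = ToyMarket(price=10.00)
spoof_id = market.place_buy(50_000)   # 1. large buy order, never meant to fill
proceeds = market.sell(1_000)         # 2. sell into the inflated price
market.cancel(spoof_id)               # 3. cancel the buy order
print(round(proceeds, 2))             # more than the 10,000 an honest sale earns
```

The regulatory question Liu raises is exactly this: if a machine-learning system discovers steps 1-3 on its own, no human ever formed the misleading intent the law looks for.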

But let’s assume a hedge fund or investment bank has designed a smart algorithm that, through machine learning, works out on its own that spoofing is one way to achieve a great trading result.

"We simply don’t know if it is market manipulation if an AI does it, and it is quite likely right now that computers are already engaging in spoofing or some other kinds of misleading market conduct," he says.

He predicts that, in the next 10 to 20 years, most dangerous, repetitive, or routine tasks will be done by robots. White-collar professionals such as lawyers, accountants, and doctors will increasingly work side by side with digital assistants.

And human decision-makers in business, government, and even on battlefields, will be helped or replaced by algorithms based on artificial intelligence. Liu says as AI permeates our lives and begins to make decisions that affect us, it inevitably throws up a tangled web of legal issues.

"The Uber fare takes into account not only the travel time and distance but also the customer demand at the relevant time in that area. For example, if you are travelling from a wealthy neighbourhood your fare is likely to be higher than for someone travelling from a poorer part of the city because the computer 'knows' you can afford it," he says.
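A fare of the kind Liu describes can be sketched as a simple formula: a base charge plus time and distance components, scaled by a demand multiplier. The function name, rates, and multiplier values below are all invented for illustration; they are not Uber's actual pricing model.

```python
# Hypothetical ride-fare model: base fare plus per-minute and per-km
# components, scaled by a demand multiplier. All rates are invented.

def estimate_fare(minutes, km, demand_multiplier,
                  base=2.50, per_minute=0.40, per_km=1.20):
    return (base + per_minute * minutes + per_km * km) * demand_multiplier

# Same 15-minute, 8 km trip, priced at quiet and high-demand times:
quiet = estimate_fare(15, 8, demand_multiplier=1.0)
surge = estimate_fare(15, 8, demand_multiplier=1.8)
print(round(quiet, 2), round(surge, 2))
```

The legal concern is the multiplier: if it is inferred from data that correlates with wealth or neighbourhood, two people taking identical trips can be quoted different prices.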

Paying a few extra dollars for a ride is one thing. But AI is also being used to make decisions in areas that seriously affect people's lives, such as credit scores, recruiting and promotion, medical care, crime prevention, and even criminal sentencing. While the benefits of such automated decision-making are obvious, it suffers from two serious problems, says Liu.

The first is non-transparency. Just as Google does not disclose how it ranks search results, AI system designers do not reveal what input data AI relies on, or which learning algorithms it uses. The reason is simple: such processes are considered trade secrets.

"A 2016 study in the United States showed that 'risk scores' – scores given by a computer program to predict the likelihood of defendants committing future crimes – were systematically biased against black people," says Liu.

"However, the program designer would not publicly disclose the calculations, arguing that they were proprietary. As a result, it is impossible for the risk scores to be legally challenged."

Black box problem

The second difficulty with automated decision-making goes deeper into how AI works. Many advanced AI applications use 'neural networks' – machine-learning algorithms based on the structure of human brains. While a neural network can produce accurate results, why it does so is often impossible to explain in terms of human logic, says Liu. This is commonly referred to as the black-box problem.
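A tiny network makes the point concrete. The two-layer model below (with hand-set weights, chosen for this example) correctly computes the exclusive-or of its inputs, yet nothing in the weights reads like a human rule such as "true when exactly one input is 1" — the logic is smeared across the arithmetic. Real networks have millions of such weights.

```python
# A minimal two-layer network that computes XOR. The weights are hand-set
# for illustration; in practice they are learned, and equally opaque.

def relu(v):
    return max(0.0, v)

def tiny_net(x1, x2):
    h1 = relu(1.0 * x1 + 1.0 * x2 + 0.0)   # hidden unit 1
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)   # hidden unit 2
    return 1.0 * h1 - 2.0 * h2             # output unit

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_net(a, b))
```

The outputs are right for every input, but asking "why did it answer 1?" yields only a sum of weighted activations, which is the black-box problem in miniature.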

In response to such issues, overseas regulators have started to regulate automated decision-making. For example, one of the key features of the European General Data Protection Regulation (GDPR) is the right to explanation. "In short, if a person is being subjected to automated decision-making, that person has the right to request 'meaningful information about the logic involved'. And, individuals have the right to opt out of automated decision-making in a wide range of situations."

Without proper oversight, an AI can be as manipulative and biased as a human, says Liu. Therefore, policymakers, lawyers, and market participants need to start thinking about a regulatory framework for AI decision-making.

"Should we set up an AI watchdog to ensure that AI applications are being used in a fair way? Should each person have the right to an explanation? The answer to this last question seems, at least to me, clear."

“Without a doubt, these technologies and the resulting social and economic changes will have a profound impact on our laws and legal systems,” says Liu.

The future legal landscape will be very different from the way it is today.

Employment law will need to be revamped. Existing law divides workers into 'employees' and 'contractors', each with different rights and responsibilities. In the future, however, an increasing number of people will participate in the 'gig' economy through companies such as Uber and Airbnb, and new laws and policies will be needed to give them appropriate protections.

The importance of the law of negligence will diminish. In the past, this law allowed consumers to seek legal redress directly from those who provided defective products or services. However, as more goods and services are based on digital technologies, consumers will find it increasingly difficult to prove fault or negligence. As a result, we are likely to see the growing use of strict liability, where legal responsibility is imposed even if there was no evidence of fault or negligence. For example, if an autonomous car causes an accident, the manufacturer will be held liable whether or not the car was defective.

The most important change, Liu adds, is that data protection law will become dominant, thanks to the pervasiveness of data in our lives. Today, more than 100 countries have established designated data protection agencies.

The Information Commissioner's Office in the United Kingdom employs more than 400 staff and deals with some 20,000 complaints each year. On the legislative front, the GDPR, which went into effect in May 2018, will affect every organisation that deals with data. Indeed, any New Zealand company conducting business online will need to ensure that it is GDPR compliant.

As to the fate of future lawyers, Liu’s view is cautionary but also slightly optimistic: "Lawyers will not be replaced by robots in the near future, and perhaps they never will be. However, one thing is certain: lawyers who do not understand and apply technologies will be replaced by those who do."

Story by Gilbert Wong

Researcher portrait by Billy Wong

The Challenge is a continuing series from the University of Auckland about
how our researchers are helping to tackle some of the world's biggest challenges.