We are creating a society where all-seeing intelligent computer code can punish every wrongdoer – but do we want it?

JOHN GASS was certain he'd done nothing wrong. Yet there it was, in black and white: the Massachusetts Registry of Motor Vehicles had revoked his driving licence.

It took him nearly two weeks, numerous calls to the registry and a court hearing to put things right. For the 41-year-old it was a nightmare, threatening his livelihood as a professional driver. For the registry it was clear: Gass had committed fraud by applying for more than one driver's licence – caught thanks to a facial-recognition algorithm.

In Massachusetts, and most other US states, the headshots on millions of licences are scanned routinely to spot criminals, underage drivers, people using fake names, and those suspended from driving. Yet in Gass's case, originally reported by the Boston Globe, the computer had made a mistake. It was policing-by-algorithm gone wrong.

Over the past few years, law enforcement agencies have begun replacing human police officers with efficient, all-seeing algorithms. These watch for crimes using ubiquitous sensors, cameras, facial-recognition software and intelligent computerised analysis. From traffic offences to theft, increasingly it is an algorithm keeping watch: alerting police, or even dispatching punishment with no human oversight whatsoever.

Advocates argue that automated policing cuts costs, frees up resources and ensures wrongdoers do not escape justice. Yet many lawyers and computer scientists warn that we may not want to live in the algorithmically enforced world we're headed for. Not only does it do away with key principles of civilised societies – such as discretion – it could also force us to change our behaviour in undesirable ways.

While the sophistication of automated policing has accelerated in the last few years, its roots can be traced back decades. It began with traffic policing. The first automatic licence-plate recognition cameras appeared in the mid-1970s in the UK. By 1979, trials were running on the country's longest road – the A1 – and at the Dartford Tunnel in London. Two years later, British police made their first arrest based on a plate-recognition camera spotting a stolen car.

Cameras to monitor traffic lights, bus lanes and more followed worldwide. Gradually, they were connected via the internet to processing centres, allowing rapid access to police and vehicle-registration databases. At Amsterdam's Schiphol airport in the Netherlands, for instance, cameras began routinely checking all incoming cars from 2008, so that authorities could be rapidly notified if known offenders were fleeing the country.

The most significant change in recent years, though, has been the improvement in the intelligence of the algorithms behind the camera lens. For example, many speed cameras can now pick out newcomers to the area. These drivers can sometimes escape without a fine if they are only a little over the speed limit, according to Steve Wright, who specialises in policing technology at Leeds Metropolitan University in the UK. Local drivers spotted on the same stretch before, by contrast, get stricter treatment.

The processing centres have also become highly automated, dealing with thousands of offences per day. Some authorities in Europe have dispensed with human oversight altogether, says Ernst Koelman, spokesman for the Dutch Traffic Enforcement Team at the National Public Prosecutor's Office, based in Utrecht. If a roadside camera in the Netherlands is satisfied that it has captured an offending driver's number plate, says Koelman, this data is often automatically sent to the Dutch Central Fine Collection Agency, part of the Department of Justice. There, algorithms use the number plate to search for the name and address of the car owner, before calculating a fine and readying a letter. "If there are no doubts by the system, no human being will see the letter sent out," he says.
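To make the hands-off nature of that pipeline concrete, here is a rough Python sketch of how such a system might be wired together, including a newcomer-leniency rule of the kind Wright describes. The function names, thresholds and fine schedule are invented for illustration; the actual logic of the Dutch system is not public.

```python
from dataclasses import dataclass

@dataclass
class PlateCapture:
    plate: str              # number plate as read by the roadside camera
    speed_kmh: float        # measured speed
    limit_kmh: float        # posted limit at the camera site
    ocr_confidence: float   # how sure the camera is about its plate reading
    seen_here_before: bool  # whether this plate is a regular on this stretch

def lookup_owner(plate: str) -> str:
    """Stand-in for a query against the vehicle-registration database."""
    return {"AB-123-C": "J. Jansen"}.get(plate, "unknown keeper")

def send_letter(owner: str, text: str) -> None:
    """Stand-in for the fine agency's printing and postal dispatch step."""
    print(f"[dispatch] to {owner}: {text}")

def fine_for(excess_kmh: float) -> int:
    """Toy fine schedule: a flat amount plus a per-km/h surcharge."""
    return 30 + int(excess_kmh) * 6

def process_capture(capture: PlateCapture) -> str:
    excess = capture.speed_kmh - capture.limit_kmh
    if excess <= 0:
        return "no offence"
    # Low-confidence plate reads go to a person, mirroring the
    # "if there are no doubts by the system" caveat above.
    if capture.ocr_confidence < 0.98:
        return "queued for human review"
    # Hypothetical leniency rule for first-time visitors only slightly over the limit.
    if excess <= 5 and not capture.seen_here_before:
        return "warning only: minor excess, first sighting here"
    owner = lookup_owner(capture.plate)
    send_letter(owner, f"Dear {owner}, fine due: EUR {fine_for(excess)}")
    return "fine issued with no human in the loop"

print(process_capture(PlateCapture("AB-123-C", 112, 100, 0.99, True)))
```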

The scope has now widened beyond traffic misdemeanours. Increasingly, algorithms are capable of making sense of human behaviour on CCTV. At some UK car parks, the algorithms embedded in security systems are watching for car thieves. They deduce that if you walk directly to a car and drive off, you are the owner. But if somebody hides in the shadows or zigzags through the car park checking out cars, then something fishy is going on. Algorithms are also being tested that can identify – in real time – faces in a crowd, people with a particular gait, or suspicious packages.
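A crude, hypothetical illustration of the kind of rule just described: treat a near-straight walk to one car as normal, and flag long, wandering paths or lingering among parked vehicles. Real video-analytics products work on far richer features, so this is only a sketch of the idea, with made-up thresholds.

```python
import math

def path_length(track):
    """Total distance walked along a sequence of (x, y) positions, in metres."""
    return sum(math.dist(a, b) for a, b in zip(track, track[1:]))

def looks_suspicious(track, car_position, loiter_seconds):
    """Flag behaviour that doesn't look like an owner collecting their car."""
    direct = math.dist(track[0], car_position)          # straight-line distance to the car
    wandering = path_length(track) / max(direct, 1.0)   # 1.0 means a beeline
    if loiter_seconds > 120:    # lingering among parked cars for two minutes
        return True
    return wandering > 2.5      # path far longer than the direct route

owner   = [(0, 0), (5, 1), (10, 2), (15, 3)]                    # walks straight to the car
prowler = [(0, 0), (4, 6), (1, 12), (7, 15), (2, 20), (15, 3)]  # zigzags past other cars
print(looks_suspicious(owner, (15, 3), loiter_seconds=10))      # False
print(looks_suspicious(prowler, (15, 3), loiter_seconds=180))   # True
```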


"A human can probably monitor four, maybe five CCTV cameras simultaneously, for probably 20 to 30 minutes before their attention starts to slip," says Ron Fellows of IBM, which is one of several companies developing smart software for policing and antiterrorism. "Our CCTV technology is programmed to look at what people and things are doing in a particular area; it can spot not only that somebody is moving through that area, but also whether they put down a package and then walk off."

IBM software now employed in various US and UK cities can also forecast the location of crimes. "Predictive policing" systems built by IBM and companies like PredPol of Santa Cruz, California, sift through vast records of past offences, weather, social media and other contributing factors to produce maps showing where offences are likely to occur, prompting police to boost patrols at specific times.
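The statistical models these companies sell are proprietary, but the basic idea can be caricatured in a few lines of Python: bin past incidents into map cells and weight recent ones more heavily when ranking where to patrol. The grid size, decay rate and incident data below are all invented.

```python
from collections import defaultdict
from math import exp

def hotspot_cells(incidents, cell_size=0.005, decay_days=30.0, top_n=3):
    """Rank map grid cells by recency-weighted counts of past incidents.

    `incidents` is a list of (latitude, longitude, days_ago) tuples; each one
    contributes a weight that fades exponentially as it ages.
    """
    scores = defaultdict(float)
    for lat, lon, days_ago in incidents:
        cell = (round(lat / cell_size), round(lon / cell_size))
        scores[cell] += exp(-days_ago / decay_days)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    # Report approximate cell-centre coordinates for the patrol map.
    return [(i * cell_size, j * cell_size, round(score, 2))
            for (i, j), score in ranked[:top_n]]

# Three recent burglaries near one corner outrank a single three-month-old theft.
past = [(36.9740, -122.0300, 2), (36.9745, -122.0302, 5),
        (36.9741, -122.0297, 6), (36.9510, -122.0100, 90)]
print(hotspot_cells(past))
```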

As more and more data streams become available to enforcers, and the intelligence of the algorithms continues to grow, the scope of automated policing is likely to expand, applying to many more types of offence.

While many advocates argue that it can only be a good thing to reduce crime, for some, this trend is worrying. As smart cities and the "internet of things" come online – with their ubiquitous sensors and tracking devices – algorithms will be able to monitor and punish human behaviour to an unprecedented degree. The possibilities this raises are scary, says Gregory Conti, a military intelligence officer and director of the Cyber Research Center at West Point, New York.

Sensors in roads, buildings, cars, personal devices and more would allow rigorous enforcement of even the most minor infractions (see "No offence intended"). Consider driving in a US state where all of the following are misdemeanours. "Float a mile or two above the speed limit – that is a ticket; turn your wipers on and wait a minute before turning on your headlights – that is another ticket; roll through a stop sign at a deserted intersection – that is another. We would all lose our licences in a matter of hours, if not minutes," says Conti.

Nothing to hide?

Of course, defenders of automatic policing say that if you have nothing to hide, you have nothing to fear. But no code is bug-free.

Slip-ups like the driving licence error that falsely implicated John Gass are far from isolated. Last year, Kansas City resident Steve Mills was reportedly issued an arrest warrant because an automated red-light enforcement system had sent a penalty notice to the wrong address, triggering the warrant when he didn't pay up. And there are plenty of examples of computer errors adding innocent people to no-fly lists – even senators and toddlers.

For this reason, humans must be able to intervene in automated policing, says Mick Neville of the Metropolitan Police's CCTV unit in London. "The computer should be the slave, not the master." It may be acceptable for minor traffic fines, argues Neville: "No one's really hurt, you might say; but we shouldn't start arresting people and putting them in prison because a machine says so." Still, it happens: in the early 2000s, a British pensioner called Derek Bond was held for three weeks when his name was flagged at passport control while holidaying in South Africa – the fault was crossed wires in a database of wanted international criminals.

For many, the concerns run deeper than errors. The fact that these algorithms are often proprietary, with decisions that can be impossible to unpick, challenges fundamental principles of the law. "One of the tenets of a democracy is that citizens know the nature of why they are being investigated or stopped for a crime," says Gus Hosein, executive director of rights group Privacy International. "However, it is unlikely that the inner workings of the algorithm – the analysis that makes the decision – would be shared because law enforcement agencies would not want to divulge the data points they use, nor the reasoning."

The problem of hidden reasoning may apply particularly to forecasting crime. As a possible harbinger, consider that many parole decisions in the US are now assisted by machine-learning algorithms. By applying statistical analysis to a prisoner's characteristics and past crimes, these advise parole boards on whether a person is likely to reoffend – but do not reveal why. Without caution, widening the use of such opaque algorithms could lead to a world akin to that of Kafka's novel The Trial, in which a man stands accused but has no opportunity to defend himself.
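By way of illustration only, a made-up logistic-style score shows how such a tool can hand a parole board a label with no accompanying reasoning. The features, weights and threshold here are invented and bear no relation to any real risk-assessment instrument.

```python
from math import exp

# Invented coefficients standing in for a fitted statistical model;
# real tools are proprietary and their weights are not published.
WEIGHTS = {"prior_convictions": 0.45, "age_at_release": -0.04,
           "years_since_last_offence": -0.30}
INTERCEPT = -0.8

def reoffending_risk(profile: dict) -> float:
    """Logistic-style score between 0 and 1 from a prisoner's attributes."""
    z = INTERCEPT + sum(w * profile[k] for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + exp(-z))

def parole_recommendation(profile: dict) -> str:
    """All the board sees is the label, not the arithmetic behind it."""
    return "high risk" if reoffending_risk(profile) > 0.5 else "low risk"

print(parole_recommendation({"prior_convictions": 6, "age_at_release": 28,
                             "years_since_last_offence": 1}))   # "high risk"
```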

Another concern is that law books were not written to account for the "perfect" enforcement provided by tireless, all-seeing security systems. For centuries, law has been moderated by human police officers, jury members and judges. Society is built on a bit of contrition here and discretion there. "While human judgement can lead to capricious and ad hoc interpretations and unequal justice, it can also be used to provide for the exceptional cases, which cannot necessarily be foreseen ahead of time," says Andrew Adams of Meiji University in Tokyo, Japan. Consider, for instance, the driver who busts the speed limit rushing a critically injured person to hospital. For an algorithm, everything is "a binary decision", says Adams.

Discretion also allows law enforcers to prioritise effort on offences that most deserve the attention of a judge and jury. "An officer can devote more resources to investigating the theft of a vital piece of medical equipment than the theft of an iPod," says Woodrow Hartzog at the Cumberland School of Law at Samford University in Birmingham, Alabama. "It seems like we are a very long way from being able to convert such a complicated decision-making process into code."

It is possible, then, that algorithmic enforcement may place unanticipated pressure on the justice system, by feeding in a river of minor offences, and "false positive" identifications like that of John Gass, with a commensurate rise in court appeals. "In the same way that defendants routinely challenge the use of radar guns, speedometers and other technologies used to collect evidence of a crime, so too can prosecutors expect that other technologies in automated law-enforcement systems will be challenged," says Hartzog. For example, in 2010 a man in Ohio attempted to challenge a speeding ticket in court by displaying contradictory GPS location data from his cellphone. With all this additional legal wrangling, Conti warns that courts may be "unable to handle the load, clogging the court system... and due process will inevitably suffer".

Perhaps the most profound change promised by algorithmic enforcement, though, is how it could affect our day-to-day behaviour. Are we prepared for the consequences of "perfect" enforcement? Our everyday decisions would inevitably change with the knowledge that we are constantly watched by algorithmic punishers, Hartzog explains, ready to fine or arrest us for any slip-up.

"If individuals know they are likely under surveillance, they have decreased autonomy and are less likely to engage in important kinds of behaviour necessary for human development," argues Hartzog. "For example, surveilled individuals are less likely to voice opposition to the government or support for unpopular opinions." Does the risk of letting some people get away with wrongdoing balance the threat to the liberty of millions of citizens?

Automated policing certainly promises to tackle crimes and misdemeanours that once went unnoticed. And we have the technology to expand its scope so that many more types of behaviour are covered. Yet if we do embrace this world of algorithmic enforcement, it will come at a price.

This article appeared in print under the headline "Penal code"

No offence intended

Never break the law? Perhaps that's because you get away with it without realising. Many people accidentally commit lesser offences and civil misdemeanours every day. In a world of ultra-efficient algorithmic policing, punishing them could get easier.

For example, in some countries, it's an offence to be drunk in public or en route to a sporting fixture. You don't even need to be sozzled in many US states – simply holding an open container of alcohol on the street is enough. Today, police often either turn a blind eye, or simply fail to catch people. But what if it were easy to monitor public drunkenness through ubiquitous, automated CCTV? Could an algorithm send you a fine for staggering home from the pub? The financial incentive for unscrupulous authorities is there. And the technology too: computer scientists in Greece have already developed algorithms that watch for evidence of inebriation by analysing certain patterns in temperature and blood flow to the face.

Similarly, sensors and face-recognition algorithms monitoring future roads could help law enforcers punish minor misdemeanours that are rarely pursued today. For instance, to cycle lawfully in the UK at night, cyclists must have lights and amber reflectors on their pedals – many are unaware that riding without them is an offence, and they escape punishment only because it is too time-consuming to enforce. And pedestrians from the UK, where jaywalking is legal, might be surprised to find an algorithm catching them failing to use a road crossing in the US, parts of Europe or Australia, where it is illegal.

Then there are myriad offences accidentally committed online daily: in the UK, it is contempt of court to tweet the name of an alleged victim of a sexual assault during a court case, and people using social media to breach UK injunctions have also faced legal action in recent years.

It's become clear this year that organisations like the US National Security Agency are capable of using algorithms to monitor our every move online. While today such systems are on the lookout for terrorism, it is conceivable that officials of the future might use the same information for tracking and punishing offences that, accidentally or not, you probably committed yesterday.
