Killer Robots Hit The Road, And The Law Has Yet To Catch Up

Much of the conversation about killer machines has understandably focused on unmanned military vehicles. Yet civilian robotic vehicles also present ethical and legal questions. These have been highlighted by Tesla's recent crossing of the continental United States in a car that largely drove itself, and by the first such trial in Australia, involving a Volvo SUV, on Saturday.

It’s worth noting that the artificial intelligence (AI) software that enabled the autopiloted journey across the US is freely available as an over-the-air software update for any Tesla car manufactured after September 2014. This means cars with this capability are already available to Australian consumers.

Until now, such robotic vehicles have largely been used for non-consumer or testing purposes. That is not least because of the risks they create when driven on public roads.

As one of the drivers, Alex Roy, recounted:

There were probably three or four moments where we were on autonomous mode at [145 kilometres] an hour … If I hadn’t had my hands there, ready to take over, the car would have gone off the road and killed us.

The fault, Roy said, was his “for setting a speed faster than the system’s capable of compensating”.

AI engineers point out that this makes designing a fail-safe system nearly impossible. Humans are unpredictable, even (especially) behind the wheel. It is in this respect that things become much more challenging, philosophically and legally. If the Tesla had “gone off the road” at 145km/h, it might have killed not just those on board but also others in its path (not to mention damaging animals and property).

Tesla, Meet The Trolley Problem

Philosopher Philippa Foot’s thought experiment, the “trolley problem”, is relevant here. The problem posed is whether it is acceptable to divert a trolley-car that is careering towards five unsuspecting people, who will inevitably be killed, on the understanding that diverting the trolley will result in the death of only one person.

Judith Thomson later posited a supplementary scenario:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

People commonly respond that they would divert the trolley, but not push the fat man. It’s largely a question of the form of direct and indirect action we take and the proximity between the act and the result – the “causal chain”, as lawyers call it.

Consider, for example, a child on a bicycle darting out onto a busy suburban road. The human driver instinctively swerves to miss the child, but in doing so hits a school bus, causing more fatalities than if they had continued on their ordinary path and hit the child on the bike.

Clearly, that decision involves a reaction, not a direct action. Unlike the trolley problem, the driver could not weigh up both options properly or exercise real, prospective choice. This would mean the legal consequences would be different (not murder or manslaughter but more likely negligence).

If the matter had gone to court, the legal issue would then have been what a “reasonable ordinary driver” would have done in those circumstances. That would have taken into account ordinary, instinctive, human reactions in a sudden, high-stress situation. Perhaps both choices would have been legally acceptable, because we are more forgiving of split-second decisions when judging them in hindsight.

Decisions Made In Advance Alter The Legal Calculus

A robotic vehicle in the same situation is much more akin to the trolley problem, because humans have to make the decision well in advance. Engineers program an autonomous vehicle to drive with all the variables that entails, and to act in specific ways depending on those variables.

That programming must necessarily take into account how the car should act when something (including a child) darts out onto the road. A sufficiently powerful computer system could evaluate the various options in milliseconds and, if unable to avoid casualties, choose the path of least destruction.

This means that philosophical conundrums like the trolley problem become legal ones, because now we have programmable computers and not unpredictable humans at the helm.

The Tesla situation above can be expressed as follows: [Human error] + [Computer decision/fault] = [Risk to humans]. Engineers will need to address the [Computer decision/fault] term so as to cancel out the [Risk to humans]. They will also have to consider multiple [Risk to humans] permutations and compensate for those too.
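The “path of least destruction” logic described above can be illustrated, in a deliberately toy way, as a minimisation over pre-scored options. This is a sketch only: the action names and harm scores are hypothetical, and real autonomous-vehicle planners are vastly more complex than a single lookup.

```python
# Toy sketch of "least destruction" decision-making.
# All actions and harm scores below are hypothetical illustrations,
# not a description of any real vehicle's software.

def choose_path(paths):
    """Return the candidate action with the lowest estimated harm."""
    return min(paths, key=lambda p: p["estimated_harm"])

# Hypothetical scenario echoing the child-on-a-bicycle example:
candidates = [
    {"action": "continue", "estimated_harm": 1.0},  # hits the cyclist
    {"action": "swerve",   "estimated_harm": 5.0},  # hits the school bus
    {"action": "brake",    "estimated_harm": 0.4},  # may avoid both
]

best = choose_path(candidates)
print(best["action"])  # → brake
```

The hard part, of course, is not the minimisation but deciding who assigns the harm scores and on what basis. That value judgement is precisely what the article argues should not be left to a software engineer alone.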

If a self-driving car is programmed to respond in a way that leads to death or injury, questions of legal responsibility can be complicated.

Given that it is foreseeable that a speeding autonomous car may injure someone in its path, how should we program it to behave? If it is likely to cause injury on more than one possible course, which course should it take?

If we refuse to address these questions, are we still responsible by omission because we foresaw the problem and did nothing? And just who is responsible? The driver? The engineer? The programmer? The company that produces the car? Just when was the decision to, metaphorically, pull the lever made?

All these questions require legal direction and guidance on how robotic cars should react to a range of possible situations – not retrospectively and reactively, but prospectively, through the active decision-making matrix of the trolley problem. Hence, we argued:

Should legislators not choose to set out rules for such eventualities, someone will have to, or at least provide the AI with sufficient guidance to make such decisions by itself. One would expect that the right body to make such value judgements would be a sovereign legislative body, not a software engineer.

Still Waiting For Legislators To Respond

We made that argument in 2008. To date, little has been done to address these problems. In the absence of legislative responses, the ethical and legal dilemmas have been left to software engineers and company directors like Tesla’s Elon Musk.

Some jurisdictions, particularly in the US, have begun to examine the safety of unmanned vehicles on the roads, but certainly not at this level. Australia lags significantly behind, despite the availability of Tesla hardware and software.

Most of our national regulatory focus has been on military applications of unmanned vehicles and, to a lesser extent, aerial regulation of drones. Road laws, which are the general province of the states and territories, are largely untouched. The general legal proposition that a human must be “in control” of a vehicle continues to apply, basically limiting the use of autopiloted cars.

This position is unlikely to be sustained. Apple, Google, Audi and Nissan, among others, are rushing to bring autonomous cars to market. Technology-hungry Australians will want them.

Legislatures need to act, and the public needs to deliberate on appropriate regulatory action. The conversation about the ethical and legal use of unmanned civilian vehicles needs to start now.
