Who should control your car's software?

Cory Doctorow has an interesting piece in The Guardian on the ability of car owners and users to alter the software in their cars. That is the broad issue, but of course he writes about it in the context of the coming autonomous vehicles.

Doctorow begins by throwing out the red herring that is the trolley problem applied to autonomous vehicles: if your car has a choice between saving one life (your own, to make it interesting) and several others, what should it choose? Doctorow dismisses this issue as irrelevant because even if a car were programmed one way, owners who disagree could re-program it; well, so long as such re-programming is in the power of the owner. I'll come to that in a second, but I do want to say that I think the trolley problem is pretty irrelevant here because I cannot imagine any pre-programming being anything other than "saving the most lives." Yes, that doesn't take nuance into account, but no general-purpose programming will do that, especially if we can't find general agreement on the nuance.
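To make the point concrete, "saving the most lives" is about as simple as a pre-programmed rule can get. A toy sketch (everything here is hypothetical, not any manufacturer's actual logic):

```python
def choose_action(options):
    """Pick the action with the fewest expected casualties.

    `options` is a hypothetical list of (action_name, expected_casualties)
    pairs; ties are broken arbitrarily. Note everything the rule ignores:
    whose lives, ages, culpability, uncertainty of the estimates.
    """
    return min(options, key=lambda opt: opt[1])[0]

# The rule is blunt but general: it needs no agreement on nuance.
print(choose_action([("swerve", 1), ("brake", 3)]))  # -> swerve
```

The simplicity is the point: any rule with more nuance would require exactly the kind of general agreement we do not have.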

But Doctorow's real question is whether someone other than the owner or user of a car should be able to program it and lock the user out. The case for an affirmative answer (that, say, a car manufacturer or even a government should be able to lock the user out) is that individuals cannot be trusted to re-program in socially desirable ways. Doctorow does not go into this, but when we talk about restricting what something can do, if we are doing it sensibly (as usual, a big 'if') we would apply such restrictions where externalities are involved. Externalities mean precisely that what is in an individual's interest is not in society's collective interest. Thus, we often regulate and take control from individuals precisely because we cannot trust them to refrain from acting in their own interest. In this situation, if society thinks the "saving the most lives" rule is collectively optimal, then to ensure that rule is actually implemented we need to remove individuals' ability to alter it unilaterally.

There is another, related rationale: individuals might do a bad job of re-programming, putting themselves or other people in danger. Thus, Apple stops people from doing all manner of things with their iPhones. They think they know better, and they may well be right. However, unlike the basic control-and-incentives issue, this one is surely a matter for liability law. If Apple thinks people might break their phones by altering them in certain ways, Apple may want to prevent that. But if people really want to do it, why not let them, with the full understanding that Apple isn't liable? Indeed, this is pretty much how jail-breaking works: you lose your warranty if you do it. A similar regime could apply to cars. It most likely already does if you alter the hardware of a car, and you could easily make it apply to the software too. That is, of course, if the incentive issues discussed above don't hold.

Doctorow's critique hits any affirmative answer. His argument is that such control simply does not work:

Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of digital locks rules isn’t to keep people from reconfiguring their devices – it’s just to ensure that they have to do so without the help of a business or a product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002; it wasn’t hard to unlock your phone. You could download software from the net to do it, or ask someone who operated an illegal jailbreaking business. But now that it’s clearly legal, you can have your phone unlocked at the newsagent’s or even the dry-cleaner’s.

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.

This has an element of futility to it. If we think autonomous cars need restrictions to prevent individuals, whose incentives differ from society's, from altering their cars' programming, then those restrictions simply won't stick.

Doctorow also argues that such restrictions increase various dangers. Any lock that keeps the owner out also provides a means by which a more sophisticated party could infiltrate the software, while the owner remains locked out from detecting or fixing the intrusion.

The point here is that central control is inherently fragile, and that if Doctorow had to choose he would choose not to have it and instead find a way to achieve these ends that preserves individual control.

But how would that be done? One suspects that autonomous vehicles, to reach their potential, will have to work together. That means they will have to know what they are dealing with. If one car is altered, that changes how other cars ought to react. But if they do not know whether a car has been altered, how can they distinguish it from a car that is malfunctioning? For instance, you would drive a little differently if you knew you were surrounded by drunk drivers. Part of the difficulty we face is that we don't know what we are dealing with.
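The "drunk drivers" analogy suggests what cooperation without verification might look like: a car could simply treat any vehicle whose software state it cannot verify as unpredictable and widen its safety margin. A toy illustration (the values and names are hypothetical, purely to make the idea concrete):

```python
# Hypothetical default time gap (seconds) behind a neighbouring vehicle.
BASE_GAP_SECONDS = 1.5

def following_gap(neighbour_verified: bool) -> float:
    """Return the time gap to keep behind a neighbouring vehicle.

    A vehicle running unverified software could be altered or simply
    malfunctioning; we cannot tell which, so we treat it like an
    unpredictable driver and double the margin.
    """
    return BASE_GAP_SECONDS if neighbour_verified else 2 * BASE_GAP_SECONDS

print(following_gap(True))   # verified neighbour
print(following_gap(False))  # unverified neighbour: double the margin
```

The cost of this approach is visible in the sketch itself: every unverified car degrades the flow of traffic around it, eroding the coordination gains autonomous vehicles are supposed to deliver.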

This line of logic leads to the conclusion that with user responsibility and control comes the need for transparency. For many user changes, that transparency will not create a conflict of interest, and so it may be fine. But for others, especially the more sinister examples Doctorow uses (like state surveillance or criminal hijacking), transparency is just another thing that individuals may re-program away.

The conclusion is this: if Doctorow is right, then it is best not to cede control away from individuals. There is a precedent for this: the way we operate cars now. It is pretty crazy, and I imagine explaining it to my future grandchildren will be a hoot. However, in that case, we have to realise that the purported gains in safety from autonomous vehicles may not be nearly as large as technologists are hoping.