Anatomy of a Robocar Accident

One of the most common questions -- and most controversial -- is what will happen
when a robocar gets into an accident.(1) People ask questions about liability,
insurance, public reaction and reputation for the maker of the car, the
vehicle code and even sometimes criminal responsibility, to name just a few.

Some of these questions are very important, some surprisingly unimportant.
Car accidents are actually the best understood nasty event
in the world, and the financial cost of them is worked out by drivers, insurance
companies and courts literally hundreds of thousands of times a day.

These huge accident tallies mean that every robocar developer I have spoken to
has declared it to be a primary goal to reduce accidents and especially injuries
and fatalities. At the same time, they all admit that perfection is unlikely,
and some accidents are inevitable. With wide deployment, many accidents and even
fatalities are inevitable. Nobody wants them but we must all plan for them.

I often see it written that the first accident, or at least the first major
injury or fatality, will stop all efforts in their tracks, at least in some
countries. Harsh punishments will be levied, and companies will back away
in fear of the law or bad public reaction. This is possible, and given
the huge potential of the technology, it
is a mistake that should be prevented through better understanding, and the right
application of the law.

People often ask who would get sued in a robocar accident. They wonder if
it will be the occupant/passenger/driver, or the car company, or perhaps
the software developer or some component maker. They are concerned that this is the major
"blocking" issue to resolve before cars can operate on the road.

The real answer, at least in the USA and many
other countries, is that in the early years, everybody will get sued. There will be no
shortage of lawyers out to make a reputation on an early case here, and
several defendants in every case. It's also quite probable that it will be the occupant
of a robocar suing the makers of the car, with no third party involved.

One thing that's very likely to be new about a robocar accident is that
the car will have made detailed sensor recordings of the event. That means
a 360 degree 3-D view of everything, as well as some video. That means the
ability to play back the accident from any viewpoint, and to even look inside
the software and see what it was doing. Robocar developers all want these
logs, at least during the long development and improvement cycle. Eventually
owners of robocars should be able to turn off logging, but that will take some
time, and the first accidents will be exquisitely logged.

This means that there will be little difficulty figuring out which parties in
the accident violated the vehicle code and/or were responsible in the traditional
sense for the accident. Right away, we'll know who was where they should not have
been, and as such, unlike regular accidents, there will be no argument on these points.

If it turns out that the robocar was in the wrong, it is likely that the
combination of parties associated with the car, in association with their insurers,
will immediately offer a very
reasonable settlement. If they are doing their job and reducing accidents, and meeting
their projections for how much they are reducing them, the cost of such
settlements will have been factored into the insured risk projections, and payment for
this reduced number of accidents will be done as it always is by insurers, or possibly
a self-insured equivalent.

That's why the question of "who is liable?" is much less important than people imagine.
Today, the financial cost of car accidents is always paid for, eventually, by the
buyers/drivers of cars as a group. It's either paid for by insurance companies, who
distribute the cost in premiums among a large group of car buyers, or it's more rarely paid for
by car makers, who insure against product liability claims (or self-insure) and factor
that into the price of the car -- where again it is distributed among those who buy the
cars. Always the same people in the end, and until accident rates go down, a fairly
stable amount.

We/they will end up paying no matter what, and the only debate will be
over how much, and who the money flows through. Because nobody likes to have
the money flow through them, there will be battles over this, but to society the
battles are of minimal consequence. The real issue will be how big the
damages are compared to traditional accidents.

It should be noted, however, that while it has always been fairly obvious that
creators of robocar systems that cause accidents will be responsible for them,
Volvo, Mercedes and Google have all made public declarations accepting that
responsibility, and the others will eventually do the same.

The key issues, described below, will be:

- Will damages in robocar crashes be more than for a similar human crash? In particular,
  will there be extra damages due to the deep-pocketed defendants, or even punitive damages?
- Will emotional factors -- extraordinary fear of being harmed by robots -- alter
  the cost?
- How many crashes will there be with liability from the maker of a robocar to the
  owner/occupant, where today the driver at fault has no liability to herself, and collision
  insurance covers the rest?
- How will the causes of robocar accidents differ in a way that increases the damages?
- Every robocar accident will have a different cause, because old causes will get fixed and
  wireless updates will issue. Will this greatly increase the cost of litigation?
- Will the PR consequences of accidents be even greater than the liability costs?
- Does the use of machine learning alter questions of negligence and the ability to fix
  problems?
- Is it meaningful to give demerit points or loss of licence to operators of robocars who
  just summoned them on a cell phone?
- How will this differ in the different liability systems of the world?

As a side note, the fact that the robocar makes the full 3-D recording of the situation
means that in any accident involving a robocar, the vehicles should be able to immediately
pull off the road and not block traffic, as there is no need for the police to do a special
examination of the accident scene. It will take a while for people to get this, though.

The Human Driver/Occupant

The situation is complicated at first, because the earliest robocars, and thus the
first to be involved in crashes, will have a human in the driver's seat. Indeed, the
first cars to be called self-driving are the "super cruise" cars coming out in late
2013 from Mercedes, Volvo, Audi and others. These cars will drive themselves some of
the time, but can't handle unusual situations or hard-to-detect lane markers. As such,
their instructions declare that the self-drive function -- really a next generation
cruise control -- must only be used under the constant supervision of a driver who is
watching the road.

They mean this warning. If such a car gets into a zone where the lane markers vanish or
are hard to see, it will just drift out of the lane, and possibly off the road or into
oncoming traffic. Aside from stern warnings, some vendors are considering having the
car watch the driver's face or eyes to assure they don't look away from the road for
very long. The problem is that this is quite annoying, but if they don't do this, it
seems likely that drivers will do things like text or other complex activities in these
cars -- after all, they even text in today's cars that can't steer themselves!

Worse, drivers may feel safer when doing things like texting and feel comfortable
devoting more attention to these activities once they find they get away with it a
few times. Unlike today, where you look up to see the car in front of you has hit
the brakes and you hit them harder (or crash) these cars will prevent that -- but create
a sense of security which is not correct.

It seems like an accident of this sort is likely here. However, it will likely be blamed
squarely on the driver who ignored the copious warnings that are sure to come with such
cars. (Today, you can't even use the navigation system in some cars without agreeing
that you've read the warnings every time.) There will be debate about whether the
warnings were enough, and whether the system should be blamed for lulling the driver
into that false sense of security. There will be debate about whether it will be
necessary to have driver alertness monitors. But the more interesting debate comes
later.

A short note on human accidents

Human accidents have many causes, though inattention (80%) and alcohol are major ones that
will be unlikely to be seen in robocars. A large number of human accidents are the result
of multiple things going wrong at once. Most drivers find they experience "near" accidents
on a frighteningly regular basis but pure luck prevents them from getting worse.
Every 250,000 miles, luck runs out, and multiple things go wrong. Humans often do things
like look down to adjust the radio. If the car in front brakes suddenly at just that moment,
an accident may happen -- but usually luck is with us.

Robocars will also have accidents due to two things going wrong, but this is likely to be far less
common than it is for humans. That's because the developers will be tracking all these
individual failure components and working to eliminate them so they can never combine to cause
trouble. The cars will never not be looking. They will never drift from their lane. They
will never exceed their judgement of a safe speed for conditions (though they might misjudge
what that speed is.) If they detect a single problem that causes increased danger, they will
probably decide to pass control to a human or pull off the road. Instead, their accidents will be
strange to human thinking, and most likely some single error.
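The multiplicative logic behind "two things going wrong at once" can be sketched with a few lines of arithmetic. The rates below are made up for illustration; the point is only that independent failure rates multiply, so eliminating each component failure separately shrinks the coincidence rate dramatically:

```python
# Made-up rates, for illustration only: if a driver's glance away from
# the road and a sudden brake by the car ahead are independent events,
# the chance they coincide is the product of their individual rates.
p_looking_away = 0.01     # fraction of moments the driver isn't watching
p_sudden_brake = 0.001    # chance the car ahead brakes hard at a moment

p_coincide = p_looking_away * p_sudden_brake
print(p_coincide)  # about 1e-05: rare, but not rare enough at scale

# Halving either individual failure rate halves the coincidence rate,
# which is why developers attack each component failure on its own.
halved = (p_looking_away / 2) * p_sudden_brake
print(halved)  # about 5e-06
```

A system that "never isn't looking" drives one of these factors toward zero, which is why coincidence accidents should be far rarer for robocars than for humans.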

The no-attention car

Eventually cars will appear that still have a driver but overtly allow that driver to
do other activities such as reading or watching movies. That driver might be called
upon for strategic decisions in unusual situations, like an upcoming construction zone
or a police officer redirecting traffic, but they would ideally never have to take the wheel
with extreme urgency. That driver might well have to manually drive the car to and from zones
where the car is rated to operate on its own.

In addition, the human driver will almost surely have the authority to direct the car
to perform certain actions the machine would not do on its own, such as exceeding the speed limit.
Like a cruise control, people will expect the car to obey its owner, and we all know the
reality is that driving the speed limit is actually a traffic-obstructing hazard on many highways.

The person's role will liven up the liability debate, and cause some liability to fall
on that driver. Just as above, the cost will still
come out of insurance premiums or car prices, and all drivers will pay. In spite
of this, car manufacturers will be keen to push liability away from themselves and onto
the insurers of the driver.

There are several reasons why that desire to push is rational, and everybody would rather see liability shift
to others if they can manage it that way. Some reasons are detailed below. One serious one
is that car makers have much deeper pockets than drivers and as such might get assigned more
liability for the same accident. They also have the real control over how the system is built
and are more likely to be found negligent if there is negligence.

Another reason is the psychology of economics. Bundling the cost of accidents into the
cost of the car instead of into insurance premiums makes the car more expensive. It doesn't
really raise the cost -- in fact it lowers it -- but consumers are fickle. They will
irrationally balk at buying a car that costs $30,000 and includes insurance compared
to one that costs $25,000 but requires $800 a year of insurance premiums, even though the former
choice pays for itself in far less than the life of the car. To avoid this, car sellers will
need to change their selling models, and charge annual or per-mile usage fees on cars which are
easier to compare to insurance in the buyer's mind.
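The break-even arithmetic behind this example is simple (using the illustrative prices from the text, not real figures):

```python
# A $30,000 car with accident liability bundled in, versus a $25,000 car
# plus an $800/year insurance premium. Numbers are the illustrative ones
# from the text above, not real pricing.
bundled_price = 30_000
separate_price = 25_000
annual_premium = 800

break_even_years = (bundled_price - separate_price) / annual_premium
print(break_even_years)  # 6.25 -- well under the typical life of a car
```

After the break-even point, every further year of ownership favors the bundled car, yet the higher sticker price is what buyers react to.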

The pure-robocar accident

Let's leave aside the issue of the human's role in the liability for a moment and return to an
accident where the driving system is at fault, and where the person inside is clearly a
passenger at the time of the accident, not a driver. As described, a settlement offer is
likely in that situation.

The settlement offer might well even be generous by the standards of this type of accident,
because the maker will prefer not
to have a trial, which would be even more expensive and would likely result in an award
similar to the settlement, plus substantial legal costs. (Of course, they can't routinely
be too generous or people will see it as an opportunity to abuse the policy.)

The makers of the car may also decide not to settle because they feel the accident
is a good example of a typical accident and want to establish precedents to simplify
the legal process in the future. They'll have a top team of lawyers ready to work to that
goal. Their goal, if they see the car was in the wrong, will not be to win, but
rather to lay down the rules. This may well mean they want to try a case that just
involves property damage, because such cases will escape the emotion of an injury
case and be better for debating principles and facts.

Odds are good that the first accidents will just be fender-benders. After all, only about
1 accident in 5 involves an injury, and fortunately fewer than 1 in 400 involves a fatality.
(This includes the approximately half of all accidents that are not reported to police and
involve property damage only.)

The plaintiffs (injured parties) may refuse the generous settlement because their lawyers seek
to do a groundbreaking case, and there is the potential for above average or even
punitive damages due to both the deep pockets of the larger parties and the novelty
of the situation.

A trial would look at several types of liability that might be different here.

- Strict liability for design defects. Here the tradeoffs of robocars would be
  fairly considered, including that they are saving more injuries than they are causing.
- Strict liability for lack of warning -- unlikely, as I expect the cars will come with
  lots of warnings for those who operate them, at least in today's world.
- Negligence for things that "should have been foreseen."

The strict liabilities (which don't require negligence) should match existing car
accidents, which have a high cost but are well understood and can be
managed through regular insurance.

The real debate will instead be about why the system made an error which caused
an accident, and in particular whether this was due to negligence. Negligence involves
not doing what should have been your reasonable duty, and in particular not
preventing specific things you should have foreseen. The maker of the self-driving system
could be considered negligent if the court finds that they "ought to have known"
the problem was there. They would need to have been careless about this and
violated an appropriate "standard of care."

The early cases will be very interesting to people because they will lay the groundwork
for what the standard of care is. This won't be done in a vacuum -- there is lots
of history of product liability litigation, though perhaps nothing like this.

It's likely (though not certain) that the reason for the fault will also be reasonably clear
from the logs. Not only do the logs show what happened, but they can be used to
replay the computer's actions in the accident, looking into the software to see
what part is at fault. The programmers will want to do this so they can fix the
fault, but the way the justice system works, they will also be required to reveal
this to the injured party and the court.

The case will focus on not just why the software went wrong, but what decisions were taken
to build the system that way. They will ask why the mistake was not caught, and who
was responsible for catching it. The plaintiffs will be particularly interested if
elements of the mistake were known and were not fixed properly, and why, for this is
where claims of negligence can be made. They will want to know if sufficiently
diligent testing was done according to appropriate standards of quality for such
systems. They'll want to examine the quality assurance system.

They'll also be keen to find things that were not done that arguably should have been
done, particularly if they were considered and discarded for a reason that can be
alleged is negligent or careless. It is in these areas that punitive damages can be
found. Punitive damages come when the plaintiff's lawyer is able to make the jury
truly angry, so much so that they want to punish the company, and scare everybody into
never doing anything similar again.

The probability of these large damages is generally low, unless the developers are
very poor quality and get no legal advice as they work, but the probability is not zero.
See the discussion below on the emotional component for more details.

It's also not impossible that the developers could have been negligent. That's more
likely with a small, poorly funded development team than one at a big company. Big
companies and good small teams will have detailed testing regimens, and follow industry "best practices"
for testing, debugging, robust design and more. These best practices are not perfect
but they are the best known, and as such would not normally be viewed as negligent.

Getting a ticket

Below I will explain why the concept of a traffic ticket does not make much sense in
the world of robocars, but at least initially, there will be tickets, and today a ticket is
issued in almost all accidents where blame is assigned. The dollar cost of the ticket can
be paid for by the robocar maker if their system was at fault -- that's a fairly minor part
of the cost, but the other costs of a ticket, which can include "points" and losing your
licence, will need to be assigned to a human under today's law, and that is probably the
person who activated the self-driving system.

While such tickets should be extremely rare, people will be afraid of the idea that they
could get points or lose their licence for an action by the car that they had little to do
with except at the highest level. People may be afraid of using the technology if there is
a significant chance of this -- particularly people who already have lots of points, who are
the most in need of the technology.

Fortunately, unless things are going wrong, tickets outside of accidents should be extremely
rare, and within the context of accidents, the court will be involved and will hopefully do
the sensible thing in assigning such blame. If the cars are violating the vehicle code on
their own on a regular basis, this is something the vendors will strongly want to fix, and will
fix, via online update.

If the occupant tells the car to exceed the speed limit, or the prudent speed for conditions,
that's another issue, and it should be clear to all where the speeding ticket belongs.

More complex are the situations where breaking the vehicle code is both normal and even
necessary, particularly for unmanned vehicles. One must be assertive on some roads in order
to get through at all, and everybody does it and nobody is ticketed except in an accident or during
zero-tolerance enforcement days.

Another fascinating issue involves the DUI. On the one hand, getting drunks to switch to
a self-driving system is one of the best things that could be done to make the roads safer.
Unfortunately, the "person who activated the system is the driver" doctrine would mean you were
legally DUI if you did this while impaired. There are reasons this could make sense, because in
situations where the human really does need to take over, you don't want an impaired human, or
in particular don't want somebody to risk this instead of finding another way home. When the systems
are good enough to never need actual steering or braking from the human, the story changes, and
you want drunks to use it whenever they can, and don't want the law to discourage this.

The single car accident and liability to the driver

A large fraction of accidents and the majority of fatalities involve only one car.
People, it seems, often drive off the road, or hit curbs and trees or otherwise get into
an accident on their own. Drinking and fatigue are major reasons for this.

This is important because in these accidents, liability insurance is not involved, except
perhaps for damage to the tree. Collision insurance is involved, but not everybody has it.
And when somebody kills themselves, often no money is involved, though the ultimate price is paid.

If a robocar damages itself, that's similar to collision coverage. If the occupant is injured,
this is not something entirely new -- drivers sue car makers for product defects frequently -- but
the volume of such torts could go up significantly. In the present, a human might drive sleepy and crash
with no lawsuit. In the robocar, the person might even be sleeping and a claim is likely where there
was no claim before. (Fortunately, robots don't drink or get sleepy, but their crashes will
come for other reasons.)

This raises the bar on making the total cost go down. To make robocar accidents cost
less than human-caused ones, we must reduce not just the number of accidents to
levels safer than humans. We must also drop the total cost including liability to all parties,
even the occupant/operator of the at-fault car.

Types of failures

General negligence requires that the particular type of failure should have been
foreseen. While everybody knows that complex software always has bugs, it's
not negligence to release software knowing that it will surely have dangerous bugs "somewhere."
Negligence generally comes from releasing a product with specific bugs you knew about or
should have known about. In a trial, a specific flaw will be identified, and the
court will ask whether it should have been foreseen.

Nobody can predict all the possible failures, but here are some likely areas. Many
of them match things that happen to human drivers.

Perception failure: The system doesn't see or recognize something on the road. This
could be due to bad sensors, bad software or a particularly difficult obstacle or set of
circumstances. Note that this is, in a way, the cause of 80% of human accidents -- not
seeing something.

Localizer error: The system might be mistaken about where it is, and drive like it's
somewhere else. Usually, though, a car in this state would not keep driving in confusion,
but would instead immediately wish to stop because it can't figure out where it is.

Planning/driving mistake: The car has good information, but makes a mistake about where
to go.

System/Driver handoff confusion: In a vehicle where there are switches between human
driving and software driving, there may be confusion about the state of the system, and
this may lead the human driver to do unsafe things. This is actually the prime factor in
almost all aviation accidents where automation plays a role! While such accidents
will be partly blamed on the human, they will also be blamed on the UI and training.

Loss of control: Like a human, the car might skid on slippery pavement or otherwise
lose control. When humans do this, it's often because they were going too fast for
conditions.

System failure: While robocar systems will generally be designed with backups to deal
safely with software crashes or other system failures, it's possible such failures could
cause issues.

Unavoidable accident: Some accidents are just unavoidable -- another accident in front of
you, debris thrown on the road, somebody jumps out from behind a parked car, etc.

Many more types are possible, just as with people, but speculation is best left to the
developers working to avoid them.

Machine Learning

A very interesting issue, in that the law is yet to understand it well, is the question
of errors caused by systems programmed with machine learning. Machine learning is an
AI technique where software is "trained" by running through many example situations in
an effort to learn patterns and make better decisions. For example, you might design
a program so you show it 10,000 pictures of cats, and it distills what it sees in a complex
way until it's able to be very good at identifying a picture of a cat, even one it has never
seen before.
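As an illustration only -- a toy nearest-centroid classifier, not anything a real robocar would use -- here is the train-then-generalize pattern in miniature:

```python
# Toy machine learning: "train" on labeled examples, then classify inputs
# never seen before. The features (size, ear-pointiness) are made up.
def train(examples):
    """examples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest to the input."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

training = [((0.3, 0.9), "cat"), ((0.4, 0.8), "cat"),
            ((0.9, 0.2), "dog"), ((0.8, 0.3), "dog")]
model = train(training)

print(classify(model, (0.35, 0.85)))  # an unseen cat-like input -> "cat"
# It will also answer confidently for inputs far from anything it was
# trained on -- it works most of the time, but when it fails, the "why"
# is buried in the training data rather than in inspectable rules.
```

The distilled model (here, two centroids) encodes what was learned, but nothing in it explains any individual wrong answer -- which is exactly the legal difficulty discussed below.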

This is a powerful technique, and very popular in AI. Some of the things a robocar does
will be traditional algorithms, but it is likely vehicles will use systems based on machine
learning. As powerful as it is, this opens up some areas of concern.

Firstly, it may be possible to have something go wrong, and not be fully aware of the cause.
Yes, we'll know that the machine learning system failed in some fashion -- perhaps it did not
recognize an obstacle -- but we won't know the specifics as to why. Successful machine
learning usually results in a system that works almost all of the time, but not all of the time.

That "most" of the time can be far better than any other approach, and it can be a lower
rate of failure than human drivers. It will still be a failure, and if the court wants to order
the developers to fix the problem, they may not have a clear path to fixing it. They will
almost surely be able to improve the training of the machine learning system so that it does
not make the particular error that caused the accident, but they may not know why this worked
and won't be certain a similar error won't occur.

Tradition in the law says this is OK, that best or reasonable efforts are acceptable and perfection
is not demanded. That trained systems sometimes fail is foreseeable, though individual failure
situations are not. But emotion goes the other way, especially for the victims and the jury sitting
before them. This principle is also not a panacea. You can build your machine learning system
badly, with insufficient training or testing, and be liable for having made a substandard system.

An analysis of torts and machine learning by Judge Curtis Karnow has a number of interesting
things to say about this principle. The Judge's paper provided a number of useful insights
and references for this article.

That Emotional Response

We have a strong fear of things beyond our control, and let's face it, the idea of being injured
or killed by a robot (by accident or otherwise) strikes fear into the hearts of most of us.
Many people, at least on the emotional level, are far more afraid of being killed by robots than
being killed by human beings. I usually get a laugh when I say that the problem is "people just
don't like being killed by robots..." but a more sombre response when I follow it up with "they
would rather be killed by drunks."

While irrational from a strict numbers perspective, this fear is real. A jury faced with a
robocar-caused injury or death may feel that it's reckless to put two-ton robots on the streets
where they will hurt people, even if the numbers show a superior safety record. If the jury
feels the concept is inherently reckless, no data may sway them, and they may assign high
damages or even punitive damages at a level that could shut down companies or an industry.

There may also be those who will believe the technology should be delayed. Developers
will be keen to release the technology as soon as it is good enough to start saving
lives, but others may feel that the world should wait. We do this waiting with drug companies all
the time because we're much more afraid of causing actual harm than
we are of not preventing greater amounts of harm. Even though the harm of car
crashes is being caused by individual negligent humans, it is not us as a society doing it. The
makers of a robocar -- and the jury empowered to order them -- have a much more direct power
over whether the car goes out and injures the people it actually hits.

Punitive damages are rare. They usually require that the jury be angry, and that serious
disregard for what is right and moral was practiced by the defendants. This should not be
the case here, but one can't ever be certain.

We find for the human

It's possible that courts might have a bias towards humans. Even with a complete record of
the accident which shows that fault would lie with the human had it been a normal accident,
a court or jury might give the benefit of the doubt to the human. As such, robocar defendants
might find themselves liable in cases where a human would not, where a human argues,
"sure, I hit that car but it was acting in an erratic, inhuman manner."

In this case, the insurance must factor in the cost not just of accidents where the
robocar is at fault, but also some others. This becomes a concern because the rate
of accidents that are the responsibility of a human driver won't go down much, if at all,
thanks to superior robocar safety. If this becomes so large that it negates the savings,
a legislative solution may be necessary. One seems reasonable, if emotion is the issue.

Not making the same mistake twice

Robocar accidents (and AI and robotics in general) bring a whole new way of looking at the
law. Generally, the law exists to deter and punish bad activity done by humans who typically
knew the law, and knew they were doing something unsafe or nefarious. It is meaningless
to punish robots, but in punishing the people and companies who make them, it will likely be
the case that they did everything they could to stay within the law and held no ill will.

If a robocar (or its occupant or maker) ever gets a correct ticket for violating the vehicle code
or other law, this will
be a huge event for the team who made it. They'll be surprised, and they'll immediately work to fix
whatever flaw caused that to happen. While software updates will not be instantaneous, soon that
fix will be downloaded to all vehicles. All competitors will check their own systems to make sure
they haven't made the same mistake, and they will also fix things if they need to.

As such, all robocars, as a group, will never get the same ticket again.

This is very much unlike the way humans work. When the first human got a ticket for an
unsafe lane change, this didn't stop all the other people from making unsafe lane changes.
At best, hearing about how expensive the ticket was will put a slight damper on things. Laws
exist because humans can't be trusted not to break them, and there must be a means to stop them.

This suggests an entirely different way of looking at the law. Most of the vehicle code is there
because humans can't be trusted to follow the general principles behind the code -- to stay safe, and
to not unfairly impede the path of others and keep traffic flowing. There are hundreds of millions
of drivers, each with their own personalities, motives and regard or disregard for those
principles and the law.

In the robocar world, there will probably not be more than a couple of dozen distinct "drivers" in a
whole nation. You could literally get the designers of all these systems together in a room, and
work out any issues of how to uphold the principles of the road.

The vehicle codes will still exist for a long time, since there will be people driving for many
decades to come. But that system of regulation begins to make less and less sense as robocars
come to dominate the road, and we may look to find something that's more efficient and upholds
the principles of safety, traffic flow and other values better and more simply.

The bad news of not repeating mistakes

When it comes to trials over robocar accidents, the principle above may be a curse for
the makers of the cars. Since every accident will be different, every potential trial
will be different, and the examination of the cause will be different.

That could spell serious trouble because a full legal examination of flaws in code is
very expensive. It takes up a lot of programmers' time and a lot of lawyers' time. In
comparison, judging the motives or actions of human drivers is something we do all the
time and understand well. Insurance companies know they are going to pay -- if not on this
accident then the next -- and want to keep the costs down rather than fight on details.

If robocars have accidents at 1/3 the rate of humans, it's a win for society, but if
each accident costs 4 times as much because they all go to trial in different ways, we have
an overall loss -- passed on to car buyers as always.
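The break-even arithmetic here can be made explicit with a toy calculation. The 1/3 accident rate and 4x litigation cost are the hypothetical figures from the paragraph above, not real data:

```python
# Toy check of the cost trade-off described above. The figures are the
# hypothetical ones from the text: robocars crash at 1/3 the human rate,
# but each crash costs 4x as much because it goes to trial.
human_rate = 1.0   # normalized accidents per unit of driving
human_cost = 1.0   # normalized cost per accident

robocar_rate = human_rate / 3
robocar_cost = 4 * human_cost

# Total expected cost relative to the human baseline.
ratio = (robocar_rate * robocar_cost) / (human_rate * human_cost)
print(round(ratio, 2))  # 1.33 -- society pays a third more overall
```

The litigation multiplier would have to stay below 3 (the inverse of the accident-rate reduction) for the trade to come out ahead financially.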

In a strange way, the accidents caused by the failure of trained machine learning systems
could make things easier. With these systems, the particular cause of the failure may
not be clear, and there will be less to argue about. As long as the rate of failures is
suitably low, the trade-off is a winning one, even if not so for the particular victims.

Not everybody agrees that robots will avoid duplicating mistakes. In particular, some feel
the systems will be so complex that, in many cases, the team who built
the system will be unable to see a clear way to eliminate the fault. They will probably
be able to eliminate the very specific fault, so that the system, playing back the same
sensor data, does not take the same action, but they may find it very difficult to fix the
root cause.

In addition, vendors may be very reluctant to release new revisions of their systems quickly.
The tradition in cars has been that software updates after sale are extremely rare. The
software systems in today's cars go through, in some cases, years of testing before release,
with features and even major fixes frozen out for a long period of time. They may be scared
to put out a new release because it will not have had the same amount of testing. People with a
software background come at this from a very different viewpoint.

Court of Public Opinion

It's easily possible that the financial consequences of an accident could be minor compared to
the damage to the image of the maker of the car. A number of history lessons suggest
there is a minefield here. Rumours that Toyota's gas pedal software caused sudden
unintended acceleration appeared on the front pages of newspapers. The
later reports that found no such problem were put on page 22. (The later reports that
the systems were designed shoddily may have been covered even less, but the settlement
got back on the front pages.)

Toyota lost a lot of credibility and income over a defect that may never have caused a single accident. What
will happen to a company whose system really did cause an accident, particularly if there
are injuries, or in the worst case, fatalities?

It's clear that fatalities involving machines bring up a lot of press attention, and strong
reaction. Accidents with people movers or elevators usually involve several days of shutdown,
even though -- perhaps because -- these accidents are extremely rare and fatalities even rarer.

Here, the existing car companies have a disadvantage, in that they have reputations as
car makers which might be tarnished. If a bad image from a robocar accident hurts sales
of regular cars, the cost can be quite large. A non-car company like Google does not have
a reputation in autos, though of course it is a major company with other large established
businesses which might suffer. A newcomer brand will have nowhere to go but up, but might
find its growth slowed or stopped by bad PR, and it might lose further funding.

Unlike Toyota's accelerator case, it should be clear to most of the public that a failure
of a Toyota self-driving car would not imply a greater risk in driving an ordinary Toyota.
Robocar opponents still might well try to make people feel that way.

Aviation automation is commonplace today, and there have been fatal air crashes due to
pilots who were confused by automated systems or unaware of their state. Generally it is
not believed an automated flight system has flown a plane into a crash without human error,
but this is not surprising as pilots normally take over in unusual circumstances. There are
many cases of incidents being averted by automated systems, and even tragic instances of
incidents caused because pilots overrode automated systems that would have saved the day.
However, all automation-involved incidents generate a lot of public attention and regulatory
scrutiny, and strike great fear into our hearts.

I have frequently seen predictions of the form, "As soon as there's an accident, this is all
over." There is some risk of this, but with proper preparation I believe such a strong
reaction can (and should) be averted. Of course this depends on the nature of the accident
and how well the makers of the particular robocar did.

Elsewhere in the World

The situation will differ in other countries. For example, New Zealand has no liability for
automobile accidents at all. They realize the truth above -- that the cost is always spread over
drivers and car buyers anyway, and so the cost of all accidents and medical care comes out of
state-run insurance. Regimes like that might be very friendly to development of robocars
because as long as they lower the accident rate, the government will be happy and there
will be no legal need to argue over details at high cost. While New Zealand is a very small
country without car manufacturing, it could be an example of how friendly a system could be
to development of this technology.

So who is liable?

Everybody will get sued in the first accidents, but over time the law will want to settle
on a set of rules to decide who should have liability. In the end, the owner/drivers will
pay, but lawyers and other players will be keen to settle this issue so they have certainty
and can move ahead with business.

In some states, the law actually includes words to say that for the purposes of the vehicle
code, the "driver" is either the human near the wheel, or the one who activated the self-driving
system. This applies to violations of the vehicle code (i.e. a ticket) but not necessarily
liability.

Consider the arguments for putting liability on the various parties:

The person in the driver's seat (even though not driving) and their insurer

This is how we do things today -- it's well understood.

The cost will be based on their insurance and factors similar to today's. That could mean
lower damages than a deep-pocketed company would be charged.

In the end, a human (or corporation) is always responsible under the law; robots themselves
are not liable. Generally, this "driver" in recent laws is the one who pushed the
button to activate the self-driving system, knowing it might fail (even if at a low rate).

In some cases, this person will have more direct liability if they exercised manual controls,
such as setting the desired speed too high.

The car probably came with warnings about the risks of operating the system, and this person
agreed to them or even signed a specific contract. (It's hard to assume liability for another's
negligence, though.)

Bundling insurance costs into the price of the car is novel and alters car-buying economics.
Presumably insurance companies will work out appropriate premiums based on their own more detailed
analysis of the car in question.

The company that built the car and/or self-driving system

If there's a product defect, they probably were responsible for it. It's unlikely the specific
defect was warned about in documentation or contracts.

They are the ones most in control of the factors that led to the accident. The human was mostly
a passenger who trusted the car and the promises of its maker. This becomes even clearer when
all the human did was request a trip on a cell phone which summoned an empty car that crashed
on the way.

If drivers can get a ticket or lose their licence for the actions of the self-driving system,
they will be wary of trusting it.

The plaintiff wants to sue the person with the deepest pockets. (Good for the plaintiff, bad for
the industry and society.)

The vendor will almost surely be regularly updating the software, and will be responsible for
those updates. Other parties have little ability to judge the quality of each update.

The Robotaxi Firm

In the future, people will ride in robotaxis that come when summoned on a mobile phone and
take their passengers to their destinations without anyone touching the wheel. Eventually there will not even be a wheel.
Here the situations above might apply, but there are a few other wrinkles.

This company probably negotiated a much more specific contract with the maker of the vehicle, governing
liability, and was able to enter into it with much more diligence.

The pockets here are deeper than an individual's, but not as deep as a robocar maker's.

The passenger is much more removed from any control over the system, though technically it could be
viewed that they activated it by summoning it and telling it to go, since there is no other human
in the loop.

Others

There might be claims against makers of components, such as sensors, controllers, computers or even
the software itself if that is done by an independent vendor. This is already a common situation
in products that may result in harm, all of which are built by a complex web of suppliers and
integrators. These issues have much litigation history, but are probably not special in
the robocar situation other than due to the high volume of use and possible cases.

Public Policy

The results of these first lawsuits might come from a view somewhere between the following
two approaches:

Cars are, in the hands of human drivers, the 2nd most dangerous consumer product allowed to be
sold, beaten only by tobacco. As such, robocars are an effort to improve the safety of this
product, and the standard of care required of their makers is to accomplish that goal. Vendors
may release products which they know will cause some accidents, so long as they have met
that standard.

Robocars should be considered strictly as machines, independent of the human driving they
are replacing. More like an elevator or automated people mover, no failure causing damage
or injury should be acceptable. They should be as safe as products like elevators or not
sold.

People advocate for both of these approaches. It seems clear the latter approach would
delay deployment considerably. Arguably, a number of injuries would take place under the
traditional human+car system that a robocar system would have avoided. Likewise, some
robocar accidents, though fewer, would occur that might not have happened under human
control.

Legislative Change

It's probably too early for legislative change, and if the cost of robocar accidents
ends up similar to that of today's accidents, there may be no great desire for it.

With the bias of wanting the technology to be quickly but safely deployed, there are some
legislative approaches that might be looked at in the future:

Laws could cap liability in certain circumstances, attempting to normalize damages to the
amounts typical of human-driver accidents. There are precedents in the law of vaccine
liability and civil aviation liability.

Regulation of insurance could change. In California, for example, the constitution forbids
giving big discounts for having a car that makes you safer. Really. It could be made
easier to move to "pay as you drive" insurance that is charged by the mile, and to have
one insurance plan for miles driven by a person, and a different one for miles driven by the
car.
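A minimal sketch of how split per-mile pricing could work; the rates and the function are invented for illustration, not drawn from any actual insurance product:

```python
# Hypothetical "pay as you drive" rates, priced per mile by mode.
# Both numbers are invented; a real insurer would set them from data.
HUMAN_RATE_PER_MILE = 0.06    # human-driven miles
ROBOCAR_RATE_PER_MILE = 0.02  # self-driven miles

def premium(human_miles: float, robocar_miles: float) -> float:
    """Charge each mile at the rate for the mode it was driven in."""
    return (human_miles * HUMAN_RATE_PER_MILE
            + robocar_miles * ROBOCAR_RATE_PER_MILE)

# A month of mostly self-driven travel:
print(premium(100, 900))  # 6.0 + 18.0 = 24.0
```

The point of the split is that the safer self-driven miles stop subsidizing, or being penalized by, the riskier human-driven ones.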

Standards of care could be clarified, and liability law refined, so that developers can
experiment with potential new safety technologies without fear of being punished for any
decision not to use one.

As noted, some countries have entirely different liability systems, including no-fault
insurance or even no-liability. Nations may switch their systems to encourage technology.

Naturally, laws which normalize damages would reduce awards for some plaintiffs and thus not be seen
as a boon to all.

Disclaimer

The author is not a lawyer and this is not legal advice. While the author has consulted for
teams building robocars, he has not been privy to their strategy planning for accidents, and
none of this is based on any such plans, nor represents in any way the position of any of the
author's associates or clients.

Comments

You may comment on this article at the blog post
where there is additional material on real robocar accidents.