But with all of this talk about computer-controlled vehicles taking over for humans, who is going to take the blame when things go horribly wrong? We touched on this topic back in June, when survey participants were asked how a vehicle should “respond” in scenarios that forced a choice: protect the driver and passengers on board, or put those occupants at risk to spare the lives of pedestrians in the car's path.

The death of the Tesla driver, and the potential for self-driving cars to harm or kill others, raises the question: who is at fault when someone dies? Should the would-be driver/occupant be charged, or should it be the vehicle manufacturer? Or maybe the maker of the technology onboard should be in the hot seat?

Tesla Model S w/Autopilot

The folks over at Gizmodo asked experts in the field what would happen legally in the event of a vehicle-related death. Jean-François Bonnefon, of the Toulouse School of Economics, explains that any autonomous-vehicle accident is going to involve complex layers of culpability, ranging from the vehicle manufacturer to its various suppliers.

We are looking into various forms of shared control, which means that any accident is going to have a complicated story about the exact sequence of decisions and interventions from the vehicle and from the human driver… It’s not going to just be the car; it’s also going to be all the broader systems within the city that allow the car to navigate. So [we could] see accidents where the blame is really on the public infrastructure that did not allow the car to make a correct decision.

Interestingly, Patrick Lin, an ethics director and associate professor at California Polytechnic State University, noted that industry leaders will attempt to distract from any injury or fatality that occurs:

Industry will try to point to a lower injury/fatality rate with autonomous cars vs. human-driven cars to distract from the accident. This is the PR equivalent of “Hey look, a squirrel!”

In fact, this is exactly what Tesla did when the first Autopilot death occurred. The company issued the following statement at the time:

This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.

Lin went on to say that it could be a free-for-all with respect to lawsuits, with the vehicle owner, auto manufacturer, supplier, insurance company and even regulators being likely targets.

Given the stakes, the industry can’t afford to make any missteps, since they can be fatal. This isn’t like beta-testing office software, where a crash means lost data. With robot cars, the crash can be all too literal, and the payment is in lives.

We are in for a period of uncertainty once fully autonomous vehicles start hitting the roads and mingling with human-controlled ones. Plenty of legal wrangling is also likely in the next few years, as many auto manufacturers plan to offer their first production-worthy, fully autonomous vehicles by the start of the next decade.

Even once the ethics and best practices of self-driving safety decisions are presumably hashed out, the legal ramifications of autonomous vehicle-involved accidents will likely take years to work through the courts. So who's to blame? Sound off below with your input.