In the last couple of years, companies like Uber, Waymo, and GM’s Cruise have been testing more and more self-driving vehicles on public roads. Yet important details about those tests have been kept secret.

Two Democratic senators are determined to change that. Last Friday, they sent out letters to 26 car and technology companies seeking details about their testing activities—part of a broader investigation into the safety of driverless vehicles. [Jerri-Lynn here: the complete text of the letter sent to Uber can be found in the previous link. The companies the letter was sent to can be found in this Markey press release.]

The Ars Technica piece notes that some of the requested information has already been provided to regulators in California– which regulates self-driving vehicles somewhat more stringently than other states such as Arizona– where extensive testing is being done and where a self-driving vehicle killed a pedestrian in March.

Other than state regulation, companies have largely been left to their own devices in constructing and implementing self-driving testing protocols, without federal oversight by the National Highway Traffic Safety Administration (NHTSA), the relevant federal regulatory authority. The agency released a preliminary report on that accident last week.

One major quibble with the Ars Technica piece is that it suggests lax oversight by the NHTSA is a uniquely Trumpian defect:

The [NHTSA] has broad authority to seek this kind of information from companies in the car business. But under the Trump administration, NHTSA has taken a hands-off approach. Companies have been free to test self-driving cars on public roads without significant federal oversight.

NHTSA’s primary transparency effort has been to ask companies to voluntarily submit “safety reports” detailing the safety features of their vehicles and how those vehicles deal with a number of safety issues.

But as far as we know, only two companies—Waymo and Cruise—have submitted safety reports to NHTSA. And while these reports do provide a significant amount of information about their testing programs, they are fundamentally marketing documents. Unsurprisingly, for example, they don’t include information about “concerns raised by employees about your company’s safety protocols.” By explicitly asking for this kind of information, the senators could help bring to light potential problems with companies’ testing programs—and perhaps help prevent another avoidable deadly crash.

Despite this sad and sorry history of inadequate federal oversight, I certainly hope the Markey/Blumenthal effort shakes out some information on the testing procedures companies are using to assess their self-driving vehicles– although I’m not holding my breath. In fact, to the extent that Congress has concerned itself with self-driving vehicles at all, the emphasis has been more on reducing corporate accountability, rather than on making sure companies hold themselves to transparency and safety standards.

Mandatory Arbitration and Self-Driving Vehicles

Even more important than this basic transparency issue is the extent to which self-driving car companies will be able to limit legal liability via mandatory arbitration clauses. The House last year passed legislation to exempt self-driving car companies from lawsuits and instead require mandatory arbitration, according to CNN Tech, Loophole would protect self-driving car companies from lawsuits. A related measure, the AV START Act, has been unanimously reported out of the Senate Commerce, Science, and Transportation Committee; the full Senate has yet to take action. If such legislation is passed, it would reduce the deterrent effect that possible lawsuits might have on self-driving car companies.

Regular readers know that mandatory arbitration is a key priority for business interests that seek to escape legal liability. Companies include “voluntary” clauses in their contracts, and consumers or potential employees have no choice but to comply– if they want to avail themselves of the service, product, or job. Agreeing to mandatory arbitration means surrendering the ability to participate in litigation, such as class actions.

The financial industry recently succeeded in invoking Congressional Review Act (CRA) procedures to overturn the Consumer Financial Protection Bureau’s ban on mandatory arbitration clauses in consumer financial contracts (as I discussed most recently here; the post links to my previous discussions of mandatory arbitration as well as the CRA). The United States Supreme Court has upheld such clauses in a string of cases, the most recent being this month’s Epic Systems v. Lewis, a 5-4 decision in a major employment law ruling. In April, the Court added another Federal Arbitration Act case to next term’s docket, Lamps Plus, Inc. v. Varela. The trend of these cases suggests that if Congress were to exempt self-driving companies from litigation, this use of mandatory arbitration clauses would be upheld (absent a major shift in the Court’s composition).

With legal challenges unlikely to overturn such clauses, the best hope– albeit, perhaps a forlorn one– is to fight such clauses at the political level. With self-driving vehicles, Ralph Nader– at 84 years old– is still on the case, according to CNN:

“Going back 50 years, I’ve never seen a more brazen attempt to escape the rule of safety law, and the role of the courts to be accessible to their victims,” longtime consumer advocate Ralph Nader told CNN. “With their unproven, secretive technology that’s fully hackable, the autonomous vehicle industry wants to close the door on federal safety protection and close the door to the court room.”

Will they succeed? On mandatory arbitration, alas, I hope the answer is no, but fear it will be yes.

48 comments

Mad genius Elon Musk — our new Hawking, now that Hawking’s left — proudly tweeted last Friday about an over-the-air software update which improves the Tesla Model 3’s crappy braking by 20 feet in a 60 mph stop.

If you’ve ever had a personal computer suddenly freeze as a force-fed Microsoft OS update wipes everything, you know that “What could possibly go wrong” is more than a rhetorical question.

Fortunately car-clogged America still has millions of vintage display-screenless, low-tech vehicles available for us “don’t wanter” raging technophobes, who can plainly see that cars (and trucks) suck worse with each passing year.

Yes, I own one of those cars and even repair it myself–no mechanics allowed. And I agree that any car that needs periodic firmware updates (and that’s most new ones these days, no?) should be regarded with suspicion. Internal combustion is a fairly mature technology at this point.

But in the course of this debate let’s not forget that many thousands of people die every year in human driven cars and only a handful in automated cars and only one of those a pedestrian. It’s just possible that reducing the human factor in driving could save many lives. One should also point out that regardless of legal liability the AZ incident was a great PR disaster for Uber and crashed their whole testing program there. The car companies and others have tremendous incentive to make sure such a situation doesn’t happen again since even a get out of jail free card won’t stop consumers from walking away in the showroom.

But in the course of this debate let’s not forget that many thousands of people die every year in human driven cars and only a handful in automated cars and only one of those a pedestrian.

Is this really a fair comparison? I have a hard time believing that the number of actual (on road, not simulated) miles accumulated by driverless cars is really equivalent to the number for human-driven cars for a given time period. It’s also my understanding that the vast majority of these tests have been conducted in very carefully controlled conditions and venues.

Going by news reports, the number of deaths due to self-driving would have to be less than a dozen, including those Teslas, which are operating under anything but controlled conditions. Uber’s controlled conditions were obviously deficient (the backup driver was also at fault), while Google/Waymo has been operating freely on public streets in Northern California and AZ without hurting anyone as far as I know, although their cars have been struck by human drivers, sometimes vindictively.

And please bear in mind this is a still evolving technology so any conclusions or comparisons would be quite premature. Still I believe I’ve read that the per mile comparison with human driving is in favor of self drive by far…don’t have a link.

Finally if you think robot cars are irrational I’ll just say that I once lived in Atlanta and hurtling along at 80 miles an hour in a metal box surrounded by hundreds of others a few feet away doing the same can make you question the rationality of that as well. Your safety is only partially in your hands. But at least if you wipe out in a crash it will be humans that did it….

I don’t believe in touting technology successes that have yet to be achieved.

Various experts have warned that the effect of the technology, given the considerable difficulty of identifying pedestrians quickly enough, will be to transfer deaths from deaths of people in cars to deaths of pedestrians.

Fair enough, but I do believe we should keep an open mind about technology that hasn’t yet had a chance to be condemned as a failure. While self-drive taxis may seem rather dubious there are other applications–particularly on freeways–where it could be quite beneficial. And these experiments don’t necessarily have to lead to full bore robot driving. The technology being developed could help human drivers to be more competent.

The problem is, we currently see thousands of people die on the roads every year – in the US, roughly 35,000 people.

If we could replace that with self-driving cars that lead to, say, 5000 deaths per year, that would obviously be a very good thing. Do not let the perfect be the enemy of the good. And it goes without saying that current driverless technology is not mature, that’s why they have testing programs.

There is an analogy with nuclear power here; we have been happy to see hundreds of thousands of deaths a year from coal-based pollution (as well as global warming) because of an irrational fear of very rare nuclear accidents that kill a few thousand at most. If we ban or drastically limit self driving cars because of a fear of being run down – disregarding the far more likely danger of being run down by a drunk driver – we may reach the same situation.

Sometimes you have to just dispassionately count the bodies and go for the smaller pile.

FYI, leading scientists in the field cannot even agree on the order of magnitude of the death toll resulting from Chernobyl 30 years after the fact. So, if you have developed a methodology to “dispassionately count the bodies” in the event of a nuclear accident, you should publish it to let the whole world enjoy the fruits of your scientific breakthrough.

Comparing an unpleasant fact with a fantasy (35,000 vs. “let’s say 5000”). Your analogy fails because there is nuclear energy (and it’s still melting down in Fukushima, and I think that’s the proper corollary) and coal energy. Also left out of the case re traffic fatalities: at least some of those are due to equipment failure (flat tire, brake line failure, bad master cylinder, etc.), and those are not going away. Your number is not 35,000 drunks killed people, so break down your numbers and try again. It’s 35,000 people died on the road, which makes it a clearly hazardous place, and no, neither you nor anyone else has proven self-driving cars to be safer, except for people who plan to make a fortune on it. (Imagine the Google meeting where they unveiled Waymo… {Scene: guy in a black turtleneck with a PowerPoint projection on the wall behind him; he forms his hands into a steeple (because tech is a religion) and states with gravity: “If you thought you made a lot of money on Google, this is worth WAYMO.”}) The techies I know are planning to get rid of public transportation because it’s going to make them so much money, https://www.sltrib.com/news/politics/2017/08/16/will-self-driving-cars-taxis-make-mass-transit-obsolete/ and perhaps more importantly separate them from the hoi polloi (well we know they won’t be sharing their self-driver, that’s for the little people, and gross…) and https://www.theguardian.com/commentisfree/2018/apr/08/may-i-have-a-word-about-self-driving-cars-jonathan-bouquet
It’s a grift.

The point of my comment about highway deaths is that Nader’s push for highway safety seems to have hit a brick wall. We have made the consequences of accidents less deadly but done little to make the accidents themselves less likely. If technology can help with this then it’s unclear to me at least why the attempt should be regarded with so much fear and trepidation. And while it’s also true that companies like Uber and Tesla seem to have acted irresponsibly that doesn’t mean there aren’t reasons beyond greed or corporate fecklessness for pursuing this.

What are the reasons beyond ‘greed and corporate fecklessness’ for which this technology is being developed? What reasons should there be? I think we can agree that saving lives is not why companies are investing in this. That robocars will save lives at all is a conjecture that starts with the premise that eventually AI, processing, and sensor technology will be able to outdrive humans. This seems obvious in the sort of techno-utopian way that we assume that my new phone’s camera will be better than the old. But the learning curve may be long.

You object that the technology can only be proven by use on the streets, which is true, but how many lives will we sacrifice to get to the point of proving or disproving your conjecture? I don’t think a moral argument can be made for human sacrifice on the altar of faith in technology.

What other arguments are there that are compelling enough to let companies risk others’ lives to attract and calm investors, if not to actually make a profit? Robocars remain a fake solution in search of a problem.

Correct, but you missed the reason why the braking was turned off. The pedestrian/object identification delivers a large number of false positives. So basically, to be safe without the human driver (which as we have seen is not an adequate fail safe but we are articulating theory), the car would brake with such frequency and severity as to put off passengers.

I think a red herring is being dragged in front of our noses, with the NHTSA factoid that the car’s own automatic emergency braking system is turned off during times when the vehicle is in autonomous testing mode.

It should not have mattered that this system was turned off during testing, because the autonomous system can brake on its own– unless we take literally that braking was turned off and done by the human overseer behind the steering wheel, which isn’t any sort of testing at all. My assumption is that the car stops by itself at red lights and stop signs or behind traffic when called on to do so. Whether or not the system provides a jerky ride is beside that point.

To recap the situation: at six seconds before impact, the autonomous system detected something. It couldn’t accurately identify what that something was, and in the interim between six seconds and 1.3 seconds before impact, it changed its “mind” about what that something was at least three times, never correctly identifying a person walking a bicycle across the road.

At 1.3 seconds before impact, the autonomous system decided something needed to happen– either steer, brake, or a combination of steer and brake– and it did nothing. It appears to have frozen in its tracks. Physically, the system is capable of doing those things, yet didn’t, which raises the question: where in the system was the failure?

Did a signal to the braking system to apply the brakes get sent and the brakes didn’t respond due to mechanical failure, or not?

If the signal never got sent, which is my suspicion, why would the AI chip make that “decision” when it realized that something must be done 1.3 seconds before impact?

My understanding is that these decisions are performed within the hidden layers of the chip, and therein lies the problem. All one knows is the input layer, which takes the sensor information and presents it to the hidden layers, which then tell the output layer what to do. There doesn’t seem to be any way to look at the hidden layers and go through them circuit by circuit to determine why the system did what it did.

Also, there is no way to tell if each chip is internally identical even if they get identical training and the observed outputs are identical, and when put in service, learn as they go, which, to my mind, means they will develop their own characteristics over time.

False positives are detected shapes that do not require braking, like flying hay, empty plastic bags, or smoke or steam coming out of a manhole. The way the convolutional neural networks identify them is by analyzing the video feed frame by frame, reading the rapidly changing pixel colors across consecutive or cascading frames. If the shape is moving too fast or rapidly changing its shape, that means the shape has no DENSITY, and the actuators will maintain the calculated path and speed.
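The frame-to-frame “density” heuristic described here can be sketched in a few lines. This is a toy illustration only– the function name, threshold, and bounding-box areas are all hypothetical, not anything from Uber’s actual stack:

```python
# Toy sketch of the "density" heuristic: a detected shape whose
# bounding-box area fluctuates wildly between consecutive frames is
# treated as a likely false positive (hay, bags, steam) and ignored.
# Threshold and data are illustrative only.

def is_likely_false_positive(areas, max_change_ratio=0.5):
    """Flag a detection whose bounding-box area changes too rapidly
    across consecutive frames (i.e., no stable 'density')."""
    for prev, curr in zip(areas, areas[1:]):
        change = abs(curr - prev) / max(prev, 1e-9)
        if change > max_change_ratio:
            return True  # unstable shape -> probably not a solid object
    return False

# A pedestrian gives a stable box from frame to frame; drifting steam does not.
pedestrian_areas = [1200, 1210, 1195, 1205]   # stable
steam_areas      = [300, 900, 150, 1100]      # wildly fluctuating

print(is_likely_false_positive(pedestrian_areas))  # False
print(is_likely_false_positive(steam_areas))       # True
```

The obvious failure mode, as the comment above suggests, is a real object– a person pushing a bicycle in poor light– whose detected shape is unstable enough to look like noise.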

In this situation, the lack of light, combined with Elaine rapidly crossing the street while pushing her bicycle, confused the software, and the car didn’t have the physical time to stop.

The FACTORY-installed safety braking system was entirely disconnected, since that system relies on radar and sonar installed in the car’s front bumper and has a different obstacle-detection approach that could interfere with the Uber self-driving system. Having conflicting readings from too many sensors could be a major problem for ANY system using sensors.

The robot did what it was designed to do, but that means the same object-detection methodology, used by ALL the other companies developing self-driving systems, will KILL AGAIN.

The Uber self-driving system had its braking available, since the car needs to slow down for EVERY turn it makes in traffic, but the NHTSA preliminary report DOESN’T explain how that robot decelerated from 43 mph moving speed to 39 mph impact speed in 1.3 seconds.
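For what it’s worth, the deceleration implied by those two figures can be checked with basic kinematics– a quick back-of-the-envelope sketch using only the speeds and interval quoted above:

```python
# Back-of-the-envelope check of the deceleration implied by the report's
# figures: 43 mph down to 39 mph over 1.3 seconds.

MPH_TO_MS = 0.44704  # metres per second per mph

v0 = 43 * MPH_TO_MS   # initial speed, ~19.2 m/s
v1 = 39 * MPH_TO_MS   # impact speed,  ~17.4 m/s
dt = 1.3              # seconds

decel = (v0 - v1) / dt   # average deceleration in m/s^2
print(round(decel, 2))   # ~1.38 m/s^2
```

That works out to roughly 1.4 m/s², i.e. very gentle braking– hard emergency braking on dry pavement is several times that– which is presumably why the commenter finds the unexplained speed drop notable.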

Turns out the AZ Uber mess was more the fault of Uber than the ‘driver’; it seems they were doing what they were told, as it wasn’t their phone they were looking at but a company-supplied iPad doing analytics work they were told to do. The other flub was that Uber had turned off automatic braking on the car because it might be a traffic problem.

This is the self-serving startup narrative based on scaremongering and demonizing human drivers. It underestimates the intelligence required to drive a car and overestimates the technology available. Another version of fake it till you make it.

In the recent Uber accident, the car saw the woman it killed a whole 6 seconds before impact and yet could not take evasive action. Defending this kind of immature technology betrays a complete lack of responsibility for human life.

On the other side there are hundreds of millions of cars on the road today making billions of trips daily without incident in all kinds of traffic and weather conditions.

Self-driving proponents completely fail to comprehend the sheer scale and diversity of traffic worldwide, focusing instead on fantasy narratives about safety backed by zero data and nothing approaching the technology needed to deliver at this scale across all conditions.

Cars driven by humans: 1.18 fatalities per 100 million vehicle miles traveled. Total miles driven autonomously by the Uber fleet as of January 2018: 2 million. Taking into account the AZ accident, this would make the Uber autonomous fleet around 50 times more dangerous than the average drunk-driving, Tinder-swiping American human being. Considering this, I would be very interested in the link you mentioned where “the per mile comparison with human driving is in favor of self drive by far”.
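The arithmetic behind that “around 50 times” claim is easy to reproduce from the two figures quoted (1.18 fatalities per 100 million human-driven miles; one fatality in roughly 2 million Uber autonomous miles):

```python
# Reproducing the fleet comparison in the comment above. Both input
# figures come from the comment itself, not independent data.

human_rate = 1.18 / 100_000_000   # fatalities per mile, human drivers
uber_rate  = 1 / 2_000_000        # one fatality in ~2 million AV miles

ratio = uber_rate / human_rate
print(round(ratio))   # ~42
```

The exact ratio comes out near 42 rather than 50, but the order of magnitude of the comment’s claim holds– with the large caveat, noted elsewhere in the thread, that a single fatality over a small mileage base gives a very noisy estimate.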

I am not sure you are looking at the correct numbers. According to NHTSA – https://www-fars.nhtsa.dot.gov/Main/index.aspx – there are 1.18 fatalities per 100 million miles driven. That means, if an individual drives 15,000 miles per year, that individual will face the possibility of dying in a fatal crash as a driver, passenger, or pedestrian once in 6666 years, so the cars and road system are extremely safe as they are today. Most self-driving car developers recognize this, like Chris Urmson in his Recode Decode interview – “Well, it’s not even that they grab for it, it’s that they experience it for a while and it works, right? And maybe it works perfectly every day for a month. The next day it may not work, but their experience now is, “Oh this works,” and so they’re not prepared to take over and so their ability to kind of save it and monitor it decays with time. So you know in America, somebody dies in a car accident about 1.15 times per 100 million miles. That’s like 10,000 years of an average person’s driving. So, let’s say the technology is pretty good but not that good. You know, someone dies once every 50 million miles. We’re going to have twice as many accidents and fatalities on the roads on average, but for any one individual they could go a lifetime, many lifetimes before they ever see that.” – https://www.recode.net/2017/9/8/16278566/transcript-self-driving-car-engineer-chris-urmson-recode-decode
or
Ford Motor Co. executive vice president Raj Nair – “You get to 90 percent automation pretty quickly once you understand the technology you need. It takes a lot, lot longer to get to 96 or 97,” he says. “You have a curve, and those last few percentage points are really difficult.”
Almost every time auto executives talk about the promise of self-driving cars, they cite the National Highway Traffic Safety Administration statistic that shows human error is the “critical reason” for all but 6 percent of car crashes.
But that’s kind of misleading, says Nair. “If you look at it in terms of fatal accidents and miles driven, humans are actually very reliable machines. We need to create an even more reliable machine.” – https://www.consumerreports.org/autonomous-driving/self-driving-cars-driving-into-the-future/
or
Prof. Raj Rajkumar, head of Carnegie Mellon University’s leading self-driving laboratory – “if you do the mileage statistics, one fatality happens every 80 million miles. That is unfortunate, of course, but that is a tremendously high bar for automated vehicles to meet.” (min. 19:30 of this podcast interview – http://www.slate.com/articles/podcasts/if_then/2018/05/self_driving_cars_are_not_yet_as_safe_as_human_drivers_says_carnegie_mellon.html)

What you are using is a fallacious, emotional statement made by self-driving car developers and enthusiasts in order to make people think that by adopting this technology they will be part of a bigger, better future, by doing essentially nothing.
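As a cross-check on the “once in 6666 years” figure quoted earlier in this comment– a short sketch using the 1.18-per-100-million NHTSA rate and the 15,000-miles-per-year assumption, both taken from the comment itself:

```python
# Cross-checking the "once in 6666 years" figure. Inputs are the NHTSA
# rate of 1.18 fatalities per 100 million miles and an assumed 15,000
# miles driven per year, both quoted in the comment above.

miles_per_fatality = 100_000_000 / 1.18   # ~84.7 million miles
years = miles_per_fatality / 15_000

print(round(years))   # ~5650
```

The result is about 5,650 years; the quoted 6666 comes from using the round 100-million-mile figure instead of 100 million / 1.18. Either way the order of magnitude– and hence the point about how high the human-driver bar sits– is the same.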

Interesting thought/question: do we move to a totally no-fault model for auto accidents? Model it on workman’s comp for medical, for example. Some states have sort of tried it, and features such as uninsured-motorist coverage etc. sort of handle the situation, albeit with lower limits. Actually, if you ever did universal health care, then a large part of the losses from accidents would fall not on consumers but on the health insurance pool. I gather that in a number of cases health insurers might sue the driver at fault to recover (or the health insurance includes a subrogation clause giving the insurance company first dibs on recoveries for medical expenses). So then the other economic losses are funeral costs and lost wages. You might then have arbitration for pain and suffering.

While the computers will make mistakes, and depending on how well the code is done (consider how much of that code will come from LCCs, aka low-cost countries; it’s doubtful it will be high-quality code), crashes won’t be reduced as much as they otherwise would be. While news that an automated car has crashed into a parked police car or fire truck is ‘news’, the only reason we don’t hear of that today with human drivers is that it’s not newsworthy (it happens too often). With Congress being in the pockets of business, no doubt short of a big news story (a big pile-up) they won’t help until it’s in the news (and maybe not then), until they have to (see massacres in schools with no changes); it really is Congress to blame. Now I do wonder how the computer will deal with this: you are driving on a 2-lane road with a ravine on your side, going around a curve, and you see a 7-year-old girl playing in the road, with a semi in the oncoming lane. What does the computer do? What does a human driver do?

Mandatory, binding, exclusive and secret arbitration should be prohibited in all consumer contracts. It is manifestly against the public interest. Its adoption is not a choice. It is forced on consumers who have no bargaining power.

It is a direct state subsidy to private corporate profits. It is the asbestos in the legal system. Its damage will keep accruing for decades.

No surprise here; the tech companies are used to using the public as beta testers and this is no different. This could be a toxic political issue on the local level though, because cities and municipalities still hold the power of approval. I can easily see the “think of the children” angle being used, where “So-and-so has allowed killer cars to be tested near local schools”.

Of course the point is that soon there will be no other option than self driving cars. That means that your voluntary arbitration clause is voluntary only in a legal sense. You can do the Cuba, walk to work (50 miles each way is good for ya), or voluntarily agree that any accident or mechanical failure is your own damn fault. Oops, I forgot one option; have Jeeves drive you in the ’23 Silver Ghost.

The part of the equation this post leaves out is land use. With an aging population dispersed throughout suburban sprawl, we’ll need lots of self-driving or Uber/Lyft/Via etc. trips because people will ultimately be unable to drive. Why don’t we have neighborhoods where inhabitants can walk to the store, or to work, or to the bus stop? Because sprawl.

And no, the market doesn’t want this. People pay premiums to live in pedestrian-friendly mixed use neighborhoods that make transit financially viable.

None of this is a deep mystery. Jane Jacobs, author of The Death and Life of Great American Cities, says something like this: “Modern [land-use] planning is positively neurotic in its willingness to embrace what does not work and ignore what does… It’s a form of advanced superstition [like 19th-century medicine] which thought bleeding patients would cure them.”

Bill Gates reportedly compared the computer industry with the auto industry and stated, “If GM had kept up with technology like the computer industry has, we would all be driving $25.00 cars that got 1,000 miles to the gallon.”

In response to Bill’s comments, General Motors issued a press release stating: If GM had developed technology like Microsoft, we would all be driving cars with the following characteristics:

1. For no reason whatsoever, your car would crash twice a day.

2. Every time they repainted the lines in the road, you would have to buy a new car.

3. Occasionally your car would die on the freeway for no reason. You would have to pull to the side of the road, close all of the windows, shut off the car, restart it, and reopen the windows before you could continue.

4. Occasionally, executing a maneuver such as a left turn would cause your car to shut down and refuse to restart, in which case you would have to reinstall the engine.

5. Macintosh would make a car that was powered by the sun, was reliable, five times as fast and twice as easy to drive – but would run on only five percent of the roads.

6. The oil, water temperature, and alternator warning lights would all be replaced by a single “This Car Has Performed An Illegal Operation” warning light.

7. The airbag system would ask “Are you sure?” before deploying.

8. Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key and grabbed hold of the radio antenna.

9. Every time a new car was introduced car buyers would have to learn how to drive all over again because none of the controls would operate in the same manner as the old car.

There seems to be a lot of effort going into making self-driving cars, at least the ‘news’ media have turned them into one of their sources for entertainment. Is there really a demand for self-driving cars?

Well, the Greatests didn’t stop driving when they should have. And the Silents didn’t stop driving when they should have. And nobody suggested robo-cars for them.

So why are the Silicon Yuppies suggesting robo-cars now? Because they can. The excuses for robo-cars are analogous to the excuses for computerising all the schools and classrooms. The goal is to create a false desire for a needless product in order to sell trillions of dollars worth of computer-related and programming-related stuff.

If you step back it’s weirder than that. The basic Car Thing has been pushed down to commodity level. Today’s 20K car is better in most technical ways than any car you could buy, at any price, 20 years ago.

So they need something “new”. Said new thing is “self-driving”.

But there’s a problem. Before, you could target building the fastest car, or the safest car, or the best combination of price and (one or the other). You could target building the cheapest car.

Only one can win that game outright, but everybody else can adjust their price point or something. The problem with “self-driving” is it has to be perfect. You don’t get to run over Granny only once every 11.2 million miles compared to your competition’s 9.7 million miles. You can’t run over Granny at all.

I don’t know how this sort of incentive system is supposed to work out. I don’t think it can. I’m just gonna grab some popcorn I guess.

Human driven cars kill tens of thousands of people a year (in the first world, don’t even ask about elsewhere). You can’t drive if you’ve had a drink. You can’t safely drive if you are ill, or tired, or too old.

Plenty of reasons for self driving cars to exist. The only reason they don’t is because it’s a very hard problem to solve.

The author mentions “the extent to which self-driving car companies will be able to limit legal liability via mandatory arbitration clauses” being supported by a unanimous House vote and now before the Senate as the “AV START Act.”

I thought at first that we were all going to be bound by this enforced arbitration clause – but it only applies to people foolish enough to ride in a “driverless” car in the first place.

Pedestrians and cyclists who are run over by a “driverless” car will still be able to sue. And if your legacy vehicle is hit by a “driverless” car, you can sue, too.

The question is, who gets sued? The car doesn’t have a human driver. Are the people sitting in the car liable for negligence regardless? Is the manufacturer of the “driverless” car liable? The software maker? Who is their insurer?

I can’t imagine State Farm or another insurance company giving a blanket liability policy to Waymo for any damage its “driverless” cars may inflict. So who insures them?

The real danger is that, as in the case of nuclear power, our political class may decide to indemnify the manufacturers/guarantee the profits of another extremely risky and unnecessary piece of tech, and make the taxpayer the insurer of last resort.

As a pedestrian who walks 3 miles a day, I’d like to know just how “arbitration” would apply to me were I to be hit by a “self-driving” vehicle, since I have not signed and would never sign away my right to sue.
Of course, I worry far more about folks texting and driving while on my daily walks.
I saw a light truck the other day with a “Hang up and Drive” sticker on its back window.
Gonna get me one of those suckers.

The worst of it is that as a pedestrian you are not insured, so after the accident it’s you personally vs. a greedy, never-gonna-die, asset-protecting insurance company. The system has one benefit for the wealthy: comprehensive car insurance covers you for anything that happens near a roadway involving cars. This is why all commuting/training cyclists and pedestrians should carry comprehensive car insurance, so that while you’re out of it on the operating table your lawyer is already talking to their lawyer. (Totally effed, but that’s the system and you’re in it.)

Actually, depending on which policies you have, you may be insured. Start with health insurance, which would cover the medical costs of the accident, then disability insurance covering lost wages, etc.
In addition, at least some auto policies provide that underinsured/uninsured motorist coverage would apply to you.
So in one sense some folks are halfway to no-fault insurance, since auto accidents are only one of the things that could cause such losses. Of course, the insurance policies are expensive.

Companies have been going all out trying to collect vast amounts of driving data, in the hope that it will enable AI software to succeed, but the problem is that you need a revolution in basic AI technology for it to really work.

I bet if you took all the high tech systems (Lidar, computer vision, enormous road data sets, inter-car communication) and just turned them into aids for human drivers you’d have significantly safer cars. But that’s probably not game-changey enough for the VCs.

Indeed. I think a lot of the issue is that there has been way too much hype about how robots are going to take jobs away from humans – and this is not just the typical hype that tends to occur when a not-quite-ready-for-primetime technology is coming on the scene. It’s politically motivated.

We hear that there is a terrible shortage of workers and we urgently need to import tens and soon hundreds of millions of third world refugees or we will run out of workers and we’ll all starve. And at the same time we hear that automation is making workers obsolete and that’s why wages are stagnant and declining. Of course, both of these are lies (and certainly they are contradictory). We are not running out of workers – if we were, wages would not be flat or declining. And automation is not, overall, reducing the demand for labor – if it were, productivity would be increasing and not decreasing as it is.

But this meme of robots replacing workers is being aggressively pushed by the establishment to try and explain away the deliberate focused policies that are crushing workers, and I think some people are perhaps starting to believe their own propaganda…

Automation does not generally cause wages to fall; rather, automation is an adaptation to expensive labor. And while AI has made great strides lately – and one can never rule out a sudden breakthrough – we are still missing some key aspects of human cognition. For example: deep learning algorithms can be trained to identify pictures that have kittens in them with extremely high levels of accuracy – but only after having seen millions of example pictures with kittens and consuming who knows how many megawatt-hours of power. A seven-year-old child can be shown a kitten just once and then they have it. Until we can crack this and other issues, truly flexible self-driving cars will remain a science fiction fantasy.
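[The sample-efficiency gap described above can be caricatured with a toy experiment. This is purely illustrative – synthetic 2-D points stand in for image features, and a nearest-centroid rule stands in for any learned classifier; none of the numbers come from a real vision system:]

```python
# Toy illustration of sample efficiency: how well a nearest-centroid
# classifier separates two synthetic 2-D "feature" clusters when it has
# seen 1 training example per class vs. 500.
import random

random.seed(0)

def sample(cls, n):
    # class 0 is centered at (0, 0), class 1 at (2, 2), with Gaussian noise
    cx = cy = 2.0 * cls
    return [(random.gauss(cx, 1.5), random.gauss(cy, 1.5)) for _ in range(n)]

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def accuracy(n_train):
    # "learn" a centroid per class from n_train examples, then score
    # 1000 held-out points by nearest centroid
    c0 = centroid(sample(0, n_train))
    c1 = centroid(sample(1, n_train))
    test = [(p, 0) for p in sample(0, 500)] + [(p, 1) for p in sample(1, 500)]

    def predict(p):
        d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
        d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
        return 0 if d0 < d1 else 1

    return sum(predict(p) == y for p, y in test) / len(test)

# accuracy with 1 example per class is a noisy gamble; with 500 it
# settles near the best this simple rule can do on overlapping clusters
print(accuracy(1), accuracy(500))
```

[Even in this trivially easy setting, a single example per class leaves the classifier at the mercy of which point it happened to see – and real image classifiers face vastly higher-dimensional inputs, which is why they need the millions of examples the commenter mentions.]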

Down here on the ground, there are all sorts of weird things walking, crawling, bouncing, rolling, or just standing around. Our eye-brain systems have spent millions of years evolving to quickly ID threats & opportunities (what’s that? should I run away quick, grab it, poke it with a stick, or ignore it?), so most of us are pretty damn good at that part.

Up there in the sky, there aren’t so many different kinds of Things to run into. I’d trust computers to handle the stick much better than most humans. They’d be really good at avoiding other flying cars (assuming they are networked, and excluding cases where the networks crash or get hacked). Birds would be a problem; that would have to be handled by making the vehicles strong enough to take hits. (Warning: Warranty not valid in states where albatrosses live.)