While this may seem like a minor problem, it is in fact a death knell for profitable autonomous vehicle projects.

I have argued for a while that the people who understand how far autonomy and artificial intelligence have actually come, the people who study them, are far less impressed by how much they can accomplish safely and reliably. "Autonomy" and "artificial intelligence" may be marketing-weaponized jargon to impress investors, but they are two terms that have been promised since the 1970s.

Mobile autonomy is not ready. I attended a talk by Sebastian Thrun at IEEE CVPR 2012 (the IEEE, the Institute of Electrical and Electronics Engineers, is the body behind standards like WiFi). At that time he said the problem was "90% solved with 90% to go", in a joking manner, to a room full of people who understood what he meant, not an audience full of investors with dewy eyes and dreams of striking it rich. He was admitting that the solution isn't there. That's not what they tell investors, is it? He left Google soon after to work on Udacity.

He went on to describe how one autonomous car of that era nearly caused an accident: it was driving along smoothly when, without warning and without any way to anticipate it, it stopped for a floating plastic bag. The car behind it screeched to a halt.

When I was a Master's student at the University of Alberta, I was awarded 2nd prize by the IEEE Northern Canada Section for my adaptive AI Rock Paper Scissors / Roshambo player, which beat all the competitors from the recent RPS championships. My strategy was simple: I built an adaptive player that used ALL the other players' strategies against them. It chose a strategy from the pool and, over time, weighted its choices toward the better ones, competing in a non-predictable manner. When it started losing, it would revert to the game-theory Nash equilibrium of 1/3 rock, 1/3 paper, 1/3 scissors and play for a tie. It beat all the others, including Iocaine Powder, the reigning champ.

It was a novel approach, but it didn't have any real insight into how it was winning or what key factors underlie a winning strategy. That was my novelty: it wasn't playing a defined strategy. That made it unusual, so the other computer players couldn't store a time-history of its moves and predict how to beat it.

So what it did do in effect was present an unusual situation to the other AI agents. And they failed. I didn't beat them, they failed to beat me.
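For the curious, that kind of meta-strategy can be sketched in a few lines of Python. The class and expert names here are illustrative stand-ins, not my original entry: a pool of expert strategies weighted by past success, a weighted random pick so the play stays non-predictable, and a fallback to the 1/3-1/3-1/3 Nash mix when the pool starts losing.

```python
import random

# which move beats which
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# Toy stand-ins for the pool of competitor strategies. In the real entry
# these were the other players' published strategies.
def always_rock(opp_history):
    return "rock"

def beat_last(opp_history):
    # assume the opponent repeats their last move, and beat it
    return BEATS[opp_history[-1]] if opp_history else random.choice(list(BEATS))

class MetaPlayer:
    """Weights a pool of expert strategies by past success; reverts to the
    Nash mixed strategy (uniform random) and plays for a tie when losing."""

    def __init__(self, experts, decay=0.9):
        self.experts = experts
        self.weights = [1.0] * len(experts)   # one weight per expert
        self.decay = decay                    # forget old performance slowly
        self.score = 0                        # running win/loss tally
        self.last_picks = []

    def move(self, opp_history):
        # ask every expert for a suggestion this round
        self.last_picks = [e(opp_history) for e in self.experts]
        if self.score < 0:
            # losing overall: fall back to the Nash equilibrium
            return random.choice(list(BEATS))
        # otherwise sample an expert in proportion to its weight
        total = sum(self.weights)
        r, acc = random.uniform(0, total), 0.0
        for pick, w in zip(self.last_picks, self.weights):
            acc += w
            if r <= acc:
                return pick
        return self.last_picks[-1]

    def update(self, my_move, opp_move):
        # score the round for the win/loss tally
        if BEATS[opp_move] == my_move:
            self.score += 1
        elif BEATS[my_move] == opp_move:
            self.score -= 1
        # reweight each expert by whether its suggestion would have won
        for i, pick in enumerate(self.last_picks):
            won = BEATS[opp_move] == pick
            self.weights[i] = self.weights[i] * self.decay + (1.0 if won else 0.0)
```

Because the move comes from a weighted random draw over a shifting pool, there is no single fixed policy for an opponent to model, which is exactly the property that made the original entry hard to beat.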

It would be a philosophical stretch of epic proportions to say that mobile autonomy AIs are the same as AI game players.

But it is a philosophical stretch of even greater proportions on their part to claim that the AI algorithms that work in defined-space games like checkers or Go are up to the challenge of dynamic problems in 4D time-space.

I claim they are similar, yet the mobile autonomy problem space is far more complicated and time-varying than the game-player problem space. That much is beyond dispute.

The problem with mobile autonomy is not that it doesn't work; it is that it only works in the known part of the problem space. It can't guarantee a victory (that is, driving up to users' expectations) in unusual situations, like the blowing plastic bag mistaken for an obstacle. If your robot car depends on a map of the roads, what happens in a construction zone? What happens when a road disappears, or a house is put in its place? What happens when there is an accident in the middle of the highway? A flying tire? A cardboard box? What happens if a police officer outside the vehicle gestures for the car to pull over?

I research autonomy. I know the algorithms on the inside of the car. I would not get in an autopilot vehicle.

In fact, I was one of the first autonomous robot wranglers when we made one back in 2005.

I work on another one right now.

Waymo is developing autonomous cars that they admit are not autonomous. They blame it on drivers getting careless, behaviour their own testers exhibited during beta-testing, but they are admitting they can't make the vehicle work without the driver almost in control. That makes their AI system an expensive paperweight.

In any case, they are trying to make the driver responsible so they can de-risk their own product, not make drivers any safer. It's like a reverse-liability Jedi mind trick.

But that won't stop them from being sued or losing huge court rulings against them.

Why that matters to Waymo and Uber and every other neophyte mobile autonomy company: in the US, product law is governed by strict product liability.

Under strict liability, the manufacturer is liable if the product is defective, even if the manufacturer was not negligent in making that product defective.

Slick US lawyers will have no problem pinning the blame for accidents on autonomous vehicles if the human being can't know where the AI will fail. They can show the products are defective because the autonomous vehicles fail to understand scenarios that are trivial for humans. It won't matter what machine learning they use or how much data they crunch. These lawyers will lay out the details of the accident for a jury full of drivers. The jurors will recognize, from personal experience, how easily the accident could have been avoided, and they will see evil billion-dollar companies lying to them about how well their products work. The evidence will be the accident itself, not the assurances nor the technology. If a robot cannot figure out a plastic bag, it isn't ready for the road. As a driver, you know that plastic bag might be a dog, might be an unannounced construction zone, might be an oversized vehicle, and so on. That means ALL these products are inherently defective. Given the state of the art right now, that is a huge unfunded liability risk for autonomous vehicles.

And the question they will pose, the one that will win huge settlements for their clients, will be a variation on this:

"I ask the jury to consider: as a reasonable driver, given the facts in evidence surrounding this accident, would you have been able to avoid this tragic accident? If so, then you must find the product defective because it wasn't capable of doing what a reasonable driver can do."

Friday, September 1, 2017

Today, China and the USA owe more money than they can possibly repay within the lifetimes of their current citizens.

China's labour supply is surging, and its new workers will demand the same rights and benefits American workers have. China will end up saddled with the same entitlements that make first-world nations expensive. Companies will leave.

The USA is coming to terms with an ageing workforce that demographically cannot sustain the industrial output of 20 years ago. America risks bankrupting its professional class, which owns most of the bond debt, if it can't restructure. Companies may never return.

Each faces an internal crisis in the making, and together they face a unique moment for humanity.

Both are looking within their own borders for assets to prop up their debts. Each would find only failure there, and the resulting economic collapses would inevitably spread across the globe as war.

Instead, if both China and the USA loaned money out to the rest of the world, to the developing nations that can grow at 3% or more, and used those external assets to fund their debt commitments back home, they would spread prosperity outward to humanity and avoid an inevitable internal collapse. They would raise standards of living elsewhere and thereby reduce the need for large military expenditures.

This is humanity's unipolar moment.

Either path taken, it will have been this instant in history that led to that fateful outcome.