Simulations help explain why autonomous vehicles do stupid things

Special test programs hope to help robotic systems make better decisions in short order.

Here’s a riddle: When is an SUV a bicycle? Answer: When it is a picture of a bicycle that is painted on the back of an SUV, and the thing looking at it is an autonomous vehicle.

An edge case from a Cognata Ltd. simulation. If you were an autonomous vehicle, this would look like two cyclists instead of the back of an SUV.

Cyclists painted on the back of an SUV are what’s known in the autonomous-car industry as an “edge case”: a situation where autonomous system software understands an oddball scene differently from how humans would. The result of edge-case scenarios is generally unpredictable behavior on the part of the robotically guided vehicle.

Edge cases like this one are the reason the Rand Corp. reported in 2016 that autonomous cars would need to be tested over 11 billion miles to prove that they’re better at driving than humans. With a fleet of 100 cars running 24 hours a day, that would take 500 years, Rand researchers say.

It’s not just scenes painted on the back of vehicles that throw autonomous vehicles for a loop. “There are a lot of edge cases,” says Danny Atsmon, the CEO of autonomous vehicle simulation firm Cognata Ltd. “The classic example is that of driving at night after a rain. The pavement can be like a mirror, so you see a car and its reflection. Autonomous systems can interpret the scene as two different cars.”

Cognata, based in Israel, has a lot of experience with edge cases because it builds software simulators in which automakers can test autonomous-driving algorithms. The simulators allow developers to inject edge cases into driving simulations until the software can work out how to deal with them. This all happens in the lab without risking an accident.

“It can take months to hit an edge-case scenario in real road tests. In a simulation that’s not a problem,” says Atsmon.

Simulations like those that Cognata devises are also helpful because of the way autonomous systems recognize situations unfolding around them. Traditional object recognition techniques such as edge detection may be used to classify features such as lane dividers or road signs. But machine learning is the approach used to make decisions about what the vehicle sees.

To an autonomous vehicle system running a Cognata simulation, this doesn’t look like the back of an RV with a scene painted on. According to the color map (bottom), the autonomous software interprets the back of the RV as a weirdly shaped building dead ahead. Edge cases like this one help developers debug the machine learning algorithms that interpret driving situations.

Here, learning algorithms handle image recognition. The feature detectors are so-called convolutional layers: software structures that adapt to training data. To handle specific problem scenes, developers collect numerous training examples and choose parameters such as the number of layers in the learning network, the learning rate, the activation functions, and so forth. Eventually, the recognition system adapts its features to the problem at hand. This approach works better than handcrafting features, which may handle foreseen problems quite well but break on others.
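To make the idea of a convolutional feature detector concrete, here is a minimal sketch in plain Python. In a real network the kernel weights are learned from training data; this example hard-codes a vertical-edge kernel purely to show the mechanics, and the tiny image and kernel values are illustrative assumptions, not anything from Cognata’s software.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep-learning libraries): slide the kernel over the image and sum
    the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """A typical activation function: keep positive responses only."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A tiny grayscale image with a vertical brightness edge down the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Hand-set vertical-edge kernel; a trained network would discover
# kernels like this one on its own from examples.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = relu(conv2d(image, kernel))  # strong response along the edge
```

The point of the learned approach is that nobody has to design `kernel` by hand: gradient descent adjusts the weights until the detectors respond to whatever features actually distinguish the training examples.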

To help developers of automated vehicle systems, Cognata recreates real cities such as San Francisco in 3D. Then it layers in data such as traffic models from different cities to help gauge how vehicles drive and react. The simulations are detailed enough to factor in differences in driving habits of people in different cities, Atsmon says. The third layer of Cognata’s simulation is the emulation of the 40 or so sensors typically found on autonomous vehicles, including cameras, lidar, and GPS. Cognata simulations run on computers that the auto manufacturer or Tier One supplier provides.

Sensor emulation is particularly important because autonomous cars overcome issues such as baffling images by fusing together information gathered from different types of sensing. Just as cameras can be fooled by images, lidar can’t sense glass and radar senses mainly metal, explains Atsmon. Autonomous systems learn to deal with complex situations by gradually figuring out which data can be used to correctly deal with particular edge cases.
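The fusion idea can be sketched as a simple cross-check between sensors. The rule below is a hypothetical illustration, not Cognata’s method: the class name, the depth-variance measure, and the threshold are all assumptions chosen for the painted-SUV example. A real cyclist has 3D structure, so lidar returns across its bounding box vary in depth; cyclists painted on a flat tailgate do not.

```python
def fuse(camera_detection, lidar_depth_variance_m2):
    """Cross-check a camera classification against lidar geometry.

    If the camera reports a cyclist but the lidar returns inside the
    detection box are nearly coplanar, the 'cyclist' is probably a
    picture on a flat vehicle surface.
    """
    FLAT_SURFACE_THRESHOLD = 0.05  # m^2; illustrative tuning parameter
    if (camera_detection == "cyclist"
            and lidar_depth_variance_m2 < FLAT_SURFACE_THRESHOLD):
        # Camera sees a cyclist, lidar sees a flat plane: trust the geometry.
        return "vehicle_surface"
    return camera_detection

fuse("cyclist", 0.01)  # painted cyclists on a flat SUV tailgate
fuse("cyclist", 0.80)  # cyclist with genuine depth structure
```

In practice the weighting between sensors is itself learned from data rather than hand-coded this way, which is how systems gradually work out which sensor to believe for each edge case.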

Comments

The “obvious” answer is to have the image recognition software consider the “speed” of the object it “thinks” it is seeing. For example, the RV with the “building” is actually travelling at the speed of the rest of the traffic. A building moving at traffic speed is NOT a building. The same can be said for the two bikes pictured on the back of the other van. Another check could be to evaluate the relative position of the “building” to the edges of the truck. Again, a moving building is not a “real” building.
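The commenter’s speed check could be sketched like this. The class list and speed threshold are made-up illustrations of the idea, nothing more:

```python
# Object classes that should never move on their own.
STATIC_CLASSES = {"building", "tree", "road_sign"}

def plausibility_check(detected_class, speed_mps):
    """Flag classifications that contradict the object's measured motion."""
    if detected_class in STATIC_CLASSES and speed_mps > 0.5:
        return "implausible"  # a "building" moving at traffic speed is not a building
    return "plausible"
```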

OK, so now let’s talk about the reflection on wet pavement of the car in front of you. The reflection moves at the same speed as the real object. So much for considering the speed of the object the software thinks it is seeing. Situations like this show why everybody in the autonomous vehicle community is preaching sensor fusion and machine learning. Otherwise there are about a million special cases you have to think about in advance in order to avoid confusion.