Researchers find a way to confuse autonomous vehicle cameras

A group of researchers has discovered that a few simple changes to street signs can thoroughly bewilder the machine learning algorithms that can ordinarily tell a stop sign from a speed limit sign. In a paper titled "Robust Physical-World Attacks on Machine Learning Models," researchers from the University of Washington, the University of Michigan, Stony Brook University, and UC Berkeley describe a program they created for testing purposes that mimics vandalism or graffiti on road signs. Showing the altered signs to autonomous driving systems caused a Stop sign to be mistaken for a Speed Limit sign 100% of the time. In another case, the systems mistook a Right Turn sign for either a Stop sign or an Added Lane sign.

The researchers say they are trying to build a better attack algorithm for testing the comprehension of machine learning systems used in autonomous vehicles. Previous attack programs of this kind tended to generate road sign camouflage that either isn't realistic or doesn't actually fool the machine learning algorithms used in autonomous vehicles.

Turn right? An autonomous vehicle may interpret this sign as a lane change indicator. Here a researcher holds the sign at slightly different angles to the camera.

It turns out that reliably fooling a sign recognition system is difficult. The researchers say a physical attack on a road sign must survive changing conditions such as varying distances, angles, lighting, and the presence of debris. Additionally, vehicle cameras will not necessarily produce consistently scaled images as distance changes; adversarial perturbations must survive these resolution changes and still map to the correct physical locations on the sign.
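The resolution problem can be illustrated with a minimal NumPy sketch (not the researchers' code, and all names here are illustrative). When a camera views a sign from farther away, fine per-pixel noise tends to average away, while large block-structured patches, like the stickers the researchers used, survive the downscaling:

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor):
    """Average-pool an image by `factor`, mimicking a camera seeing the sign from farther away."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

size, factor = 64, 8

# High-frequency perturbation: independent per-pixel noise, typical of purely digital attacks.
fine = rng.uniform(-1, 1, (size, size))

# Low-frequency perturbation: large uniform blocks, like physical sticker patches.
coarse = np.kron(rng.uniform(-1, 1, (size // factor, size // factor)),
                 np.ones((factor, factor)))

# How much perturbation magnitude survives the resolution change?
fine_survives = np.abs(downscale(fine, factor)).mean()
coarse_survives = np.abs(downscale(coarse, factor)).mean()

print(f"fine-grained noise surviving:     {fine_survives:.3f}")
print(f"block-structured noise surviving: {coarse_survives:.3f}")
```

The per-pixel noise largely cancels out under average pooling, while the blocky pattern passes through almost unchanged, one plausible reason physical attacks favor coarse, high-contrast patches.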

Camouflage graffiti stickers tended to make image classifier algorithms interpret this sign as either a 45 mph speed limit or a yield sign.

The researchers used three different road sign modifications (subtle, camouflage graffiti, and camouflage art) that their attack algorithm, called RP2, generated. In poster-printing attacks, they printed a digitally perturbed, true-sized image of either a Stop sign or a Right Turn sign, cut the print into the shape of the sign, and overlaid it on a physical road sign. Subtle perturbations caused the Stop sign to be misclassified as a Speed Limit 45 sign, the misclassification target, in 100% of test cases. Poster-printed camouflage graffiti caused the Right Turn sign to be misclassified as a Stop sign, its target, 66.67% of the time. In sticker attacks, they printed the perturbations on paper, cut them out, and stuck them to a Stop sign. Sticker camouflage graffiti attacks caused the Stop sign to be misclassified as a Speed Limit 45 sign 66.67% of the time, and sticker camouflage art attacks achieved a 100% targeted misclassification rate.
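The details of RP2 are in the paper, but the core idea behind targeted misclassification attacks in general can be sketched in a few lines. This is a toy, not the researchers' method: a linear "classifier" is nudged across its decision boundary by stepping the input against the gradient's sign (the classic gradient-sign trick); all names and the step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "sign classifier": a positive score means "Stop", negative means "Speed Limit 45".
w = rng.normal(size=100)
x = w / np.linalg.norm(w)  # an input the model confidently scores as "Stop"

def classify(img):
    return "Stop" if w @ img > 0 else "Speed Limit 45"

# Gradient-sign perturbation: for a linear model the score's gradient is just w,
# so stepping against sign(w) pushes the score down as fast as possible per pixel.
# Choose a step just large enough to cross the decision boundary.
eps = 1.5 * (w @ x) / np.abs(w).sum()
adv = x - eps * np.sign(w)

print(classify(x))    # "Stop"
print(classify(adv))  # "Speed Limit 45"
```

A real attack like RP2 additionally has to keep the perturbation printable, confined to sticker-like regions, and robust across viewpoints, which is precisely what makes the physical setting harder than this digital toy.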

Image classifier algorithms tended to see this sign with camouflage art as a 45 mph speed limit or a Lane Ends sign.

The point of all this, of course, is to come up with tougher test cases for the sign classifier algorithms now used in autonomous vehicles. The researchers say that in future work they plan to test their algorithm further under conditions they haven't yet considered, such as sign occlusion.