A new deep-learning algorithm analyzes satellite images taken after fires to identify damaged buildings.

How it works: From satellite images taken before and after the California wildfires of 2017, researchers created a data set of buildings that were either damaged or left unscathed.

The results: They fine-tuned a neural network pre-trained on ImageNet and got it to spot damaged buildings with an accuracy of up to 85 percent.

Why it matters: After a disaster, pinpointing the hardest-hit areas could save lives and help with relief efforts. The researchers also released the data set to the public, which could benefit other research that requires satellite images, such as conservation and development aid work.


Author

Jackie Snow: I am MIT Technology Review’s associate editor for artificial intelligence. I cover stories about where AI is currently, where it’s headed, and what’s wrong with the hype around the technology. I also put together The Algorithm, our daily newsletter on the latest in artificial intelligence. Previously I worked for Fast Company and have been published by the New York Times, National Geographic, Wall Street Journal, and others.

Image: European Space Agency | Flickr

