A new deep-learning algorithm studies satellite images taken after wildfires to identify building damage.

How it works: Using satellite images taken before and after the 2017 California wildfires, researchers built a data set of buildings labeled as either damaged or unscathed.
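
In code, such a data set might look like the minimal PyTorch sketch below. The directory layout, file names, and the `BuildingDamageDataset` class are illustrative assumptions, not the researchers' actual release:

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class BuildingDamageDataset(Dataset):
    """Building crops from post-fire satellite imagery, labeled
    damaged (1) or unscathed (0). The one-folder-per-class layout
    is a hypothetical stand-in for the released data set."""

    def __init__(self, root):
        root = Path(root)
        # Assumed layout: root/damaged/*.png and root/unscathed/*.png
        self.samples = [(p, 1) for p in (root / "damaged").glob("*.png")]
        self.samples += [(p, 0) for p in (root / "unscathed").glob("*.png")]
        # Standard ImageNet preprocessing, so the crops match what an
        # ImageNet-pre-trained network expects as input.
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        return self.transform(Image.open(path).convert("RGB")), label
```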

The results: They fine-tuned a neural network pre-trained on ImageNet and got it to spot damaged buildings with an accuracy of up to 85 percent.
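
A minimal sketch of that transfer-learning step, in PyTorch with a ResNet-50 backbone (the article doesn't name the architecture or training details, so those choices and the hyperparameters are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained backbone; ResNet-50 stands in here
# since the article doesn't specify the architecture.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so only the new head trains,
# a common recipe when the labeled data set is small.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1,000-class ImageNet head with a binary one:
# damaged vs. unscathed.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One fine-tuning step on a batch of labeled building crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```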

Why it matters: After a disaster, pinpointing the hardest-hit areas could save lives and help with relief efforts. The researchers also released the data set to the public, which could support other research that relies on satellite images, like conservation and development aid work.