Non-snowy summertime imagery at left compared to synthetic snowy imagery over the same location at right

Generating realistic but synthetic images to test automated change detection algorithms

By Christopher Ren — Chick Keller Postdoctoral Fellow in the Space Data Science & System Group, Intelligence and Space Research Division (ISR-3) at Los Alamos National Laboratory. Chris’s research interests include anomalous change detection, multi-sensor data fusion, remote sensing, and machine learning.

Background

One of the techniques developed at the Intelligence and Space Research Division at Los Alamos National Laboratory to analyze satellite imagery time series is known as ‘anomalous change detection’ (ACD). In the ACD problem setting, we point the algorithm at a pair of images taken of the same location at different times and ask the question “which pixels or regions of the image have changed in an unusual manner compared to the majority?” This helps distinguish ‘interesting’ changes, such as construction or human activity, from ‘pervasive’ ones that dominate the image, like seasonal changes in vegetation, changes in look angle, illumination, and more.
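To make that framing a bit more concrete, here is a deliberately simplified sketch (my own illustration, not the LANL ACD algorithm itself) that scores each pixel pair by how unusual its joint before/after value is relative to the rest of the scene:

```python
import numpy as np

def anomalousness(img_t1, img_t2):
    """Score each pixel pair by how unusual its joint (before, after) value is
    relative to all pixel pairs in the scene, using a Mahalanobis-style distance
    on the stacked bands. Illustrative only; not the LANL ACD algorithm itself."""
    h, w, b = img_t1.shape
    # Stack the 'before' and 'after' bands for every pixel into one feature vector.
    X = np.concatenate([img_t1, img_t2], axis=-1).reshape(-1, 2 * b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(2 * b)  # regularize for stability
    d = X - mu
    # Squared Mahalanobis distance of each pixel pair from the scene-wide distribution.
    scores = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
    return scores.reshape(h, w)  # high score = unusual joint change
```

Pixels whose before/after relationship differs from the pervasive, scene-wide pattern (e.g., new construction amid a uniform snowfall) receive high scores.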

It was while trying to find ways to synthesize data for this problem that I came up with the idea of using a generative adversarial network (GAN) in an image-to-image translation setting to generate these pervasive changes. The idea seemed a little crazy (and difficult to implement!) at the time, but thanks to the Descartes Labs Platform I got up and running in no time.

The idea behind using the GAN to transform imagery was to generate various types of pervasive changes in a controlled fashion. This would let us create realistic but synthetic changes to images, apply various ACD algorithms to the changed images, and evaluate their effectiveness. For example, simulated snowy images can be used to test algorithms designed to pick up only changes in the built environment.

Here’s how I did it.

Data Acquisition

I first settled on a ‘transformation’ I wanted an algorithm to learn: applying snow to imagery. My thinking was that this would be the easiest transformation to learn as it would present the biggest contrast between the before and after scenes. I’d initially hoped this would also provide the most striking results.

So I proceeded to trawl through news articles in order to find reports of snowfall in time frames covered by Sentinel-2 imagery and then used Viewer to draw bounding boxes across areas of interest from which to sample imagery.

After finding appropriate images using Viewer, I used the Descartes Labs Platform to divide my AOIs into 512 x 512 pixel tiles and then pulled the data using the platform’s Scenes API, making sure to filter by cloud fraction.

Using Viewer to draw bounding boxes across areas of interest from which to sample imagery.
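Here is a rough sketch of what that tiling and data pull can look like with the Scenes API. The AOI, product ID, band names, dates, cloud-fraction threshold, and exact method names below are illustrative (they may differ between client versions) rather than the exact values I used:

```python
import descarteslabs as dl

# Illustrative AOI (lon/lat bounding box); not the exact area used.
aoi = {"type": "Polygon",
       "coordinates": [[[-106.0, 35.5], [-105.5, 35.5],
                        [-105.5, 36.0], [-106.0, 36.0], [-106.0, 35.5]]]}

# Cover the AOI with 512 x 512 pixel tiles at 10 m resolution.
tiles = dl.scenes.DLTile.from_shape(aoi, resolution=10.0, tilesize=512, pad=0)

for tile in tiles:
    # Find Sentinel-2 scenes over this tile, keeping only low-cloud acquisitions.
    scenes, ctx = dl.scenes.search(
        tile,
        products=["sentinel-2:L1C"],
        start_datetime="2019-01-01",
        end_datetime="2019-03-01",
        cloud_fraction=0.1,
    )
    if not scenes:
        continue
    # Mosaic the matching scenes into a single RGB array for this tile.
    rgb = scenes.mosaic("red green blue", ctx)
```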

Ultimately, refining the datasets using simple image histograms enabled me to reduce cross-contamination between the snow and non-snow sets (snow is bright!). Here are some examples of images from the two datasets:

The final separated, cloudless datasets consisted of approximately 1,200 512 x 512 Sentinel-2 images for each category.
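A crude version of that kind of brightness screen might look like the following; the scaling and threshold here are illustrative, not the exact values used to build the datasets:

```python
import numpy as np

def looks_snowy(tile_rgb, brightness_threshold=0.6):
    """Crude screen: flag a tile as 'snowy' if most of its pixels are bright.

    `tile_rgb` is assumed to be a (512, 512, 3) array of reflectances scaled
    to [0, 1]; the threshold is illustrative, tuned by inspecting histograms.
    """
    brightness = tile_rgb.mean(axis=-1)  # per-pixel mean over the RGB bands
    return float(np.median(brightness)) > brightness_threshold
```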

Horses and Zebras

In order to learn the non-snow to snow transformation, I elected to use a CycleGAN. This is a pretty handy architecture in the image-to-image translation domain because it does not require paired inputs, which would likely have resulted in a much smaller dataset. Instead, you feed the GAN unpaired sets of images and it learns the common characteristics of each set and the transformations between them. This is doubly handy because, if the learning procedure is successful, you get two transformations for the price of one!

CycleGAN actually consists of a pair of GANs: one to learn a ‘forward’ transformation and one to learn a ‘backward’ one. Training these two GANs in tandem enables CycleGAN to operate on unpaired sets of images through the cycle-consistency loss. In an analogy to linguistic translation, if a sentence is translated from English to French and then back to English again, cycle-consistency ensures that the round-tripped sentence matches the original. This is illustrated in the image below using the horse-zebra translation from the original CycleGAN paper (if you squint really hard, you can see the resemblance between this translation and the snow/non-snow one!).
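In code, the cycle-consistency term is just an L1 penalty on the round trip. Here is a minimal PyTorch sketch, with identity placeholders standing in for the real generator networks:

```python
import torch
import torch.nn as nn

# Placeholder generators: G_AB maps non-snow -> snow, G_BA maps snow -> non-snow.
# In a real CycleGAN these would be ResNet- or U-Net-style image-to-image networks.
G_AB, G_BA = nn.Identity(), nn.Identity()

l1 = nn.L1Loss()
lambda_cyc = 10.0  # weight on the cycle-consistency term, as in the original paper

def cycle_consistency_loss(real_A, real_B):
    """Translate A->B->A and B->A->B, then penalize deviation from the originals."""
    rec_A = G_BA(G_AB(real_A))   # non-snow -> snow -> non-snow
    rec_B = G_AB(G_BA(real_B))   # snow -> non-snow -> snow
    return lambda_cyc * (l1(rec_A, real_A) + l1(rec_B, real_B))

# Example usage with random stand-in batches of 256 x 256 RGB images.
real_A = torch.rand(1, 3, 256, 256)  # 'non-snow' batch
real_B = torch.rand(1, 3, 256, 256)  # 'snow' batch
loss = cycle_consistency_loss(real_A, real_B)
```

This term is added to the usual adversarial losses of the two GANs, which is what lets the training work on unpaired image sets.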

A composite of several images taken from Zhu, J.Y., Park, T., Isola, P. and Efros, A.A., 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223–2232).

Somewhat surprisingly, the CycleGAN training went off without a hitch; here are some results after about 500 epochs. The left column below shows the original ‘real’ images, and the right column consists of the CycleGAN-transformed images. We can see that the ‘texture’ of the snow transformation has been learned and applied successfully by the GAN!

For details (and more shameless self-promotion), check out the paper here.

The left column above shows the original ‘real’ images, and the right column consists of the CycleGAN transformed images.

Finally, given the success of the CycleGAN, I looked at another seasonal transition. By sampling images over New Mexico, California, and New England, I created a ‘vegetated-drought’ dataset and learned transformations between the two as well. Again, the real images are on the left and the CycleGAN-transformed ones are on the right. The GAN is able to go in either direction, with vegetated-to-drought shown on top and drought-to-vegetated at the bottom. You could argue that these transformations are even more striking than the snow/non-snow ones!

The real ‘forested’ and ‘dry’ images in the left-most column are from Virginia and California respectively. The right column consists of the CycleGAN transformed images.

Examples like these may provide even more learned transformations to test change detection algorithms against. In our SPIE paper we note that the transformation actually introduces some very subtle artifacts. While this is sub-optimal from a change detection perspective, it does provide an interesting avenue as a data synthesis method for the detection of ‘deepfake’ satellite images, which will probably be the direction of future research on these CycleGAN transformations.

That concludes this blog post about using GANs to generate changes in satellite images. I would like to thank Descartes Labs for providing the tools and guidance to make this project a reality!


Thanks Chris! Stay tuned for more featured work by our external users!

Check out our platform page, view our webinar, or drop us a line to discuss how the Descartes Labs Platform can help accelerate your productivity.