Blurry no more?! Using Deep Learning to Harmonize Satellite Imagery Across Resolutions
Sep 9, 2019 · 5 min read
If you’ve ever gone online to find your favorite spot on a map and, upon zooming in, been left disappointed because the images were blurry, pixelated, or otherwise not up to par, then keep reading.
I recently decided to explore the topic of resolution improvement as it specifically pertains to satellite images. But before I jump into the nitty-gritty of how such improvements can be made, let’s take a step back and examine the problem itself in a bit more detail.
We all want the sharpest pictures possible. Happily, computer vision researchers have already spent countless hours developing methods for obtaining higher-resolution (sharper) images from lower-resolution (blurry) ones. Thanks to those efforts, tremendous progress has been made — and if you have doubts, just check out all the super-resolution papers or, better yet, the many TV shows with surreal “zoom and enhance” scenes.
When it comes to the world of satellite images, however, resolution improvement calls for a slightly different approach.
Over the years, hundreds of observational satellites have been deployed to space and each satellite (or constellation of satellites) collects data at a particular spatial resolution and temporal frequency — that is, the frequency with which an image of a location is taken. In general, lower spatial resolution satellites tend to have higher temporal resolution.
Satellites such as Landsat 8 collect 9 spectral bands at resolutions between 15 and 30 m, while others such as Sentinel 2 collect 13 spectral bands at 10 to 60 m resolution.
Even among satellites that collect the same bands, the wavelengths for the bands might differ slightly. For instance, Landsat 8’s red band spans 640 to 670 nanometers while Sentinel 2’s red band spans 634 to 696 nanometers. Products from varying satellites are currently combined for analysis. As a result, there exist petabytes of mismatched satellite data, which could be better matched for a more reliable interpretation.
This raises the question: is there a reasonable approach to matching satellite data so that it can be combined for subsequent analysis?
Aside from the very cool aspect of being able to combine any pair of imagery products, successful merging techniques open a whole new world of applications. For one, change detection (deforestation, wetlands, urbanization) over time becomes more feasible and interesting, as higher-spatial resolution imagery (which better highlights changes) would be generated from low-spatial resolution counterparts, while maintaining high temporal frequency.
But revisiting the question of how best to derive higher-resolution images from lower-resolution ones, a similar question can be posed for satellites: how can an image from one satellite be altered so that it can be paired with an image from another?
An initial approach could be to resample an image from Satellite A to the spatial resolution of that from Satellite B. For example, a pan-sharpened Landsat 8 at 15 m could be resampled to 10 m, and paired with a Sentinel 2 image which already exists at 10 m. Below I show a resampled Landsat 8 (left) and a Sentinel 2 (right), both at 10m. The resampled Landsat 8 product, although usable, remains blurrier than the Sentinel 2 image.
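The resampling step can be sketched as follows. This is a minimal illustration using `scipy.ndimage.zoom` on a synthetic single-band array standing in for a pan-sharpened Landsat 8 chip; the array values and sizes are assumptions, and a real workflow would read georeferenced rasters with a library such as rasterio rather than raw arrays.

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic single-band chip standing in for a pan-sharpened
# Landsat 8 image at 15 m resolution (values are placeholders).
landsat_15m = np.random.rand(64, 64).astype(np.float32)

# Resample 15 m -> 10 m: each output pixel covers 10 m instead of 15 m,
# so the image grows by a factor of 15/10 = 1.5 along each axis.
scale = 15 / 10
landsat_10m = zoom(landsat_15m, scale, order=1)  # order=1 -> bilinear

print(landsat_10m.shape)  # (96, 96)
```

Note that bilinear (or bicubic) interpolation only redistributes the existing information across more pixels, which is exactly why the resampled product still looks blurrier than a native 10 m image.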
Though appealingly simple, this approach has two drawbacks: first, resampling does not actually improve the image resolution, and second, because the images come from different satellites, their pixel intensity statistics are natively different due to each satellite’s unique characteristics, such as the wavelengths used for recording intensities. It’s therefore worthwhile to explore other approaches that account for both the blur and the differing satellite characteristics.
Starting with the question of reducing blur, there are two interesting methods to consider. The first is augmenting a resampled image through a post-processing step, and the second is generating an image of a new size given a scaling factor. With the second approach, if a scaling factor of 2 is chosen, an image is generated with twice as many pixels as the original along each dimension, thereby improving the image resolution. The benefit of augmenting a resampled image is that you can leverage any gains obtained through resampling and, with a bit more work, possibly obtain a sharper image product.
Augmenting a resampled product
A Landsat 8 image resampled to 10 m (the resolution of Sentinel 2) has some inherent blur when compared to its Sentinel 2 counterpart (as seen in the images above). I trained a Generative Adversarial Network (GAN) model to generate Sentinel 2 images from blurred Sentinel 2 images, where the blur applied to the original Sentinel 2 image is chosen so that the blurred image resembles a resampled Landsat 8 image. The blue and green circles on the image show some areas where the improvement in resolution from applying the GAN model is quite obvious.
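Constructing such training pairs might look like the following sketch, assuming a Gaussian blur is a reasonable stand-in for the resampling blur; the `sigma` value here is a hypothetical choice, since (as noted below) the amount of blur must be determined empirically.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic stand-in for a sharp Sentinel 2 chip at 10 m resolution.
sentinel_10m = np.random.rand(128, 128).astype(np.float32)

# Blur the sharp chip so it resembles a resampled Landsat 8 product.
# sigma is a hypothetical value; in practice it must be tuned so the
# blurred chips statistically match real resampled Landsat 8 imagery.
sigma = 1.5
blurred_input = gaussian_filter(sentinel_10m, sigma=sigma)

# One training pair for the GAN:
# input = blurred chip, target = original sharp chip.
pair = (blurred_input, sentinel_10m)
```

The GAN then learns the mapping from blurred to sharp, and at inference time a resampled Landsat 8 chip is fed in as if it were a blurred Sentinel 2 chip.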
Having trained a reasonable model to retrieve sharper images from blurred counterparts, the resampled Landsat 8 product can be run through the model to obtain an augmented image. Comparing the Landsat 8 image and the augmented image, we can see some of the gains that the trained GAN provides.
This was a relatively easy method to implement, with observable benefits. Although the generated image tends to be sharper than the resampled Landsat 8 image, the lower-resolution image statistics still differ from those of the higher-resolution image. Another way to think about it is that the sensor characteristics of each satellite are still preserved. That preservation isn’t necessarily a drawback, but it means the generated product cannot be fully paired with the higher-resolution image. Another limitation lies in determining the amount of blur needed to train the model: I had to determine a priori how much blur would mimic a lower-resolution product such as Landsat 8.
Another augmentation approach uses style transfer, which has received a great deal of attention and application within the field of computer vision. For my purposes, I applied a style transfer model for unpaired image translation (more about that process here). Through this model, I was able to generate — from a resampled Landsat 8 product — an augmented image in the style of a Sentinel 2 product, with some resolution gains, as indicated by the blue circles in the image below. Implementing this approach was quite interesting because it required a bit more manipulation (albeit still very doable). One drawback, however, is that training this model was somewhat trickier, hence the pixel saturation when generating clouds (indicated by the orange circles). In addition, I believe the model could benefit from further training to converge and better capture the styles of each satellite.
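The core idea behind unpaired translation models of this kind is cycle consistency: mapping an image into the other satellite’s style and back should recover the original. The toy sketch below substitutes invertible per-pixel affine maps for the two real generator networks, purely to make the objective concrete; none of these functions come from the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generators": simple affine maps standing in for the
# Landsat->Sentinel and Sentinel->Landsat translation networks.
def G_ab(x):  # Landsat-style -> Sentinel-style
    return 1.1 * x + 0.05

def G_ba(x):  # Sentinel-style -> Landsat-style (inverse of G_ab)
    return (x - 0.05) / 1.1

landsat_chip = rng.random((64, 64))

# Cycle-consistency error: translate to the other style and back,
# then compare with the input. Training drives this toward zero
# for both directions of the cycle.
cycle_error = np.abs(G_ba(G_ab(landsat_chip)) - landsat_chip).mean()
```

Because these toy generators exactly invert each other, the cycle error is essentially zero; real generator networks only approximate this, which is what the cycle loss enforces during training.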
Alternative to resampling
The alternative to augmenting a resampled product is to generate a new image at a different resolution from the input image. Thus, for a scaling factor of 2, an input Landsat 8 image (20 m resolution) of 64 x 64 pixels would yield an image of 128 x 128 pixels (64 x 2 = 128). This is comparable to obtaining a Landsat 8 image at 10 m resolution. Here I implemented a subpixel convolution method (more about that here) to generate the images at higher resolution. The benefit of this approach is that I did not have to determine a priori the amount of blur needed to generate a lower resolution product; in terms of scalability, this model is therefore quite appealing. In general, this approach is equipped to add some detail to the image, as seen when comparing the blue and orange circles in the image below. However, sometimes (as is typical of GANs) artifacts appear — in this case, spurious clouds, denoted by the green circles.
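The subpixel-convolution upsampling step itself is a deterministic rearrangement of channels into space, often called pixel shuffle or depth-to-space. Here is a NumPy sketch of that rearrangement, with a random array standing in for the output of the network’s final convolution layer (the sizes are illustrative assumptions):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r) — the
    upsampling step of subpixel convolution."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (c, ry, rx)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (c, h, ry, w, rx)
    return x.reshape(c, h * r, w * r)   # interleave into the spatial grid

# A conv layer would first expand a 1-band 64x64 feature map to
# r*r = 4 channels; random values stand in for that output here.
features = np.random.rand(4, 64, 64)
upscaled = pixel_shuffle(features, r=2)
print(upscaled.shape)  # (1, 128, 128)
```

The appeal of this layer is that all the learning happens in ordinary convolutions at the low resolution; the shuffle itself adds no parameters, it just places each learned channel into its subpixel position.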
As we continue to pursue improved satellite image resolution, a possible worthwhile developmental direction may be the combination of multiple techniques. For example, the style transfer method could be combined with either of the other two approaches to ensure the generated product is not only of higher resolution, but similar in terms of its statistics to the higher resolution source image.
Resolution improvement for satellite images remains highly relevant because satellites are expensive to deploy. An approach that enables closer matching of images from different satellites — thereby decreasing the need to deploy more of them — would provide considerable value to multiple stakeholders.
Simone Fobi is currently completing a PhD in mechanical engineering at Columbia University. As a Descartes Labs intern this summer she worked with Clyde Wheeler on applying GANs to satellite imagery.