Fast and interactive deployment of computer vision models with the Descartes Labs platform
Leveraging satellite imagery for machine learning computer vision applications has come a long way. Here we discuss the fundamentals and illustrate applications built on the Descartes Labs platform, including three video examples.
May 9, 2023 · 3 min read
Computer vision (CV) algorithms process data with a two-dimensional spatial structure and an additional dimension holding spectral information. They aim to extract relevant features from the imagery in order to solve complex problems such as image classification, object detection, and segmentation.
Over the past decade, CV algorithms have become increasingly performant due to advances in deep learning, in particular the introduction of new convolutional neural network architectures. The majority of these algorithms were developed and trained on natural images with three-band (RGB) input. However, they can be extended to multi-band geospatial imagery, either by re-training the models on these specific datasets or by fine-tuning existing models. In many cases, using a fixed, pre-trained backbone architecture and training only the decoder works very well.
The choice among these approaches depends on the amount of training data available and the complexity of the problem. Common model architectures for geospatial data are the U-Net and feature pyramid networks (FPN), paired with a backbone often used in classification tasks, such as ResNet or EfficientNet.
Typical workflow for model development
Developing a model related to a computer vision task can be divided into three main parts:
- Curate datasets for model training, validation, and testing. This involves gathering image patches of a specific size over an area of interest (AOI) within a specific date range. The Descartes Labs platform handles such requests efficiently through high-performance compute that retrieves and mosaics image scenes from the respective sources. Depending on the task, image annotation may also be required, which is often accomplished by dedicated external services.
- Model training. Once the datasets have been curated and saved in an appropriate file format such as TFRecords, model training is performed on dedicated hardware such as GPUs or TPUs.
- Model deployment. The trained model is deployed over new areas of interest, re-using many of the image-retrieval functionalities from step 1. This is where the Descartes Labs platform is heavily utilized, as it allows tasks to be scaled in order to process large AOIs.
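The dataset-curation step above can be sketched as a simple tiling problem: splitting an AOI raster into fixed-size patches. The Descartes Labs platform handles tiling and mosaicking server-side; the function below is a minimal, library-free illustration of the idea, with all names and sizes chosen for the example.

```python
# Illustrative sketch of dataset curation: split an AOI raster into
# fixed-size patches for training. Patches at the right and bottom edges
# may be smaller than patch_size; real pipelines typically pad or discard them.

def patch_grid(width, height, patch_size, stride=None):
    """Yield (x, y, w, h) windows covering a width x height raster."""
    stride = stride or patch_size
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            w = min(patch_size, width - x)
            h = min(patch_size, height - y)
            yield (x, y, w, h)

# A 1024 x 768 pixel AOI cut into 512-pixel patches yields a 2 x 2 grid,
# with the bottom row truncated to 256 pixels in height.
patches = list(patch_grid(1024, 768, 512))
```

An overlapping grid (by passing a `stride` smaller than `patch_size`) is a common variation used to reduce edge artifacts at inference time.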
The steps mentioned above are essential in most applications. Oftentimes it is desirable to deploy a model quickly and inspect its output interactively. For geospatial data, this entails the ability to view the input imagery on a map, define the area over which to deploy the model, run model inference, and show the model output on the same map without writing additional code. In the remainder of this post we describe how this can be accomplished with Descartes Labs tools and illustrate three use cases.
Interactive model deployment
The Descartes Labs tool for quick model deployment is called Workflows, a dynamic compute engine that provides common operations executed on the fly and visualized on a map. With Workflows, any image data or derived products within the Descartes Labs catalog can be accessed seamlessly. The underlying map functionality is based on ipyleaflet, which, together with ipywidgets, allows the user to add interactive components such as buttons, sliders, or draw controls.
These elements are linked to actions such as image retrieval and processing, model inference and displaying the model output. Much of the image processing is handled by the compute engine and is easily scalable due to task parallelization. Below we illustrate three applications we built for different use cases.
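In the actual applications, the event source is an ipyleaflet draw control or an ipywidgets button, and the output layers are map tiles. The library-free sketch below illustrates only the wiring pattern described above, with every name and callable invented for the example: a draw event carries an AOI geometry, which triggers image retrieval, model inference, and the addition of a new output layer.

```python
# Minimal sketch of the interactive pattern: drawing an AOI triggers
# retrieval -> inference -> a new output layer. All names are illustrative;
# in the real application the hooks come from ipyleaflet and ipywidgets.

class InteractiveApp:
    def __init__(self, retrieve, model):
        self.retrieve = retrieve   # AOI geometry -> image array
        self.model = model         # image array -> mask
        self.layers = []           # model outputs shown on the "map"

    def on_draw(self, geometry):
        """Callback fired when the user draws an AOI on the map."""
        image = self.retrieve(geometry)
        mask = self.model(image)
        self.layers.append(mask)

# Stand-in retrieval and model: fetch a tiny "image" and threshold it.
app = InteractiveApp(
    retrieve=lambda geom: [[0.1, 0.9]],
    model=lambda img: [[v > 0.5 for v in row] for row in img],
)
app.on_draw({"type": "Polygon"})  # app.layers now holds one binary mask
```

Because the heavy lifting (retrieval and inference) sits behind plain callables, the same wiring scales to the parallelized compute engine without changing the UI code.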
Use case applications
Comparing different cloud detection models
This application is used to interactively deploy CV models trained to generate cloud and cloud shadow masks. As different model architectures are trained on a number of datasets, we want to be able to compare the model outputs over the same area of interest. Instead of defining these AOIs beforehand and running model inference for each model iteration, we can use the application to draw an AOI on the map, which triggers image ingest and pre-processing, model inference, and post-processing of the output, including visualization on the map. All models under test are run simultaneously and their outputs are shown as separate layers. In the video below, only one model is used first, producing two output layers, one for each class (cloud and cloud shadow). For the second run, we add another model so we can compare their outputs. Both models are run simultaneously and produce four output layers. When run in a new location, both models are used automatically.
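The fan-out from models to map layers can be sketched as follows. This is an illustrative stand-in, assuming each model returns one mask per class; the function and model names are invented for the example.

```python
# Sketch: run several cloud / cloud-shadow models over the same AOI image
# and collect one output layer per (model, class) pair, mirroring the
# layer list shown in the application.

def compare_models(image, models, classes=("cloud", "cloud_shadow")):
    """models: mapping of name -> callable returning one mask per class.
    Returns a dict of layers keyed 'model_name/class_name'."""
    layers = {}
    for name, model in models.items():
        masks = model(image)
        for cls, mask in zip(classes, masks):
            layers[f"{name}/{cls}"] = mask
    return layers

# Two stand-in models, each returning a (cloud, cloud_shadow) mask pair.
models = {
    "model_a": lambda img: ([[1, 0]], [[0, 1]]),
    "model_b": lambda img: ([[1, 1]], [[0, 0]]),
}
layers = compare_models(image=[[0.2, 0.7]], models=models)
# Two models x two classes -> four layers, as in the video.
```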
Change detection
This application is similar to the cloud detection one above in that it allows the user to define an AOI on the map, which triggers the processing. The model, however, is slightly different: it takes two images acquired at different times as input and predicts a map of generic change. The image pre-processing is a bit more complex in this case, as we compute cloud-free composites over a timeframe of two months. All of this is done dynamically, on the fly, with Workflows. Model inference is then run, and the output is post-processed and displayed on the map.
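The compositing step can be illustrated with a per-pixel median over the observations not flagged as cloudy. This is a common compositing strategy and a plausible sketch of the idea, not the platform's actual implementation; inputs are toy nested lists rather than real imagery.

```python
# Sketch of cloud-free compositing: for each pixel, take the median of the
# observations in the time stack that are not masked as cloudy.
from statistics import median

def cloudfree_composite(stack, cloud_masks):
    """stack: list of single-band images (nested lists of pixel values);
    cloud_masks: same shape, True where the pixel is cloudy.
    Returns one composited image (None where no clear observation exists)."""
    rows, cols = len(stack[0]), len(stack[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            clear = [img[r][c] for img, m in zip(stack, cloud_masks)
                     if not m[r][c]]
            out[r][c] = median(clear) if clear else None
    return out

# Three observations of a 1 x 2 image; the bright value 200 in the second
# observation is flagged as cloud and excluded from the composite.
stack = [[[10, 50]], [[12, 200]], [[11, 55]]]
masks = [[[False, False]], [[False, True]], [[False, False]]]
composite = cloudfree_composite(stack, masks)
```

The median makes the composite robust to residual misclassified pixels, which matters when the change model compares composites from two different time windows.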
Segment Anything Model
In order to test the Segment Anything Model (SAM) and its applicability to geospatial data, we built an application that allows the user to specify particular input imagery, define an AOI, and run the model based on input prompts. The prompts are interactively placed on the map, defining foreground and background examples of the objects of interest. The ability to run the model with the press of a button and immediately see the vectorized output on the map makes this a useful tool for fast testing and exploration.
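SAM expects point prompts as pixel coordinates with a label of 1 for foreground and 0 for background, so map clicks must be transformed into the image chip's pixel space. The sketch below shows that conversion under a simple north-up affine assumption; the function name and coordinate values are illustrative, not part of the SAM or Descartes Labs APIs.

```python
# Sketch: turn map clicks into SAM-style point prompts. Foreground clicks
# get label 1, background clicks label 0; map coordinates are converted to
# (col, row) pixel positions assuming a north-up image chip.

def clicks_to_prompts(fg_points, bg_points, origin, resolution):
    """origin: (x, y) map coordinates of the chip's top-left corner;
    resolution: map units per pixel. Returns (coords, labels)."""
    coords, labels = [], []
    clicks = [(p, 1) for p in fg_points] + [(p, 0) for p in bg_points]
    for (x, y), label in clicks:
        col = (x - origin[0]) / resolution
        row = (origin[1] - y) / resolution  # map y decreases going down the image
        coords.append((col, row))
        labels.append(label)
    return coords, labels

# One foreground and one background click on a 10 m resolution chip
# whose top-left corner sits at (500000, 4101000) in projected coordinates.
coords, labels = clicks_to_prompts(
    fg_points=[(500120.0, 4100980.0)],
    bg_points=[(500040.0, 4100900.0)],
    origin=(500000.0, 4101000.0),
    resolution=10.0,
)
```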
Imagine the possibilities
Many more applications utilizing the Descartes Labs tools are imaginable. They all benefit from easy access to geospatial datasets and from scalable compute, and they enable fast prototyping and exploration, making model deployment quick and easy.
💡 If you are interested in learning more about Descartes Labs and how our technology can be applied to your goals, reach out to our team.