Article category: Science & Technology
Computer vision (CV) algorithms are used to process data with a two-dimensional spatial structure and an additional dimension holding spectral information. They aim to extract relevant features from the imagery in order to solve complex problems such as image classification, object detection, and segmentation.
Over the past decade, these CV algorithms have become increasingly performant thanks to advances in deep learning, in particular the introduction of new convolutional neural network architectures. The majority of these algorithms were developed and trained on natural images with three-band input imagery. However, they can be extended to multi-band geospatial imagery, either by re-training the models on such datasets or by fine-tuning existing models. In many cases, keeping a fixed, pre-trained backbone architecture and training only the decoder works very well.
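The frozen-backbone idea can be illustrated with a minimal sketch. This is not Descartes Labs code; the "backbone" below is just a fixed random projection standing in for pre-trained convolutional features, and the "decoder" is a linear head, so the mechanics of training only the decoder are easy to see:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Backbone": a fixed projection standing in for frozen, pre-trained
# convolutional features; its weights are never updated.
W_backbone = rng.normal(size=(6, 16))        # 6 input bands -> 16 features

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)   # frozen feature extractor

def train_decoder(X, y, lr=0.01, steps=500):
    """Train only the decoder (here a linear head) on top of frozen
    backbone features, by gradient descent on squared error."""
    F = backbone(X)                          # features use fixed weights
    W = np.zeros(F.shape[1])                 # decoder: the only trainable part
    for _ in range(steps):
        grad = F.T @ (F @ W - y) / len(y)    # gradient of mean squared error
        W -= lr * grad
    return W

# Toy data: 200 "pixels" with 6 spectral bands and a scalar target.
X = rng.normal(size=(200, 6))
y = X[:, 0] - X[:, 1]                        # target depends on the bands

W_dec = train_decoder(X, y)
mse_before = np.mean(y ** 2)
mse_after = np.mean((backbone(X) @ W_dec - y) ** 2)
assert mse_after < mse_before                # the decoder learned something
```

With real imagery the backbone would be a pre-trained network and the decoder a learned upsampling head, but the division of labor is the same: gradients flow only into the decoder weights.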
The choice among these approaches depends on the amount of training data available and the complexity of the problem. Common model architectures for geospatial data are the U-Net or feature pyramid networks (FPN), with a backbone architecture often borrowed from classification tasks, such as ResNet or EfficientNet.
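The defining feature of a U-Net is its symmetric encoder-decoder shape with skip connections. The toy sketch below (an assumption for illustration, not one of our production models) uses average pooling for downsampling and random 1x1 projections in place of learned conv blocks, just to show how the skip connections keep the output on the same pixel grid as the input:

```python
import numpy as np

def avg_pool2(x):
    """Downsample a (H, W, C) array by 2x average pooling."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

def upsample2(x):
    """Upsample a (H, W, C) array by 2x nearest-neighbor."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv1x1(x, out_channels, rng):
    """Stand-in for a learned conv block: random 1x1 projection + ReLU."""
    W = rng.normal(size=(x.shape[-1], out_channels))
    return np.maximum(x @ W, 0.0)

def tiny_unet(x, rng):
    # Encoder: extract features, then reduce spatial resolution.
    e1 = conv1x1(x, 8, rng)                       # (H, W, 8), skip source
    e2 = conv1x1(avg_pool2(e1), 16, rng)          # (H/2, W/2, 16), skip source
    b = conv1x1(avg_pool2(e2), 32, rng)           # (H/4, W/4, 32), bottleneck
    # Decoder: upsample and concatenate the matching encoder features.
    d2 = conv1x1(np.concatenate([upsample2(b), e2], axis=-1), 16, rng)
    d1 = conv1x1(np.concatenate([upsample2(d2), e1], axis=-1), 8, rng)
    # Per-pixel class scores (e.g. cloud / not cloud).
    return conv1x1(d1, 2, rng)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64, 6))    # 6-band input tile
out = tiny_unet(x, rng)
assert out.shape == (64, 64, 2)     # same spatial grid, per-class scores
```

In a real U-Net the 1x1 projections would be stacks of learned 3x3 convolutions, and the encoder half is exactly where a pre-trained ResNet or EfficientNet backbone slots in.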
Developing a model for a computer vision task can be divided into three main parts: preparing the training data, training and evaluating the model, and deploying it for inference.
These steps are essential in most applications. Oftentimes it is desirable to deploy a model quickly and inspect its output interactively. For geospatial data, this entails the ability to view the input imagery on a map, define the area over which to deploy the model, run model inference, and show the model output on the same map, all without writing additional code. In the remainder of this article we describe how this can be accomplished with Descartes Labs tools and illustrate three use cases.
The Descartes Labs tool for quick model deployment is called Workflows, a dynamic compute engine providing common operations that are executed on the fly and visualized on a map. With Workflows, any image data or derived products within the Descartes Labs catalog can be accessed seamlessly. The underlying map functionality is based on ipyleaflet which, together with ipywidgets, allows the user to add interactive components such as buttons, sliders, or draw controls.
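The core pattern behind such a compute engine is deferred evaluation: operations build an expression graph up front, and nothing is computed until a concrete map tile is requested. The sketch below is a toy stand-in (not the Workflows API itself) showing that pattern with a hypothetical `band` helper and an NDVI expression:

```python
import numpy as np

class Lazy:
    """A tiny deferred-computation node: arithmetic builds a graph,
    and nothing runs until .compute() is called for a given tile."""
    def __init__(self, fn):
        self.fn = fn
    def __add__(self, other):
        return Lazy(lambda tile: self.fn(tile) + other.fn(tile))
    def __sub__(self, other):
        return Lazy(lambda tile: self.fn(tile) - other.fn(tile))
    def __truediv__(self, other):
        return Lazy(lambda tile: self.fn(tile) / other.fn(tile))
    def compute(self, tile):
        return self.fn(tile)

def band(name):
    # In a real engine this would fetch the named band for the requested
    # map tile; here a "tile" is just a dict of arrays.
    return Lazy(lambda tile: tile[name])

# Build an NDVI expression once; evaluate it per tile as the map pans.
nir, red = band("nir"), band("red")
ndvi = (nir - red) / (nir + red)

tile = {"nir": np.array([0.6, 0.8]), "red": np.array([0.2, 0.4])}
result = ndvi.compute(tile)
assert np.allclose(result, [0.5, 1 / 3])
```

Because the expression is data-independent, the same graph can be evaluated for every tile in view, which is what makes "pan the map, see the result" interaction cheap.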
These elements are linked to actions such as image retrieval and processing, model inference, and displaying the model output. Much of the image processing is handled by the compute engine and scales easily thanks to task parallelization. Below we illustrate three applications we built for different use cases.
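Task parallelization works because map tiles are independent of one another. As a rough sketch (with a trivial thresholding function standing in for real model inference), splitting an AOI into tiles and fanning inference out over a worker pool looks like this:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_model(tile):
    """Stand-in for model inference on one image tile."""
    return (tile > 0.5).astype(np.uint8)    # e.g. a thresholded mask

def split_tiles(image, size):
    """Cut a (H, W) array into a grid of (size, size) tiles."""
    H, W = image.shape
    return [image[i:i + size, j:j + size]
            for i in range(0, H, size) for j in range(0, W, size)]

rng = np.random.default_rng(0)
image = rng.random((256, 256))              # imagery over the AOI
tiles = split_tiles(image, 64)              # 4 x 4 grid of tiles

# Tiles are independent, so inference parallelizes trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    masks = list(pool.map(run_model, tiles))

assert len(masks) == 16
```

In the real system the fan-out happens server-side across many workers rather than in local threads, but the independence of tiles is what makes both scale.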
This application is used to interactively deploy CV models trained to generate cloud and cloud shadow masks. As different model architectures are trained on a number of datasets, we want to be able to compare the model outputs over the same area of interest. Instead of defining these AOIs beforehand and running model inference for each model iteration, we can use the application to draw an AOI on the map, which triggers image ingest and pre-processing, model inference, and post-processing of the output, including visualization on the map. All models that are being tested run simultaneously and their output is shown as different layers. In the video below, only one model is used first, producing two output layers, one for each class (cloud and cloud shadow). For the second run, we added another model so we can compare their outputs. Both models run simultaneously and produce four output layers. When run in a new location, both models are used automatically.
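The layer bookkeeping behind this comparison is simple: every registered model runs over the same AOI, and each (model, class) pair becomes one named map layer. A minimal sketch, with two hypothetical threshold-based models standing in for the real networks:

```python
import numpy as np

def cloud_model_a(img):
    """Hypothetical model A: returns per-class probability maps."""
    return {"cloud": (img > 0.7).astype(float),
            "shadow": (img < 0.2).astype(float)}

def cloud_model_b(img):
    """Hypothetical model B with a different decision rule."""
    return {"cloud": (img > 0.6).astype(float),
            "shadow": (img < 0.3).astype(float)}

def run_over_aoi(img, models):
    """Run every registered model on the same AOI; each (model, class)
    pair becomes one named map layer, so outputs compare side by side."""
    layers = {}
    for name, model in models.items():
        for cls, mask in model(img).items():
            layers[f"{name}/{cls}"] = mask
    return layers

rng = np.random.default_rng(0)
aoi = rng.random((32, 32))                  # imagery clipped to the drawn AOI
layers = run_over_aoi(aoi, {"model_a": cloud_model_a,
                            "model_b": cloud_model_b})
assert sorted(layers) == ["model_a/cloud", "model_a/shadow",
                          "model_b/cloud", "model_b/shadow"]
```

One model yields two layers, two models yield four, exactly as in the video; adding a third model to the registry would add its layers with no other code changes.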
This application is similar to the cloud detection one above in that it allows the user to define an AOI on the map which triggers the processing. The model, however, is slightly different: it takes two images captured at different times as input and predicts a map of generic change from them. The image pre-processing is a bit more complex in this case, as we compute cloud-free composites over a timeframe of two months. All of this is done dynamically, on the fly, with Workflows. Model inference is then run, and the output is post-processed and displayed on the map.
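The compositing step can be sketched as a per-pixel median over a stack of scenes with cloudy pixels masked out. The change model itself is a neural network in practice; below, a simple difference threshold stands in for it, purely to show the data flow:

```python
import numpy as np

def cloud_free_composite(scenes, cloud_masks):
    """Per-pixel median over a stack of scenes, ignoring cloudy pixels.
    scenes: (T, H, W) reflectance; cloud_masks: (T, H, W) bool, True = cloud."""
    stack = np.where(cloud_masks, np.nan, scenes)
    return np.nanmedian(stack, axis=0)

def change_map(before, after, threshold=0.2):
    """Stand-in for the change model: flag pixels whose composites differ."""
    return np.abs(after - before) > threshold

rng = np.random.default_rng(0)
scenes_t0 = rng.random((6, 16, 16))         # scenes from period 1 (~2 months)
scenes_t1 = rng.random((6, 16, 16))         # scenes from period 2
clouds_t0 = rng.random((6, 16, 16)) > 0.8   # ~20% of pixels flagged cloudy
clouds_t1 = rng.random((6, 16, 16)) > 0.8

before = cloud_free_composite(scenes_t0, clouds_t0)
after = cloud_free_composite(scenes_t1, clouds_t1)
change = change_map(before, after)
assert change.shape == (16, 16)
```

The median makes the composite robust to residual cloud contamination, which matters because any artifact in either composite would otherwise show up as spurious change.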
To test the Segment Anything Model (SAM) and its applicability to geospatial data, we built an application which allows the user to specify particular input imagery, define an AOI, and run the model based on input prompts. The prompts are placed interactively on the map, defining foreground and background examples of the objects of interest. The ability to run the model at the press of a button and immediately see the vectorized output on the map makes this a useful tool for fast testing and exploration.
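SAM's point prompts are pixel coordinates paired with labels, 1 for foreground and 0 for background. The sketch below uses that prompt format with a mock nearest-prompt "predictor" (the real model conditions an image-embedding decoder on the prompts) and a crude bounding-box vectorization step, to show how clicked points become a vector result:

```python
import numpy as np

# SAM-style point prompts: (x, y) pixel coordinates plus a label per
# point, 1 for foreground (the object) and 0 for background.
point_coords = np.array([[20, 20], [22, 24], [5, 5]])
point_labels = np.array([1, 1, 0])

def mock_predict(shape, coords, labels):
    """Stand-in for a SAM predictor: each pixel joins the mask if its
    nearest prompt point is a foreground point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.stack([xx.ravel(), yy.ravel()], axis=1)
    d = np.linalg.norm(pix[:, None, :] - coords[None, :, :], axis=2)
    nearest = labels[d.argmin(axis=1)]
    return nearest.reshape(shape).astype(bool)

def mask_to_bbox(mask):
    """Crude vectorization: the bounding box of the predicted mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

mask = mock_predict((32, 32), point_coords, point_labels)
assert mask[20, 20] and not mask[5, 5]
bbox = mask_to_bbox(mask)
```

In the application, the clicks on the map supply `point_coords` and `point_labels` (in map coordinates reprojected to pixels), and the raster mask is vectorized to polygons rather than a bounding box before being drawn back on the map.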
Many more applications utilizing the Descartes Labs tools are imaginable. They all benefit from easy access to geospatial datasets and the scalability of the compute loads. They also support fast prototyping and exploration, and make model deployment quick and easy.
💡 If you are interested in learning more about Descartes Labs and how our technology can be applied to your goals, reach out to our team.