Machine learning (ML) helps us find patterns in data using algorithms. By building models that uncover these connections, organisations can make better decisions faster and with less human intervention. From a geospatial perspective, a common application of ML is image analysis in remote sensing. But how do you choose the most suitable ML model for your project?
The first step in selecting your ML model is to define your problem in terms of what the output should look like. The output of an ML model can be a class label per image, a bounding box around an object of interest, or precise class boundaries. The right output depends on your project requirements, your evaluation criteria and the level of detail you need in your results.
For example, if you are required to find the approximate location of tailings dams in your satellite imagery, generating accurate class boundaries would be overkill. You are better off with a bounding box detection or a binary classification approach.
In this Geotech Friday blog, we will look at the different categories of computer vision and image processing tasks and their use cases, all in the context of geospatial and remote sensing.
Image classification models return one class label per image or tile. This approach is best suited when the user only needs to know whether an object is present or absent in an image. In the example below, the user simply wants to know which images contain airplanes and which do not.
The output of an ML model for an image classification task is a probability for each class. The class with the highest probability is assigned as the label for the input image.
For image classification methods, visit: https://paperswithcode.com/task/image-classification
We want to train an ML model to predict the absence or presence of an airplane in an image. A sample output from such a model is shown below. The ML model only tells us whether there is an airplane in the input image or not, so this approach would not be suitable for a scenario where the goal is to count the number of airplanes in an image. The output of an image classification approach is as follows:
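To make the classification output concrete, here is a minimal sketch. The class names and probability values are hypothetical, standing in for the softmax output of a trained classifier on a single image tile:

```python
import numpy as np

def classify(probabilities, class_names):
    """Assign the class with the highest probability to the image."""
    idx = int(np.argmax(probabilities))
    return class_names[idx], float(probabilities[idx])

# Hypothetical two-class output for one image tile
class_names = ["no_airplane", "airplane"]
probs = np.array([0.12, 0.88])  # e.g. softmax output of a classifier

label, confidence = classify(probs, class_names)
print(label, confidence)  # airplane 0.88
```

Note that the model returns one label for the whole tile, which is why counting individual airplanes is out of reach for this approach.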
Object detection methods go one step further than image classification: they output a bounding box around each object of interest, along with a class label for each detected box. The output of an object detection model therefore consists of two components per detection: the bounding box coordinates and the class label (usually with a confidence score).
In the example below, the user wants to know exactly how many airplanes are detected in the images.
For object detection methods, visit: https://paperswithcode.com/task/object-detection
Suppose we want to train an ML model to detect and count the occurrences of a particular object in a satellite image. Object detection would be the most suitable approach for this task. The output of an object detector will look something like this:
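A hypothetical detector output can be sketched as a list of box/label/score records; the box coordinates and scores below are made up for illustration. Counting objects then reduces to filtering detections by class and confidence:

```python
def count_detections(detections, target_class, min_score=0.5):
    """Count detected boxes of a given class above a confidence threshold."""
    return sum(
        1 for d in detections
        if d["label"] == target_class and d["score"] >= min_score
    )

# Hypothetical detector output: one (x_min, y_min, x_max, y_max) box per object
detections = [
    {"box": (34, 80, 110, 150), "label": "airplane", "score": 0.97},
    {"box": (300, 42, 371, 118), "label": "airplane", "score": 0.91},
    {"box": (210, 200, 240, 230), "label": "airplane", "score": 0.32},  # likely false positive
]

print(count_detections(detections, "airplane"))  # 2
```

The confidence threshold matters in practice: lowering it catches more objects at the cost of counting spurious detections.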
Image segmentation assigns each pixel in the input image to its respective class, producing precise class boundaries. It is a form of pixel-level prediction: every pixel in the image is classified into a category.
The output of the ML model in this scenario is a per-class probability map over every pixel. Image segmentation methods suit tasks where the user wants accurate class boundaries or the area covered by a particular class.
For segmentation methods, visit: https://paperswithcode.com/task/semantic-segmentation
Let’s consider a scenario where the user wants to explore the vegetation in an area using SkySat imagery (50 cm resolution) from Planet. The user wants to pinpoint the location of all above-ground vegetation and calculate the total vegetated area. Image classification and object detection approaches cannot fulfil these requirements - image segmentation is the ideal solution for this task. The output of the resulting ML model will look something like this:
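The area calculation follows directly from the segmentation output: take the most likely class per pixel, count the vegetation pixels, and multiply by the ground area of one pixel. The tiny probability map below is a made-up 2×2 example, not real model output:

```python
import numpy as np

def vegetated_area_m2(prob_map, vegetation_class, pixel_size_m=0.5):
    """prob_map: (n_classes, H, W) per-pixel class probabilities.
    Returns the total area (m^2) of pixels whose most likely class
    is the vegetation class."""
    class_map = np.argmax(prob_map, axis=0)      # (H, W) label per pixel
    n_pixels = int(np.sum(class_map == vegetation_class))
    return n_pixels * pixel_size_m ** 2          # 0.5 m pixels -> 0.25 m^2 each

# Toy probability map for a 2x2 tile: class 0 = background, class 1 = vegetation
veg_prob = np.array([[0.9, 0.2],
                     [0.8, 0.1]])
prob_map = np.stack([1 - veg_prob, veg_prob])

area = vegetated_area_m2(prob_map, vegetation_class=1)
print(area)  # 2 vegetation pixels * 0.25 m^2 = 0.5
```

The same pixel-counting idea scales to a full SkySat scene; only the array dimensions change.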
Change detection approaches are used to detect changes between a pair of consecutive images. Depending on the desired output, they can be framed as classification, detection or segmentation tasks.
Unlike the approaches mentioned previously, change detection requires an input image pair or a time series. The output of a change detection approach depends on the user requirements, and these requirements also dictate whether an object detection based approach or image segmentation is used.
For change detection methods, visit: https://paperswithcode.com/task/change-detection-for-remote-sensing-images
We can detect wheat harvesting events in the before and after PlanetScope images using a segmentation-based change detection framework; a segmentation-based approach is needed to accurately delineate the boundaries of a harvested area. An example output for a few harvesting events between November and December 2019 is shown below:
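One simple way to realise such a framework is post-classification differencing: segment each date independently, then flag pixels that were crop before and are no longer crop after. The masks below are toy examples, not real PlanetScope output:

```python
import numpy as np

def harvest_change_mask(mask_before, mask_after, crop_class=1):
    """Flag pixels that were crop in the earlier image and are no
    longer crop in the later one - a simple post-classification
    change-detection rule on two segmentation masks."""
    return (mask_before == crop_class) & (mask_after != crop_class)

# Toy segmentation masks (1 = standing wheat, 0 = bare/other)
before = np.array([[1, 1, 0],
                   [1, 1, 0],
                   [0, 0, 0]])
after = np.array([[0, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])

changed = harvest_change_mask(before, after)
print(int(changed.sum()))  # 3 pixels harvested between the two dates
```

Because both inputs are segmentation masks, the change output inherits their pixel-accurate boundaries, which is exactly why a segmentation-based approach was chosen here.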
Once you have selected the right ML approach for your project, the next step is preparing a quality dataset for it. Regardless of the selected ML approach, here are some guidelines on how to prepare quality training data: https://newsroom.ngis.com.au/using-the-right-dataset-to-train-your-machine-learning-models-properly
Once you have identified your project’s outcomes and prepared a dataset for it, get in touch with the NGIS team to discuss the prospects of using ML for your project.
About the author: Ammar Mahmood
Ammar is a Senior Data Scientist at NGIS Group.