Title: Self-supervised learning for satellite images
DNr: Berzelius-2022-33
Project Type: LiU Berzelius
Principal Investigator: Sebastian Gerard <sgerard@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2022-02-15 – 2022-09-01
Classification: 10207
Keywords:

Abstract

Modern satellites produce large amounts of imagery every day. To make these images usable in traditional supervised machine learning, however, experts would need to annotate them, which is expensive and time-consuming. Self-supervised learning can exploit these large amounts of data to pre-train models by automatically creating useful labels without human supervision. In this project, we want to investigate two applications of self-supervised learning for satellite images.

1. While many applications of self-supervised learning to images rely on the standard methods developed for datasets like ImageNet, specialized variants have received more attention in recent years. These methods use domain-specific knowledge and data to improve the self-supervised learning process. We are investigating a possible failure case in one such method, which has been proposed in varying forms in multiple papers. This requires repeated experiments to ascertain the conditions under which the failure occurs. By identifying those conditions, we will be able to give practical advice to practitioners on improving their models' performance. This will be especially useful for change detection methods, but could also impact applications like land cover/land use mapping or cloud detection.

2. An important problem in machine learning is generalization. For satellite images, the question becomes whether a model trained on images of region A can generalize well to images from region B, without having seen any data from region B. We want to investigate whether self-supervised learning can alleviate this problem by including unlabeled images from region B in the self-supervised pre-training of the model. If the experiments confirm our hypothesis, practitioners could easily improve their models' generalization by including unlabeled images of the regions of interest in pre-training, forgoing manual labeling. This would also mitigate the problem that, even when manual labeling is possible, some desired images might not exist. For example, if an area has never flooded before, no images of it while flooded exist. But we can always include pre-flood images of the area in self-supervision, which might help the model generalize to this region.
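The pre-training setup described above can be sketched as follows. This is a minimal toy illustration, not the project's actual pipeline: the augmentation, the flat-vector "images", and the simplified contrastive (NT-Xent-style) loss are all stand-ins for a full self-supervised method such as SimCLR; the key point is only that unlabeled images from both regions are pooled before pre-training.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    # Toy augmentation: additive noise stands in for crops/flips/color jitter.
    return x + 0.05 * rng.standard_normal(x.shape)

def contrastive_loss(z1, z2, temperature=0.5):
    # Simplified NT-Xent: each row pair (z1[i], z2[i]) is a positive pair,
    # all other rows in the batch serve as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                     # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # pull positives together

# Unlabeled "images" (flat vectors here) from region A (labeled downstream)
# and region B (no labels available at all).
region_a = rng.standard_normal((32, 64))
region_b = rng.standard_normal((32, 64))

# Key step: pool both regions into one unlabeled pre-training set,
# so the model sees region B during self-supervised pre-training.
pretrain_pool = np.concatenate([region_a, region_b])

view1 = augment(pretrain_pool, rng)
view2 = augment(pretrain_pool, rng)
loss = contrastive_loss(view1, view2)
```

In a real pipeline, an encoder network would be trained to minimize this loss on the pooled data, then fine-tuned on the labeled data from region A only.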