Title: Uncertainty-aware temporal change detection for satellite imagery
SNIC Project: Berzelius-2021-54
Project Type: LiU Berzelius
Principal Investigator: Heng Fang <hfang@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2021-09-28 – 2022-04-01
Classification: 10207
Keywords:

Abstract

AI for satellite imagery benefits human development and disaster response applications, in line with the United Nations' 17 Sustainable Development Goals (SDGs), in particular SDG 11 (sustainable cities and communities) and SDG 13 (climate action). For SDG 11, transport networks and human settlements are fundamental indicators of sustainable development, yet over 100 countries currently lack the capacity for urban planning and population statistics. Fully automated AI methods can therefore be used for urban extraction from satellite imagery, mapping roads, buildings, agricultural land, and urbanization levels. Moreover, the climate crisis (SDG 13) continues unabated, increasing the likelihood of extreme weather. AI-based Earth-observation big-data analytics can monitor climate-change-induced disasters such as flooding and wildfires in near real time. This project concerns uncertainty-aware temporal change detection for satellite imagery, targeting three main applications: urbanization rate estimation, forest fire detection, and forest fire progression. The project also involves theoretical topics such as uncertainty estimation and out-of-distribution detection. More specifically, we will develop new deep learning techniques, building on recent advances, to solve change detection from satellite imagery; these techniques should also be effective for other temporal data, such as medical imaging and drug discovery (SDG 3: good health and well-being).
The basic idea of the project can be divided into three steps:

- Employ segmentation frameworks to derive a probability map for each image in the satellite time series (per location).
- Use recently popular methods such as contrastive learning or temporal consistency regularization to find the change point of each time series.
- Enforce regularities in the representation space so that the representation is insensitive to irrelevant changes, thereby embedding the change detection method into the overall training architecture.

Compared to natural scene images, satellite images are often multimodal, geolocated, and large in volume; for example, the Sentinel satellites have already acquired about 25 PB of data. In addition, a single satellite scene usually contains many types of objects with different sizes, colors, and locations, so mid- and high-resolution imagery is required. These challenges make our approach computationally expensive, demanding more GPU memory when applying deep learning frameworks. More specifically, this project focuses on the SpaceNet 7 dataset (https://spacenet.ai/sn7-challenge/), which includes 24 images (one per month) for each of ~100 unique geographies. The dataset has a mean ground sample distance (GSD) of 4 m and contains over 10 million individual building annotations. The training times reported by the winners of the challenge range from 15 h to 46 h. Therefore, additional support from Berzelius resources would be a critical boost for our project, and we expect that with the help of Berzelius we can significantly speed up our development and publication.
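The first two steps above can be illustrated with a minimal sketch: given the per-month probability maps produced by a segmentation model for one location, locate the month-to-month transition with the largest change. This is a toy stand-in for the contrastive/temporal-consistency approach (the function name and the synthetic 24-month example are ours, not part of SpaceNet 7):

```python
import numpy as np

def change_point(prob_maps: np.ndarray) -> int:
    """Given a (T, H, W) stack of per-month probability maps for one
    location, return the index t of the largest transition, i.e. the
    change happens between map t and map t + 1."""
    # Mean absolute difference between consecutive monthly maps: shape (T-1,)
    diffs = np.abs(np.diff(prob_maps, axis=0)).mean(axis=(1, 2))
    return int(np.argmax(diffs))

# Toy example mimicking a SpaceNet 7 time series: 24 monthly maps,
# with a new building footprint appearing from month 10 onward.
T, H, W = 24, 8, 8
maps = np.zeros((T, H, W))
maps[10:, 2:5, 2:5] = 0.9

print(change_point(maps))  # -> 9 (change between months 9 and 10)
```

In the actual project this per-pixel difference would be replaced by a distance in the learned representation space, so that irrelevant changes (illumination, seasonal effects) are suppressed by the regularization in step three.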