Title: Using A Photo to Build 3D Shape Reconstruction of The Cloud Images with Deep Learning
SNIC Project: Berzelius-2021-103
Project Type: LiU Berzelius
Principal Investigator: Lida Huang
Affiliation: Stockholms universitet
Duration: 2021-12-30 – 2022-07-01
Classification: 10207


Introduction: This project is research on Generative Adversarial Networks (GANs) by Lida Huang (Ph.D. student), Mirjam Palosaari Eladhari (Docent), and Sindri Magnússon (Associate Professor) at the Department of Computer and Systems Sciences, Stockholm University. Training generative networks on the RGB channels is the most common approach to image generation. However, a raw image contains more information than the RGB format it is compressed into. Raw image files capture the radiometric characteristics of the scene, for example, physical information about the light intensity and color of the objects. The research question of this proposal is whether these additional data can contribute to recovering the underlying 3D object structures. This project aims to examine a novel method for synthesizing 3D cloud shapes from 2D data. We hypothesize that the dataset's diverse viewpoints and lighting variations make it possible to recover the 3D shapes of various objects. The project will train a GAN on a cloud image dataset in raw format and explore an unsupervised method to reconstruct 3D cloud shapes from single 2D cloud pictures. We have composed a dataset of 6000 raw-format images of clouds in the sky. The traditional method of synthesizing cloud images, in which physical simulation engines render realistic 3D clouds, demands many rendering parameters and adjustments. A simplified 2D-to-3D generation method would, in the future, be helpful for general atmospheric-turbulence visual effects (e.g., clouds, smoke, fire).
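The radiometric motivation for using raw data can be illustrated with a minimal NumPy sketch (illustrative only, not part of the project's pipeline): a simulated 14-bit linear sensor reading resolves far more intensity levels than the 8-bit RGB channel it is typically compressed to, which is exactly the detail lost in bright, subtly graded regions such as cloud highlights.

```python
import numpy as np

# Hypothetical illustration: a 14-bit raw sensor value resolves
# 16384 intensity levels, while an 8-bit RGB channel resolves 256.
rng = np.random.default_rng(0)

# Simulated 14-bit linear sensor readings (e.g., cloud highlights).
raw = rng.integers(0, 2**14, size=10000)

# Compressing to 8-bit RGB quantizes away fine radiometric detail.
rgb = (raw / 2**14 * 255).astype(np.uint8)

# Count distinct intensity levels before and after compression.
print(len(np.unique(raw)))  # thousands of distinct levels
print(len(np.unique(rgb)))  # at most 256 levels
```

Many small gradations in the raw signal collapse onto the same 8-bit value, so a network trained on raw files sees a strictly richer intensity distribution.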
The main contributions of the project are threefold: (1) a novel 3D generation method for authentic cloud images based on deep learning; (2) a straightforward approach to rendering 3D particle-system subjects with amorphous structure and translucence, without complex physical simulation and parameter tuning; (3) a novel dataset of volumetric clouds that preserves the alpha channel to maintain characteristics such as density distribution, silver-lining features, and light flux within the clouds. The dataset will also serve as a future reference for artworks and climate studies.

Results: This project will be compared with other 3D generative methods, even though those methods assume that the target object is mirror-symmetric, whereas a cloud has a random, turbulent contour. The expected result of the project is an authentic 3D cloud image synthesizer whose outputs are more realistic than those reported in the current cloud image literature.
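The mirror-symmetry assumption mentioned above can be sketched in a few lines of NumPy (a hypothetical illustration, not the project's actual model): unsupervised single-image 3D methods often constrain the predicted depth map to agree with its horizontal flip, a constraint that fits symmetric objects like faces but not turbulent cloud contours.

```python
import numpy as np

# Hypothetical predicted depth map (rows = image rows, values = depth).
depth = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

# Symmetrize: average the depth map with its left-right mirror,
# enforcing the bilateral-symmetry prior.
sym_depth = 0.5 * (depth + depth[:, ::-1])

print(sym_depth)  # every row is now left-right symmetric
```

For an asymmetric input like a cloud, this averaging discards real structure, which is why comparing against symmetry-based baselines is a meaningful test for this project.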