Diffusion-Based Generative Models for Causal Representation Learning
Stefan Bauer <email@example.com>
Kungliga Tekniska högskolan
2023-05-31 – 2023-12-01
Score-based methods, formulated as stochastic differential equations on a continuous time domain, have recently proven successful as non-adversarial generative models. In particular, they have achieved new state-of-the-art performance on image generation while offering theoretical guarantees.
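To make the SDE formulation concrete, the following is a minimal sketch of reverse-time sampling from a variance-exploding score SDE, integrated with Euler-Maruyama. All specifics here are illustrative assumptions, not the project's actual model: the data distribution is a point mass so the score has a closed form, whereas a real diffusion model would learn the score with a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration only): data is a point mass
# at mu, so the perturbed marginal p_t = N(mu, sigma(t)^2) has an exact
# score. A real score-based model replaces `score` with a trained network.
mu = 2.0
sigma_min, sigma_max = 0.01, 10.0

def sigma(t):
    # Geometric noise schedule of the variance-exploding SDE
    return sigma_min * (sigma_max / sigma_min) ** t

def score(x, t):
    # Exact score of N(mu, sigma(t)^2): d/dx log p_t(x)
    return (mu - x) / sigma(t) ** 2

# Reverse-time SDE, dx = -g(t)^2 * score * dt + g(t) dw (backward in t),
# discretized with Euler-Maruyama from t = 1 down to t = 0
n_steps, n_samples = 1000, 2000
dt = 1.0 / n_steps
x = rng.normal(0.0, sigma_max, size=n_samples)  # start from the prior
for i in range(n_steps, 0, -1):
    t = i * dt
    g2 = sigma(t) ** 2 * 2 * np.log(sigma_max / sigma_min)  # g(t)^2 = d sigma^2/dt
    x = x + g2 * score(x, t) * dt + np.sqrt(g2 * dt) * rng.normal(size=n_samples)

print(x.mean())  # samples concentrate near mu = 2.0
```

With the exact score, the reverse-time integration transports prior samples back onto the data distribution; swapping the analytic score for a learned one is what turns this sketch into a trainable generative model.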
While in previous work we have achieved state-of-the-art results for video prediction and infilling, we aim to extend our understanding of diffusion-based generative models, with a particular focus on causal representation learning and experimental design.
The goal of this project is to leverage diffusion-based generative models to learn meaningful representations that generalize across different but related tasks. Current systems typically rely on training a new representation for each downstream task, which results in higher energy costs for training and leads to poor transfer and generalization between tasks.
We aim to significantly extend our preliminary results and use the compute from the Berzelius cluster to scale the approaches to high-dimensional datasets, run ablations on model architecture, and prepare the work for multiple submissions to NeurIPS, ICML, or ICLR.