Generative machine learning and amortized optimization
| Title: | Generative machine learning and amortized optimization |
| DNr: | Berzelius-2025-399 |
| Project Type: | LiU Berzelius |
| Principal Investigator: | Jens Sjölund <jens.sjolund@it.uu.se> |
| Affiliation: | Uppsala universitet |
| Duration: | 2025-11-27 – 2026-06-01 |
| Classification: | 10203 |
| Keywords: | |
Abstract
This project requests dedicated GPU compute and storage resources to support an integrated research program in numerical optimization, machine learning, and generative modeling. My group conducts methodological and applied research across three interrelated areas: (i) large-scale numerical optimization and learning-to-optimize methods; (ii) diffusion and flow-based generative models with applications in computer vision; and (iii) Bayesian experimental design with applications in materials science. The requested resources will support the work of several PhD students and postdocs across these areas.
A central research theme is the development of amortized optimization frameworks that leverage graph neural networks to approximate or accelerate iterative solvers. These models require extensive experimentation across architectures, training regimes, and problem classes. Training and evaluating such systems involves repeated solution of high-dimensional optimization problems and large hyperparameter sweeps, necessitating reliable access to modern GPU hardware.
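As a minimal illustration of the amortization idea, the sketch below (a hypothetical toy setup in PyTorch, not the project's actual framework or architectures) trains a small network to map the problem data (A, b) of a random least-squares instance directly to an approximate minimizer, so that a single forward pass stands in for an iterative solver.

```python
# Hypothetical toy example of amortized optimization: a network maps the
# problem data (A, b) of a least-squares instance min_x ||A x - b||^2
# directly to an approximate minimizer, trained on the objective value.
import torch
import torch.nn as nn

m, n = 8, 8  # illustrative problem dimensions
net = nn.Sequential(            # amortization network on flattened problem data
    nn.Linear(m * n + m, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    # Sample a batch of random problem instances.
    A = torch.randn(64, m, n)
    b = torch.randn(64, m)
    x = net(torch.cat([A.flatten(1), b], dim=1))   # predicted minimizer
    residual = torch.einsum("bmn,bn->bm", A, x) - b
    loss = residual.pow(2).sum(dim=1).mean()       # objective value as training loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the project's actual work the amortization network is graph-structured and the problem classes are far larger, which is what drives the need for extensive GPU-based sweeps.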
A second major activity is the study of generative models, including diffusion models and continuous-time normalizing flows. The group applies these models to inverse problems, image restoration, and multimodal representation learning. State-of-the-art implementations are computationally demanding, often requiring long training runs on large datasets and fine-grained ablation studies. Efficient experimentation therefore depends on sustained GPU availability.
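For concreteness, the sketch below (again a hypothetical toy setup with a standard noise-prediction objective, not the project's models or data) shows the kind of training loop that such experiments repeat at scale: a network is trained to predict the noise added to data at a randomly sampled diffusion time.

```python
# Hypothetical toy example of diffusion-model training: a network is trained
# to predict the noise added to data at a randomly sampled time, i.e. the
# standard noise-prediction (DDPM-style) objective.
import torch
import torch.nn as nn

data_dim = 2
eps_theta = nn.Sequential(      # noise-prediction network on (x_t, t)
    nn.Linear(data_dim + 1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
opt = torch.optim.Adam(eps_theta.parameters(), lr=1e-3)

def alpha_bar(t):
    # Simple linear noise schedule, clamped away from zero, for illustration.
    return torch.clamp(1.0 - t, min=1e-4)

for step in range(1000):
    x0 = torch.randn(256, data_dim)       # placeholder "data" samples
    t = torch.rand(256, 1)                # diffusion times in (0, 1)
    eps = torch.randn_like(x0)
    ab = alpha_bar(t)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps   # forward noising step
    pred = eps_theta(torch.cat([xt, t], dim=1))
    loss = (pred - eps).pow(2).mean()               # noise-prediction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaled to image-sized data, deep U-Net or transformer backbones, and many ablation variants, loops of this form account for the bulk of the requested GPU hours.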