Title: Neural representations for image synthesis and analysis
DNr: Berzelius-2026-6
Project Type: LiU Berzelius
Principal Investigator: Jonas Unger <jonas.unger@liu.se>
Affiliation: Linköpings universitet
Duration: 2026-01-09 – 2026-08-01
Classification: 10207
Keywords:

Abstract

Real-time rendering technology underlies interactive 3D visual experiences across all modern computing devices, displays, and output modalities. Industrially and societally important applications range from visualization of complex data, medical imaging, and interaction with AI systems to digital product design and interactive entertainment. This project aims to address the key challenge of reducing the compute requirements of real-time rendering by creating new neural representations and rendering algorithms, and by integrating them into systems capable of synthesizing 3D images using artificial neural networks (ANNs).

Neural graphics covers a wide range of uses of neural representations and compute models in computer graphics and visualization research. In this project we focus in particular on the use of neural models within real-time rendering, where realistic 3D rendering at frame times under 16 ms (60+ frames per second) is typically expected and inference must run at a very high sustained rate. While neural rendering has advanced rapidly in recent years, from novel view synthesis with neural radiance fields (NeRFs) to 3D Gaussian splatting, these techniques work only in limited settings and lack the quality and performance expected of high-end real-time rendering. Many challenges remain on the path to high-performance realistic rendering, from the complexity of physically accurate lighting and material models to extracting peak performance from modern neural accelerators.

The overall goal of this project is to systematically address these challenges and enable next-generation rendering technologies. Our technical approach is centered on sparse neural graphics: a novel combination of established computer graphics principles with state-of-the-art artificial neural networks. The scientific premise is that the massive data and computational complexity inherent in photorealistic rendering, particularly global illumination and complex material models, can be captured by learnable models, and that these processes and data contain significant redundancies that can be exploited for real-time performance. We will build on techniques from light field imaging, neural radiance field rendering, and 3D Gaussian splatting (3DGS), and develop new theory and algorithms that fundamentally reduce computational load by representing scene geometry, shading, and lighting with sparse, learned representations and compact, low-latency ANNs.
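
To make the computational challenge concrete, recall the standard volume rendering formulation used by NeRF-style radiance-field renderers (the textbook equation, not a formulation specific to this project): the color of a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ is

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big),$$

where the density $\sigma$ and view-dependent radiance $\mathbf{c}$ are produced by a learned network. Discretized over tens of samples per ray and millions of rays per frame, this amounts to hundreds of millions of network evaluations per frame, far beyond a 16 ms budget, and it is exactly this redundancy-laden workload that sparse representations and compact networks aim to cut down.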
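
As a complementary illustration of what "compact, low-latency ANNs" means in practice, the following is a minimal PyTorch sketch (all names, sizes, and interfaces are hypothetical, not the project's implementation) of a small shading network and the sustained throughput it would need to meet:

```python
# Hypothetical sketch of a compact shading MLP of the kind referred to above:
# a small network mapping an encoded surface/sample descriptor to outgoing
# RGB radiance. Real-time systems typically run such networks in half
# precision on tensor cores, so width and depth are kept deliberately small.
import torch
import torch.nn as nn

class CompactShadingMLP(nn.Module):
    """Tiny MLP: encoded features -> RGB radiance (assumed interface)."""
    def __init__(self, in_features: int = 32, hidden: int = 64, layers: int = 3):
        super().__init__()
        blocks = [nn.Linear(in_features, hidden), nn.ReLU(inplace=True)]
        for _ in range(layers - 1):
            blocks += [nn.Linear(hidden, hidden), nn.ReLU(inplace=True)]
        blocks += [nn.Linear(hidden, 3)]  # RGB output
        self.net = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A 1080p frame shades ~2.07M samples (1920 * 1080); at 60 fps the network
# must sustain roughly 124M inferences per second, which is why parameter
# count and memory traffic dominate the design of such models.
model = CompactShadingMLP()
if torch.cuda.is_available():
    model = model.half().cuda()
device = next(model.parameters()).device
dtype = next(model.parameters()).dtype

x = torch.randn(1920 * 1080, 32, device=device, dtype=dtype)
with torch.no_grad():
    rgb = model(x)  # (2073600, 3) radiance values for one frame's samples
```

The point of the sketch is the budget, not the architecture: any learned component in the rendering loop must amortize to a few milliseconds per frame, which drives the project's emphasis on sparsity and compactness.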