Title: On internal representations of random neural networks
DNr: Berzelius-2025-79
Project Type: LiU Berzelius
Principal Investigator: Sebastian Mair <sebastian.mair@liu.se>
Affiliation: Linköpings universitet
Duration: 2025-03-01 – 2025-09-01
Classification: 10210
Keywords:

Abstract

One of the long-standing challenges of learning representations with neural networks is to efficiently leverage information from a few examples out of a massive data set in order to learn useful representations (https://arxiv.org/abs/2304.02549) with a significantly smaller computational budget. Our recent works (https://arxiv.org/abs/2406.04933 and https://arxiv.org/abs/2411.19700) show that a decent segmentation map can be extracted from a model pre-trained for classification rather than segmentation. These promising preliminary results suggest the possibility of extracting a segmentation map even from an untrained neural network. In this project, we want to investigate random initializations of neural networks that lead to more data-efficient learning while remaining useful for downstream tasks such as image segmentation.
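To make the core idea concrete, the following is a minimal illustrative sketch, not the project's actual method: per-pixel features are computed with randomly initialized (untrained) ReLU convolution filters, and a segmentation map is obtained by clustering those features with a few k-means iterations. All function names, filter counts, and the toy image are illustrative assumptions.

```python
import numpy as np

def random_conv_features(image, num_filters=8, kernel_size=3, seed=0):
    """Convolve a grayscale image with random filters (edge padding),
    returning a (H, W, num_filters) array of ReLU feature vectors.
    The filters are untrained: they are drawn once from a Gaussian."""
    rng = np.random.default_rng(seed)
    k = kernel_size
    filters = rng.standard_normal((num_filters, k, k)) / k
    padded = np.pad(image, k // 2, mode="edge")
    h, w = image.shape
    feats = np.zeros((h, w, num_filters))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            # ReLU over the responses of all random filters at this pixel.
            feats[i, j] = np.maximum((filters * patch).sum(axis=(1, 2)), 0.0)
    return feats

def kmeans_segment(feats, num_segments=2, iters=10):
    """Cluster per-pixel feature vectors into a segmentation map.
    Centers are initialized by spreading over the feature-norm range,
    which keeps the toy example deterministic."""
    h, w, d = feats.shape
    x = feats.reshape(-1, d)
    order = np.argsort((x ** 2).sum(axis=1))
    idx = order[np.linspace(0, len(x) - 1, num_segments).astype(int)]
    centers = x[idx].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(num_segments):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean(axis=0)
    return labels.reshape(h, w)

# Toy image: a bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
seg = kmeans_segment(random_conv_features(img), num_segments=2)
```

Pixels with identical neighborhoods receive identical feature vectors and therefore identical labels, so even random, untrained filters already induce a coarse grouping of the image; the project asks which random initializations make such representations genuinely useful.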