Title: On internal representations of random neural networks
DNr: Berzelius-2024-259
Project Type: LiU Berzelius
Principal Investigator: Sebastian Mair <sebastian.mair@liu.se>
Affiliation: Uppsala universitet
Duration: 2024-08-05 – 2025-03-01
Classification: 10201
Keywords:

Abstract

One of the long-standing challenges of learning representations with neural networks is to efficiently leverage information from a few examples of a massive data set in order to learn useful representations (https://arxiv.org/abs/2304.02549) with a significantly smaller computational budget. Other recent work of ours (https://arxiv.org/abs/2406.04933) shows that a decent segmentation map can be extracted from a model pre-trained for classification rather than segmentation. These promising preliminary results suggest that a segmentation map might even be extracted from an untrained neural network. In this project, we want to investigate random initializations of neural networks that lead to more data-efficient learning while being useful for downstream tasks such as image segmentation.
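To make the idea concrete, the following is a minimal, hypothetical sketch (not the project's actual method) of how one might probe an untrained network for segmentation: per-pixel features are computed with a single randomly initialized convolutional layer, and those features are clustered with k-means to produce a crude segmentation map. All function names, filter counts, and the toy image are illustrative assumptions.

```python
import numpy as np

def random_conv_features(image, num_filters=8, kernel_size=3, seed=0):
    """Per-pixel features from an untrained (randomly initialized)
    convolutional layer with ReLU. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    k = kernel_size
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")
    filters = rng.standard_normal((num_filters, k, k))
    feats = np.empty((h, w, num_filters))
    for f in range(num_filters):
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + k, j:j + k]
                # ReLU over the filter response at this pixel
                feats[i, j, f] = max((patch * filters[f]).sum(), 0.0)
    return feats

def kmeans_segment(feats, num_segments=2, iters=10, seed=0):
    """Cluster per-pixel features into a segmentation map via plain k-means."""
    rng = np.random.default_rng(seed)
    h, w, d = feats.shape
    x = feats.reshape(-1, d)
    centers = x[rng.choice(len(x), num_segments, replace=False)]
    for _ in range(iters):
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(num_segments):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(0)
    return labels.reshape(h, w)

# Toy image with two flat regions plus noise; no training is involved.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)
seg = kmeans_segment(random_conv_features(img), num_segments=2)
```

The point of the sketch is only that a segmentation-like output can be read off from random features; whether a particular initialization makes such maps useful is exactly the question the project asks.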