Title: Out-of-Distribution (OoD) Detection with Deep Generative Models (DGMs)
DNr: Berzelius-2024-57
Project Type: LiU Berzelius
Principal Investigator: Yifan Ding <yifan.ding@liu.se>
Affiliation: Linköpings universitet
Duration: 2024-02-12 – 2024-09-01
Classification: 10201


The majority of existing machine learning (ML) models operate under the closed-world assumption: all test data are presumed to originate from the same distribution as the training data, termed in-distribution (ID). This assumption is difficult to uphold in real-world scenarios. In practice, deployed models invariably encounter previously unseen examples that diverge from the training distribution, known as out-of-distribution (OoD) samples, and these encounters pose a significant challenge to the safety of ML models. Recent findings have highlighted difficulties in using deep generative models (DGMs) for OoD detection, showing that DGMs trained on datasets such as CIFAR-10 or Fashion-MNIST unexpectedly assign high likelihood to unrelated SVHN or MNIST samples. Contrary to these findings, our research indicates that, under specific conditions, likelihood scores obtained from DGMs can indeed be reliable. This project aims to investigate this phenomenon through both theoretical and applied research, with a potential application in enhancing medical imaging systems.
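To make the standard likelihood-based detection recipe concrete, the sketch below illustrates the general idea discussed above: fit a density model to in-distribution data, score inputs by their log-likelihood, and flag inputs scoring below a threshold chosen on ID data. This is a hypothetical illustration, not the project's actual method; a simple 1-D Gaussian stands in for a deep generative model, and all function names and parameters are assumptions for the sketch.

```python
import math
import random

# Sketch of likelihood-based OoD detection. A real DGM (e.g. a normalizing
# flow or VAE) is replaced by a 1-D Gaussian fitted to the training data;
# the scoring logic is the same: flag inputs whose log-likelihood falls
# below a threshold calibrated on in-distribution (ID) data.

def fit_gaussian(samples):
    """Fit mean and std to the in-distribution training samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, math.sqrt(var)

def log_likelihood(x, mean, std):
    """Log-density of x under the fitted Gaussian (stand-in for a DGM)."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def ood_detector(train_samples, quantile=0.05):
    """Return a predicate that flags low-likelihood inputs as OoD.

    The threshold is the `quantile`-th quantile of ID log-likelihoods,
    so roughly 5% of ID data is flagged by construction."""
    mean, std = fit_gaussian(train_samples)
    scores = sorted(log_likelihood(x, mean, std) for x in train_samples)
    threshold = scores[int(quantile * len(scores))]
    return lambda x: log_likelihood(x, mean, std) < threshold

random.seed(0)
id_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
is_ood = ood_detector(id_data)

print(is_ood(0.1))  # near the ID mode: not flagged
print(is_ood(8.0))  # far from the training distribution: flagged
```

The failure mode highlighted in the abstract is precisely that, with deep models on images, the analogue of `log_likelihood` can rank some OoD datasets *above* ID data, which is why the reliability conditions this project studies matter.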