Learning Training Set Similarities to Improve Uncertainty and Interpretability
DNr: Berzelius-2021-24
Project Type: LiU Berzelius
Principal Investigator: Erik Englesson <engless@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2021-09-01 – 2022-03-01
Classification: 10207
Homepage: https://www.kth.se/profile/engless/
Keywords:

Abstract

The goal of the project is to make deep neural networks (DNNs) more trustworthy. We aim to achieve this by improving DNNs' uncertainty estimates and making their predictions more interpretable. We believe uncertainty estimation and interpretability are key challenges limiting the adoption of DNNs in real-world applications, so it is highly important to develop methods that address them. In this project, we focus on using DNNs for image classification. The key idea is to make DNNs behave more like nearest neighbor classifiers, which are easy to interpret (they provide the K nearest neighbors from the training set for a test example) and are good at detecting out-of-distribution (OOD) samples. We will use PyTorch/TensorFlow to train our models. We aim to publish this work at a top-tier machine learning conference, e.g., ICML, NeurIPS, CVPR, ICLR, or AAAI.
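To illustrate the general idea (not the project's actual method), the following PyTorch sketch treats a network's embedding of an image as the representation for a nearest neighbor classifier: the K nearest training examples serve as an explanation of the prediction, and the mean distance to them serves as a simple OOD/uncertainty signal. The embedding network, data, and function names here are hypothetical placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical embedding network standing in for a trained image classifier's backbone.
embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))

# Toy "training set": 100 images (3x32x32) with labels from 10 classes.
train_x = torch.randn(100, 3, 32, 32)
train_y = torch.randint(0, 10, (100,))

with torch.no_grad():
    train_emb = embed(train_x)  # (100, 128) embeddings of the training set

def knn_predict(x, k=5):
    """Classify x by majority vote over its K nearest training embeddings.

    Returns the predicted class, the indices of the neighbors (for
    interpretability), and the mean neighbor distance (an OOD proxy).
    """
    with torch.no_grad():
        q = embed(x.unsqueeze(0))                     # (1, 128) query embedding
        dists = torch.cdist(q, train_emb).squeeze(0)  # (100,) distances to training set
        knn_dists, knn_idx = dists.topk(k, largest=False)
        pred = train_y[knn_idx].mode().values.item()  # majority vote among neighbors
        return pred, knn_idx.tolist(), knn_dists.mean().item()

test_image = torch.randn(3, 32, 32)
pred, neighbors, mean_dist = knn_predict(test_image, k=5)
print(f"predicted class {pred}, explained by training examples {neighbors}, "
      f"mean neighbor distance {mean_dist:.3f}")
```

A large mean neighbor distance suggests the test example lies far from the training distribution, which is one simple way such a classifier can flag OOD inputs.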