Learning Training Set Similarities to Improve Uncertainty and Interpretability
Erik Englesson <firstname.lastname@example.org>
Kungliga Tekniska högskolan
2021-09-01 – 2022-03-01
The goal of this project is to make deep neural networks (DNNs) more trustworthy. We want to achieve this by improving DNNs' uncertainty estimates and making their predictions more interpretable.
We believe uncertainty estimation and interpretability are key challenges limiting the adoption of DNNs in real-world applications. It is therefore highly important to develop methods addressing these challenges.
In this project, we will focus on using DNNs for image classification. The key idea is to make DNNs behave more like nearest-neighbor classifiers, which are easy to interpret (they can present the K nearest training examples for a given test example) and are good at detecting out-of-distribution samples. We will use PyTorch/TensorFlow to train our models.
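To illustrate the nearest-neighbor behavior we are aiming for, here is a minimal, framework-agnostic sketch in NumPy. It is not the project's method; it only shows how a K-nearest-neighbor vote over a feature space yields both a prediction and an explanation (the retrieved training examples). The toy 2-D vectors stand in for DNN feature embeddings, which in practice would come from a trained network.

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feat, k=3):
    """Classify test_feat by majority vote among its k nearest
    training examples (Euclidean distance in feature space).
    Also returns the neighbor indices, which serve as an
    interpretable explanation of the prediction."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    nn_idx = np.argsort(dists)[:k]
    votes = train_labels[nn_idx]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)], nn_idx

# Toy 2-D "features": two well-separated classes.
train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
train_labels = np.array([0, 0, 1, 1])

pred, neighbors = knn_predict(train_feats, train_labels,
                              np.array([0.05, 0.05]), k=3)
print(pred)       # -> 0 (majority of the 3 nearest neighbors)
print(neighbors)  # indices of the supporting training examples
```

The same distances also give a natural out-of-distribution signal: a test point far from all of its nearest training neighbors is likely outside the training distribution.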
We aim to publish this work at a top-tier machine learning conference, e.g., ICML, NeurIPS, CVPR, ICLR, or AAAI.