Evaluating the use of contrastive learning for Domain Generalization
||Johan Fredin Haslum <email@example.com>|
||Kungliga Tekniska högskolan|
||2021-10-22 – 2022-02-01|
The goal of this project is to validate our initial results, which suggest that combining elements from Contrastive Learning and Domain Generalization (DG) yields state-of-the-art (SOTA) performance on several DG benchmarks.
Domain Generalization is an important aspect of many machine learning tasks that is often overlooked in standard benchmark datasets, such as ImageNet. The core problem in DG is to learn domain-invariant features, so that models generalize to unseen domains under varying levels of domain shift. Examples include a self-driving car trained on data from one country and deployed in another, or a medical imaging model trained on data from one imaging device and used on another. In both cases the performance of current models degrades, and DG methods focus on mitigating these effects.
In this project we propose a new method for Domain Generalization, building on ideas and methods from recent papers in Self-Supervised Learning. Our initial results indicate that our method can outperform SOTA DG methods.
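For background, the self-supervised objectives we draw on are typically of the NT-Xent (normalized temperature-scaled cross-entropy) form popularised by SimCLR-style contrastive learning. The sketch below is illustrative only, not our actual training code; the temperature value and embedding dimensions are arbitrary choices for the example.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two views of the same
    N samples. Positive pairs are (z1[i], z2[i]); every other embedding
    in the batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    # the positive for row i is row i + n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views agree (embeddings nearly identical), the positive-pair similarity dominates the softmax and the loss is low; for unrelated views it approaches log of the batch size.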
As mentioned, this method has potential impact on a number of important problems: self-driving cars, medical imaging models that transfer between hospitals and imaging devices, and improved transfer learning more broadly. Further, the method has shown promise in data-driven life science problems, such as increasing representational consistency between replicates of cell imaging experiments, potentially allowing for better representational power across experiments and modalities.
This request for resources is focused on generating significant results on three different DG datasets and on ablating the different components of our model. By the end of the project we hope to have sufficient results to publish our method and findings at CVPR (mid-November deadline).
This project is in its final stage; what is missing is the compute necessary to validate the initial results. We already have a PyTorch DDP codebase running in a Docker environment, which we have validated in multi-GPU settings on several V100s. We have also verified that our code runs with minimal tweaks in a Singularity container, so we believe we can quickly and easily utilise the resources available on Berzelius.