Improving and Applying Domain Generalization approaches for High Content Imaging Data
Johan Fredin Haslum <email@example.com>
Kungliga Tekniska högskolan
2023-11-01 – 2024-05-01
High Content Imaging datasets are powerful tools for understanding the complex relationship between treatments and phenotype, enabling better biological understanding of cell systems and driving progress in drug discovery and precision medicine. However, extracting useful information from such datasets is notoriously challenging due to factors such as scale, phenotype complexity, and limited replicate similarity.
Current feature extraction methods uncover only part of the information thought to be contained within such datasets. While deep learning methods from the natural imaging domain have improved significantly in recent years, this progress has not transferred to the High Content Imaging (HCI) domain at the same pace, owing to inherent challenges rarely encountered in standard imaging benchmarks.
One particularly pronounced challenge within HCI data is batch effects, a form of domain shift. Our previous work on the topic introduced a novel way of limiting the impact of such shifts.
The goal of this project is to improve on our previous work on novel Domain Generalization methods for HCI data by leveraging new consistency-based approaches in combination with architectural novelties. We intend to provide an improved strategy for training feature extractors for datasets suffering from domain shift in general and HCI datasets in particular.
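The core consistency idea can be sketched as follows: embeddings of the same treatment observed in different experimental batches should agree, while unrelated samples should not. This is a minimal NumPy illustration under an assumed additive batch-effect model, not the project's actual method; all names and parameters here are hypothetical.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize each embedding row to unit length.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def consistency_loss(emb_a, emb_b):
    # Mean squared distance between normalized embeddings of the
    # same treatments seen in two different batches; a consistency
    # regularizer would minimize this during training.
    za, zb = l2_normalize(emb_a), l2_normalize(emb_b)
    return float(np.mean(np.sum((za - zb) ** 2, axis=1)))

rng = np.random.default_rng(0)
features = rng.normal(size=(16, 32))           # per-treatment phenotype features
batch_shift = rng.normal(scale=0.1, size=32)   # hypothetical additive batch effect
view_a = features                              # treatments imaged in batch A
view_b = features + batch_shift                # same treatments in batch B

loss_matched = consistency_loss(view_a, view_b)
loss_random = consistency_loss(view_a, rng.normal(size=(16, 32)))
print(loss_matched < loss_random)
```

In a real feature extractor the two views would come from replicate wells in different batches, and the loss would be backpropagated through the network so that batch-specific variation is suppressed while treatment-specific signal is retained.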
This is a continuation of a previous Berzelius grant, in which we intend to finish the work started there. We now plan to fully explore the developed method and its application to larger microscopy datasets, with the aim of demonstrating its applicability to relevant drug discovery datasets.