Using federated learning and synthetic images to facilitate training of deep networks for medical images
Anders Eklund <email@example.com>
2022-10-01 – 2023-04-01
Deep learning is currently revolutionizing many research fields. In computer vision, considerable progress has been made during the last five years, and a crucial resource is the ImageNet database, which contains more than 14 million labeled images that any researcher can use as training data to improve deep-learning-based computer vision methods. Techniques developed in computer vision are rapidly transferred to medical imaging, but a major constraint is that access to medical images is more complicated due to ethics approval and data protection legislation (e.g., the General Data Protection Regulation, GDPR). A number of medical imaging datasets are openly available, but they are much smaller than ImageNet. Health care providers hold records containing vast quantities of medical images, but these records are often not accessible for research due to regulatory hurdles.
In this project, we will therefore develop methods for federated learning and for generating synthetic images, to make it easier to train deep networks for medical image processing.
For synthetic images, we will develop 2D and 3D generative adversarial networks (GANs) to synthesize realistic medical images and volumes. Since synthetic volumes are not attributable to any specific person, our hypothesis is that data protection legislation does not apply and that the data can therefore be shared freely. The generated images and volumes can, for example, be used to train a classifier (e.g., to classify a brain as healthy or diseased) or to train image segmentation algorithms. For segmentation, we will synthesize realistic images or volumes together with the corresponding ground-truth segmentations (annotations).
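To illustrate the adversarial training principle behind a GAN, the sketch below trains a generator against a discriminator on 1-D toy data (a Gaussian standing in for image intensities). This is a minimal illustration under assumed toy settings, not the project's actual 2D/3D medical image architecture; all names, learning rates and the target distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: a 1-D stand-in for medical image intensities, N(3, 1)
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator G(z) = w*z + b maps noise z ~ N(0, 1) to fake samples
w, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(a*x + c) scores real vs. fake
a, c = 0.0, 0.0

lr, batch = 0.02, 128
for step in range(5000):
    x_real = real_batch(batch)
    z = rng.normal(size=batch)
    x_fake = w * z + b

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake))
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step (non-saturating loss): maximize log D(fake)
    d_fake = sigmoid(a * x_fake + c)
    grad_x = (1 - d_fake) * a          # d log D(x) / d x at x_fake
    w += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

# After training, generated samples should be centered near the real mean
samples = w * rng.normal(size=1000) + b
print(float(np.mean(samples)))
```

The same two-player objective carries over to 2D/3D convolutional generators and discriminators; only the network parameterization changes.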
As a parallel track, we will also develop techniques for federated learning, where the sensitive data stays at each hospital. A number of nodes train local models and send weight updates to a master node, which combines the updates and sends new weights back to each node. With this approach no sensitive data leaves any hospital, but new machine learning methods need to be developed.
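The node/master scheme described above can be sketched as federated averaging on a toy problem. In this hypothetical setup, three simulated "hospitals" each hold a private shard of data for a simple linear model; only weights travel to the master, which averages them weighted by local dataset size. The data sizes, learning rate and round counts are illustrative assumptions, not project parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hospitals: each holds a private shard of (x, y) data
# generated from y = 2x + 1 + noise. Sizes are illustrative.
def make_node(n):
    x = rng.uniform(-1, 1, n)
    y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=n)
    return x, y

nodes = [make_node(n) for n in (50, 80, 120)]

def local_update(w, b, x, y, lr=0.1, epochs=5):
    """One node trains on its own data; the raw data never leaves the node."""
    for _ in range(epochs):
        err = w * x + b - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return w, b

# Master: combine local weights, weighted by each node's data size
w, b = 0.0, 0.0
sizes = np.array([len(x) for x, _ in nodes], dtype=float)
for communication_round in range(50):
    updates = [local_update(w, b, x, y) for x, y in nodes]
    w = np.sum(sizes * np.array([u[0] for u in updates])) / sizes.sum()
    b = np.sum(sizes * np.array([u[1] for u in updates])) / sizes.sum()

print(float(w), float(b))  # should approach the true parameters (2, 1)
```

Replacing the two scalars with the weight tensors of a deep network gives the standard federated averaging recipe; the research questions concern how well this works when hospitals have differently distributed data.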
Our European ASSIST project includes both companies and universities, with partners in Sweden, the Netherlands, Turkey and Belgium. The main goal of ASSIST is to develop image processing and deep learning methods for radiotherapy treatment planning, to reduce the time it takes to create a treatment plan. Within ASSIST, we will collaborate on how to use synthetic images and volumes to train deep networks, and publish the results as academic papers.