Deep-learning data processing for photon-counting CT
Mats Persson <firstname.lastname@example.org>
Kungliga Tekniska högskolan
2022-02-03 – 2022-09-01
X-ray computed tomography (CT) imaging is a widely used medical imaging modality with around 100 million examinations per year in the US alone, providing healthcare with an important tool for diagnosis and treatment planning for a wide range of diseases such as stroke, cancer, and cardiovascular disease.
The latest improvement of this technology is photon-counting CT, which is based on a new, energy-resolving x-ray detector type that promises lower dose, higher spatial resolution, and improved characterization of tissue composition. A key problem in this field is to develop data processing and image reconstruction algorithms for photon-counting CT that both make optimal use of the measured data and are computationally fast enough to be adopted in clinical use. In recent years, deep-learning-based image processing methods have been shown to be highly effective at improving image quality in conventional CT imaging, in addition to being computationally inexpensive to apply once the training process is complete. In our preliminary results, we have seen drastic improvements in image quality for denoising and artifact-reduction tasks in photon-counting CT.
Our goal is to develop deep-learning-based data processing methods for reducing image artifacts and noise that can be integrated into the image reconstruction process and lead to improved photon-counting CT imaging. We will train convolutional neural networks on simulated CT scans of numerical phantoms generated from anonymized CT image datasets available online, using images corrupted by artifacts and noise as input and the corresponding high-fidelity images as labels. Since the images are anonymized, the data will not contain any protected health information. We will evaluate performance by reconstructing images of patient-like test objects scanned with a prototype photon-counting CT scanner, based on technology developed in our research group, which is in operation at MedTechLabs at Karolinska University Hospital.
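To illustrate the supervised training setup described above, the sketch below generates one (noisy input, clean label) pair by corrupting a high-fidelity image with simulated photon-counting noise. This is a minimal, hypothetical example, not our actual simulation pipeline: the function name, the disc phantom, and the photon-count level are assumptions for illustration, and real training data would come from full CT scan simulations of the numerical phantoms.

```python
import numpy as np

def make_training_pair(clean_img, photons=1e4, rng=None):
    """Simulate a noisy photon-counting measurement of a clean image.

    clean_img: 2D array of attenuation line integrals (unitless).
    photons: mean incident photon count per detector element (illustrative value).
    Returns (noisy_input, clean_label) for supervised denoising training.
    """
    rng = rng or np.random.default_rng(0)
    # Beer-Lambert law: expected transmitted counts behind the object
    expected = photons * np.exp(-clean_img)
    # Photon-counting detectors are well modeled by Poisson statistics
    counts = rng.poisson(expected)
    # Log-normalize back to line integrals; clip to avoid log(0)
    noisy = -np.log(np.maximum(counts, 1) / photons)
    return noisy.astype(np.float32), clean_img.astype(np.float32)

# Example: a simple disc phantom stands in for a high-fidelity label image
yy, xx = np.mgrid[-64:64, -64:64]
phantom = 0.5 * (xx**2 + yy**2 < 40**2).astype(np.float64)
noisy, label = make_training_pair(phantom, photons=1e3)
```

Lowering `photons` increases the Poisson noise level in the input, so the same clean dataset can be reused to emulate a range of dose levels during training.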
We have previously trained neural networks on personal computers and on the Tegner system at KTH PDC. As Tegner is being decommissioned, and as we have found that we need large models requiring more memory (several tens of GB) than fits in the GPU RAM of most graphics cards, the A100 GPUs of Berzelius are well suited for this project. The default allocation will suffice for training efficiently on datasets of up to 40 GB and for evaluating multi-GPU implementations for larger datasets in a limited number of cases. If more training capacity proves necessary for this research project, we may use our experience with the default allocation to apply for a larger allocation in the future.
The expected outcome is a proof of concept demonstrating that incorporating deep-learning image processing steps in the CT image reconstruction can play an important role in improving the diagnostic quality of photon-counting CT. This will be an important step towards our long-term goal of introducing a novel highly accurate photon-counting CT imaging technique in clinical practice, with large potential benefits to human health.