Deep learning methodologies for automatic ECG classification
Abstract
Electrocardiograms (ECGs) are a standard, life-saving test, yet their interpretation and classification still rely on clinicians’ expertise. End-to-end deep learning has been shown to enable an automated ECG classification
pipeline that outperforms medical students (https://www.nature.com/articles/s41467-020-15432-4). Many challenges, however, still need to be addressed.
We focus on three main challenges. The first concerns the applicability of deep learning-based classification pipelines to other types of exams, such as Holter monitoring, where the length of the recorded traces strains traditional deep learning architectures in terms of both computational scalability and the feasibility of real-time classification. We aim to address this challenge by building on the Selective Structured State-Space Model architecture, which offers enhanced scalability to extremely long sequences.
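As a minimal illustration of why this family of models is attractive for very long recordings, the sketch below implements a naive selective state-space recurrence in PyTorch; all shapes, parameter names, and values are illustrative assumptions rather than our final design. The per-step state update gives linear-time processing of a trace, in contrast to the quadratic cost of self-attention; practical implementations replace the Python loop with a parallel scan kernel.

```python
import torch

def selective_ssm_scan(x, A, B, C, delta):
    """
    Naive selective state-space scan (illustrative only).
    x:     (L, D)    input features over L time steps
    A:     (D, N)    state dynamics per channel
    B, C:  (L, D, N) input-dependent ("selective") projections
    delta: (L, D)    input-dependent step sizes
    Returns y of shape (L, D); cost is O(L), vs. O(L^2) for self-attention.
    """
    L, D = x.shape
    N = A.shape[-1]
    h = torch.zeros(D, N)                                 # hidden state
    ys = []
    for t in range(L):                                    # real kernels use a parallel scan
        dA = torch.exp(delta[t].unsqueeze(-1) * A)        # discretized dynamics, (D, N)
        dB = delta[t].unsqueeze(-1) * B[t]                # discretized input map, (D, N)
        h = dA * h + dB * x[t].unsqueeze(-1)              # state update
        ys.append((h * C[t]).sum(-1))                     # readout, (D,)
    return torch.stack(ys)

# Illustrative long trace, e.g. a chunk of a Holter recording after embedding
L, D, N = 20_000, 8, 4
x = torch.randn(L, D)
A = -torch.rand(D, N)                                     # negative values -> stable dynamics
B, C = torch.randn(L, D, N), torch.randn(L, D, N)
delta = 0.1 * torch.rand(L, D)
y = selective_ssm_scan(x, A, B, C, delta)                 # (L, D)
```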
A second compelling problem is the variability of the lead configurations used to record electrocardiograms: wearable devices typically record 1- or 2-lead ECGs, while hospital-grade electrocardiographs generally record 8- or 12-lead ECGs. We aim to develop an architecture capable of handling these different configurations at both the training and inference stages.
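One possible way to handle heterogeneous lead counts, shown below purely as a hypothetical sketch (the class name, layer choices, and hyperparameters are placeholders, not our proposed architecture), is to encode each available lead with a shared per-lead encoder and pool across whichever leads are present using a mask, so that 1-, 2-, 8-, or 12-lead recordings can be processed by the same model.

```python
import torch
import torch.nn as nn

class LeadAgnosticEncoder(nn.Module):
    """Hypothetical sketch: shared per-lead 1D-CNN encoder + masked pooling over leads."""
    def __init__(self, d_model=64, n_classes=5):
        super().__init__()
        self.lead_encoder = nn.Sequential(                 # weights shared across leads
            nn.Conv1d(1, d_model, kernel_size=15, stride=4, padding=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, ecg, lead_mask):
        # ecg:       (B, K, T) with K = max leads, missing leads zero-padded
        # lead_mask: (B, K) booleans marking which leads were actually recorded
        B, K, T = ecg.shape
        z = self.lead_encoder(ecg.reshape(B * K, 1, T)).reshape(B, K, -1)
        m = lead_mask.unsqueeze(-1).float()
        pooled = (z * m).sum(1) / m.sum(1).clamp(min=1.0)  # masked mean over present leads
        return self.head(pooled)

# A 2-lead wearable recording and a 12-lead hospital recording in one batch
model = LeadAgnosticEncoder()
ecg = torch.randn(2, 12, 5000)                  # 10 s at 500 Hz, padded to 12 leads
mask = torch.zeros(2, 12, dtype=torch.bool)
mask[0, :2] = True                              # wearable: two leads only
mask[1, :] = True                               # hospital: all 12 leads
logits = model(ecg, mask)                       # (2, n_classes)
```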
Another relevant aspect of automated ECG classification is generalization, i.e., whether and how well a model pretrained on large patient datasets transfers to other cohorts. The first research question is which of three architecture families (convolutional neural networks, state-space models, and transformers) achieves superior performance on ECG classification tasks and demonstrates stronger transferability. The second is how to enhance transformers so that they can serve as robust foundation models for ECG analysis. To address these questions, we aim to compare the different architectures on the same set of public ECG datasets, investigate their scaling behavior with respect to both model parameters and training data, and rethink the role of tokenization in transformers by proposing novel transformer–tokenizer designs tailored to ECG classification.
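As an example of the kind of tokenization choice to be studied, the hypothetical sketch below splits a raw single-lead ECG into fixed-length patches and projects each patch into a token that is fed to a standard transformer encoder; the patch length, model width, and pooling are illustrative assumptions rather than a proposed design.

```python
import torch
import torch.nn as nn

class PatchTokenizer(nn.Module):
    """Hypothetical sketch: fixed-length patch tokenization of a single-lead ECG."""
    def __init__(self, patch_len=50, d_model=128):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)          # one linear map per patch

    def forward(self, x):
        # x: (B, T) raw signal; T assumed to be a multiple of patch_len here
        B, T = x.shape
        patches = x.reshape(B, T // self.patch_len, self.patch_len)
        return self.proj(patches)                          # (B, num_tokens, d_model)

# Feed the tokens to a standard transformer encoder and pool for classification
tokenizer = PatchTokenizer(patch_len=50, d_model=128)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2,
)
x = torch.randn(8, 5000)                  # 8 ECGs, 10 s at 500 Hz
tokens = tokenizer(x)                     # (8, 100, 128)
features = encoder(tokens).mean(dim=1)    # (8, 128) pooled representation
```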