Explainable machine learning methods for data-driven risk management
||Marcus Häggbom <email@example.com>|
||Kungliga Tekniska högskolan|
||2022-11-21 – 2023-06-01|
The success of deep learning methods rests largely on their flexibility in adapting to complex data, but they often amount to a black box and require large amounts of data. As a result, pushing research in financial analytics towards explainable and robust learning methods is essential for the industry to adopt deep learning on a large scale.
The goal of this project is to develop and analyze explainable deep learning methods for applications to financial risk management. An initial model has shown promise in terms of generalization in capturing unconditional distributions, but questions remain about overfitting, especially when the amount of training data is limited. This problem can be formulated as a meta-learning problem. The methods that will be studied in this setting are compute-intensive and infeasible to run on a PC. Additional problem formulations include extending the model to conditional distributions. Furthermore, it is important to place this work on explainable methods in context and perform comparisons with various state-of-the-art black-box models.
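To make the meta-learning formulation concrete, the following is a minimal, hypothetical MAML-style sketch in PyTorch. It is not the project's actual model: each "task" here is simply fitting the mean of a small sample, and the meta-parameter is an initialization that adapts well after one gradient step on few data points. The task distribution and learning rates are illustrative assumptions.

```python
import torch

# Hypothetical toy meta-learning setup: tasks are small samples drawn
# around task-specific means; we meta-learn an initialization theta.
torch.manual_seed(0)

def sample_task():
    true_mu = 2.0 + 0.5 * torch.randn(1)       # task-specific parameter
    return true_mu + 0.1 * torch.randn(5)      # only 5 data points per task

theta = torch.zeros(1, requires_grad=True)     # meta-initialization
meta_opt = torch.optim.SGD([theta], lr=0.05)
inner_lr = 0.1

for _ in range(500):
    support = sample_task()
    query = support  # in a real setup, a held-out query set per task
    # Inner step: adapt theta on the task's few samples,
    # keeping the graph so the outer gradient flows through it.
    inner_loss = ((support - theta) ** 2).mean()
    (grad,) = torch.autograd.grad(inner_loss, theta, create_graph=True)
    adapted = theta - inner_lr * grad
    # Outer step: update theta through the inner adaptation.
    meta_opt.zero_grad()
    outer_loss = ((query - adapted) ** 2).mean()
    outer_loss.backward()
    meta_opt.step()
```

After meta-training, `theta` drifts towards an initialization from which one inner step performs well across tasks; this inner/outer loop structure is what makes the approach compute-intensive compared to ordinary training.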
At the end of the requested six-month project period, it is expected that conclusive results on the meta-learning problem will have been produced. Because of the explainable nature of the model, these results will not only shed light on the learning process of the model itself but will also provide knowledge about the data. Additionally, it is expected that experiments will have yielded vital insights into extending the model to conditional distributions.
The research is conducted in Python, using PyTorch with CUDA. The methods rely on automatic differentiation and gradient-based optimization.
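As a minimal illustration of this toolchain (a hypothetical example, not the project's model), the following fits the location parameter of a distribution to synthetic data using PyTorch autograd and a gradient-based optimizer:

```python
import torch

# Toy example: estimate the mean mu of a Gaussian sample by minimizing
# the mean squared error, which autograd differentiates for us.
torch.manual_seed(0)
data = torch.randn(256) + 3.0                  # synthetic sample, true mean 3

mu = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    loss = ((data - mu) ** 2).mean()           # proportional to the Gaussian NLL
    loss.backward()                            # automatic differentiation
    opt.step()                                 # gradient-based update
```

On a CUDA-enabled machine the same code runs on the GPU by moving `data` and `mu` to a `cuda` device; nothing else changes, which is what makes this setup convenient for the compute-intensive experiments described above.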