Deep learning for multi-object tracking
This research project aims to investigate and develop deep learning-based multi-object tracking (MOT) algorithms for different sensors, such as radar, camera and lidar. The goal is to advance the state of the art (SOTA) for a variety of problems that are of key importance to robotics in general and autonomous vehicles in particular. The three research questions this project will address are:
1. How should deep MOT be designed to perform well in the regime where traditional methods are currently considered the default solution? (WP1)
2. What are the remaining pros and cons of deep MOT compared to traditional methods in the model-based regime, i.e., in situations where the assumed models are valid? (WP1)
3. To what extent can architectures for deep MOT be unified for radar, camera and lidar data, and which sensor-specific adjustments are needed to obtain SOTA performance? (WP2)
We will strive to answer these questions for both online and offline MOT, with the ambition to identify principles for handling data association, regression and track management that transfer elegantly across the different domains.
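To make the data-association component concrete, the sketch below shows a minimal greedy nearest-neighbour matcher of the kind that deep MOT methods aim to replace or learn; the function name, the 2D point representation and the gate value are illustrative assumptions, not part of the proposed method.

```python
# Minimal sketch of greedy nearest-neighbour data association,
# one of the MOT components mentioned above. Illustrative only:
# deep MOT would learn this matching rather than hand-code it.

def greedy_associate(tracks, detections, gate=2.0):
    """Match each track to at most one detection by smallest distance.

    tracks, detections: lists of (x, y) positions.
    gate: maximum allowed Euclidean distance for a valid match.
    Returns a list of (track_idx, det_idx) pairs.
    """
    # Collect all gated candidate pairs with their distances.
    pairs = []
    for ti, t in enumerate(tracks):
        for di, d in enumerate(detections):
            dist = ((t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2) ** 0.5
            if dist <= gate:
                pairs.append((dist, ti, di))
    pairs.sort()  # cheapest matches first

    # Greedily accept pairs, using each track and detection once.
    matched_t, matched_d, out = set(), set(), []
    for dist, ti, di in pairs:
        if ti not in matched_t and di not in matched_d:
            out.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
    return out
```

Greedy matching is a common lightweight baseline; more principled alternatives (e.g., Hungarian assignment) solve the same association problem globally.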
We would like to use Berzelius to train deep neural networks for multi-object tracking and related tasks, such as object detection and trajectory estimation. The training will typically be implemented in Python (using PyTorch) and run inside Singularity containers. Most of the networks are likely to take on the order of days to train on a single GPU, but some may take longer or require multiple GPUs. We will mostly use public datasets such as KITTI, nuScenes and ImageNet. During an initial phase of the project, we may also simulate data.
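For concreteness, a Singularity container recipe along the lines below could package the training environment; the base Docker image, package choice and runscript are illustrative assumptions, not fixed decisions for the project.

```
Bootstrap: docker
From: pytorch/pytorch:latest

%post
    # Illustrative extra dependency for working with the nuScenes dataset.
    pip install --no-cache-dir nuscenes-devkit

%runscript
    # Forward arguments to Python, e.g. a training script.
    exec python "$@"
```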