Title: Deep Learning for Autonomous Driving
DNr: Berzelius-2024-322
Project Type: LiU Berzelius
Principal Investigator: Lars Hammarstrand <lars.hammarstrand@chalmers.se>
Affiliation: Chalmers tekniska högskola
Duration: 2024-09-01 – 2025-03-01
Classification: 10207
Keywords:

Abstract

Our application is intended to provide GPU access to several of our ongoing research projects. These projects focus on a variety of topics where we seek to apply deep learning to problems related to autonomous driving. We are currently training deep neural networks for 1) neural rendering (NeRFs) of dynamic automotive scenes, 2) multi-object tracking for self-driving vehicles, 3) road estimation for self-driving vehicles, 4) deep learning-based sensor fusion, and 5) end-to-end planning and motion prediction for autonomous driving. This application describes and motivates our need for GPUs within these areas. Common to all these projects is that they aim to address key difficulties in designing an autonomous driving system. Multi-object tracking, road estimation and sensor fusion are all required for vehicles to perceive their surroundings, while end-to-end planning and motion prediction are essential for understanding the intentions of other actors. Many of these problems have previously been addressed with rule-based, handcrafted and model-based solutions. However, with the increase in compute power and available data, deep learning-based methods offer more scalable and robust alternatives. Development of such methods does, however, require access to powerful GPUs.

For neural rendering, the aim is to solve the task of novel-view synthesis in the realm of automotive data. That is, given any, potentially unobserved, viewpoint, generate sensor-realistic camera, lidar and/or radar data. Such systems can be used to create synthetic data for training perception algorithms, or to run AD systems in closed loop, opening the door to massive-scale testing of such systems. However, such neural networks currently require large amounts of compute (~1 GPU hour for training on tens of seconds of data, inference not included).
The computational resources will be shared among ~12 researchers (professors, PhD students, and Master's thesis students supervised by said PhD students) involved in these projects and other related collaborations. The GPUs will be used for training and evaluating neural networks (using PyTorch) for the aforementioned problems.